00:00:00.001 Started by upstream project "autotest-spdk-v24.09-vs-dpdk-v23.11" build number 229 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3730 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.052 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.053 The recommended git tool is: git 00:00:00.053 using credential 00000000-0000-0000-0000-000000000002 00:00:00.054 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.091 Fetching changes from the remote Git repository 00:00:00.093 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.150 Using shallow fetch with depth 1 00:00:00.150 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.150 > git --version # timeout=10 00:00:00.198 > git --version # 'git version 2.39.2' 00:00:00.198 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.255 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.255 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.153 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.166 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.178 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:04.178 > git config core.sparsecheckout # timeout=10 00:00:04.190 > git read-tree -mu HEAD # timeout=10 00:00:04.207 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:04.235 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:04.235 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:04.314 [Pipeline] Start of Pipeline 00:00:04.328 [Pipeline] library 00:00:04.330 Loading library shm_lib@master 00:00:04.330 Library shm_lib@master is cached. Copying from home. 00:00:04.346 [Pipeline] node 00:00:04.367 Running on WFP3 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:04.369 [Pipeline] { 00:00:04.378 [Pipeline] catchError 00:00:04.379 [Pipeline] { 00:00:04.393 [Pipeline] wrap 00:00:04.401 [Pipeline] { 00:00:04.410 [Pipeline] stage 00:00:04.411 [Pipeline] { (Prologue) 00:00:04.609 [Pipeline] sh 00:00:05.448 + logger -p user.info -t JENKINS-CI 00:00:05.476 [Pipeline] echo 00:00:05.477 Node: WFP3 00:00:05.484 [Pipeline] sh 00:00:05.819 [Pipeline] setCustomBuildProperty 00:00:05.829 [Pipeline] echo 00:00:05.831 Cleanup processes 00:00:05.836 [Pipeline] sh 00:00:06.125 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.125 55028 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.137 [Pipeline] sh 00:00:06.423 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.423 ++ grep -v 'sudo pgrep' 00:00:06.423 ++ awk '{print $1}' 00:00:06.423 + sudo kill -9 00:00:06.423 + true 00:00:06.439 [Pipeline] cleanWs 00:00:06.448 [WS-CLEANUP] Deleting project workspace... 00:00:06.448 [WS-CLEANUP] Deferred wipeout is used... 
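The "Cleanup processes" step traced above reduces to a small shell idiom: list everything still running out of the workspace, drop the pgrep invocation itself from the match, and force-kill the remainder while tolerating an empty result. A minimal sketch of that idiom; the ws and pids names are illustrative, and the path is the one from this run:

ws=/var/jenkins/workspace/nvmf-tcp-phy-autotest
# Collect PIDs of stale processes; 'grep -v' removes the pgrep command,
# which would otherwise match its own command line.
pids=$(sudo pgrep -af "$ws/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
# Force-kill leftovers; '|| true' keeps the step green when nothing matched,
# which is what the bare '+ true' in the trace reflects.
sudo kill -9 $pids || true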
00:00:06.461 [WS-CLEANUP] done 00:00:06.466 [Pipeline] setCustomBuildProperty 00:00:06.478 [Pipeline] sh 00:00:06.770 + sudo git config --global --replace-all safe.directory '*' 00:00:06.872 [Pipeline] httpRequest 00:00:09.277 [Pipeline] echo 00:00:09.278 Sorcerer 10.211.164.20 is alive 00:00:09.284 [Pipeline] retry 00:00:09.286 [Pipeline] { 00:00:09.295 [Pipeline] httpRequest 00:00:09.300 HttpMethod: GET 00:00:09.301 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.302 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.306 Response Code: HTTP/1.1 200 OK 00:00:09.306 Success: Status code 200 is in the accepted range: 200,404 00:00:09.307 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.656 [Pipeline] } 00:00:10.672 [Pipeline] // retry 00:00:10.678 [Pipeline] sh 00:00:10.967 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.984 [Pipeline] httpRequest 00:00:11.701 [Pipeline] echo 00:00:11.703 Sorcerer 10.211.164.20 is alive 00:00:11.711 [Pipeline] retry 00:00:11.712 [Pipeline] { 00:00:11.725 [Pipeline] httpRequest 00:00:11.729 HttpMethod: GET 00:00:11.730 URL: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:11.731 Sending request to url: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:11.746 Response Code: HTTP/1.1 200 OK 00:00:11.746 Success: Status code 200 is in the accepted range: 200,404 00:00:11.746 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:02:47.714 [Pipeline] } 00:02:47.730 [Pipeline] // retry 00:02:47.738 [Pipeline] sh 00:02:48.026 + tar --no-same-owner -xf spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:02:50.580 [Pipeline] sh 00:02:50.868 + git -C spdk log --oneline -n5 00:02:50.868 b18e1bd62 version: v24.09.1-pre 00:02:50.868 19524ad45 version: v24.09 00:02:50.868 9756b40a3 dpdk: update submodule to include alarm_cancel fix 00:02:50.868 a808500d2 test/nvmf: disable nvmf_shutdown_tc4 on e810 00:02:50.868 3024272c6 bdev/nvme: take nvme_ctrlr.mutex when setting keys 00:02:50.887 [Pipeline] withCredentials 00:02:50.899 > git --version # timeout=10 00:02:50.914 > git --version # 'git version 2.39.2' 00:02:50.936 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:02:50.938 [Pipeline] { 00:02:50.948 [Pipeline] retry 00:02:50.950 [Pipeline] { 00:02:50.964 [Pipeline] sh 00:02:51.471 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:02:58.062 [Pipeline] } 00:02:58.079 [Pipeline] // retry 00:02:58.085 [Pipeline] } 00:02:58.102 [Pipeline] // withCredentials 00:02:58.110 [Pipeline] httpRequest 00:02:58.567 [Pipeline] echo 00:02:58.569 Sorcerer 10.211.164.20 is alive 00:02:58.578 [Pipeline] retry 00:02:58.580 [Pipeline] { 00:02:58.594 [Pipeline] httpRequest 00:02:58.598 HttpMethod: GET 00:02:58.598 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:02:58.600 Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:02:58.603 Response Code: HTTP/1.1 200 OK 00:02:58.603 Success: Status code 200 is in the accepted range: 200,404 00:02:58.604 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:03:03.992 [Pipeline] 
} 00:03:04.010 [Pipeline] // retry 00:03:04.017 [Pipeline] sh 00:03:04.304 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:03:05.697 [Pipeline] sh 00:03:05.985 + git -C dpdk log --oneline -n5 00:03:05.985 eeb0605f11 version: 23.11.0 00:03:05.985 238778122a doc: update release notes for 23.11 00:03:05.985 46aa6b3cfc doc: fix description of RSS features 00:03:05.985 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:03:05.985 7e421ae345 devtools: support skipping forbid rule check 00:03:05.995 [Pipeline] } 00:03:06.009 [Pipeline] // stage 00:03:06.017 [Pipeline] stage 00:03:06.019 [Pipeline] { (Prepare) 00:03:06.039 [Pipeline] writeFile 00:03:06.054 [Pipeline] sh 00:03:06.342 + logger -p user.info -t JENKINS-CI 00:03:06.355 [Pipeline] sh 00:03:06.641 + logger -p user.info -t JENKINS-CI 00:03:06.654 [Pipeline] sh 00:03:06.941 + cat autorun-spdk.conf 00:03:06.941 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:06.941 SPDK_TEST_NVMF=1 00:03:06.941 SPDK_TEST_NVME_CLI=1 00:03:06.941 SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:06.941 SPDK_TEST_NVMF_NICS=e810 00:03:06.941 SPDK_TEST_VFIOUSER=1 00:03:06.941 SPDK_RUN_UBSAN=1 00:03:06.941 NET_TYPE=phy 00:03:06.941 SPDK_TEST_NATIVE_DPDK=v23.11 00:03:06.941 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:03:06.949 RUN_NIGHTLY=1 00:03:06.953 [Pipeline] readFile 00:03:06.986 [Pipeline] withEnv 00:03:06.988 [Pipeline] { 00:03:07.000 [Pipeline] sh 00:03:07.289 + set -ex 00:03:07.289 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:03:07.289 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:07.289 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:07.289 ++ SPDK_TEST_NVMF=1 00:03:07.289 ++ SPDK_TEST_NVME_CLI=1 00:03:07.289 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:07.289 ++ SPDK_TEST_NVMF_NICS=e810 00:03:07.289 ++ SPDK_TEST_VFIOUSER=1 00:03:07.289 ++ SPDK_RUN_UBSAN=1 00:03:07.289 ++ NET_TYPE=phy 00:03:07.289 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:03:07.289 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:03:07.289 ++ RUN_NIGHTLY=1 00:03:07.289 + case $SPDK_TEST_NVMF_NICS in 00:03:07.289 + DRIVERS=ice 00:03:07.289 + [[ tcp == \r\d\m\a ]] 00:03:07.289 + [[ -n ice ]] 00:03:07.289 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:03:07.289 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:03:07.289 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:03:07.289 rmmod: ERROR: Module i40iw is not currently loaded 00:03:07.289 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:03:07.289 + true 00:03:07.289 + for D in $DRIVERS 00:03:07.289 + sudo modprobe ice 00:03:07.289 + exit 0 00:03:07.299 [Pipeline] } 00:03:07.315 [Pipeline] // withEnv 00:03:07.320 [Pipeline] } 00:03:07.334 [Pipeline] // stage 00:03:07.344 [Pipeline] catchError 00:03:07.346 [Pipeline] { 00:03:07.360 [Pipeline] timeout 00:03:07.361 Timeout set to expire in 1 hr 0 min 00:03:07.362 [Pipeline] { 00:03:07.376 [Pipeline] stage 00:03:07.379 [Pipeline] { (Tests) 00:03:07.393 [Pipeline] sh 00:03:07.683 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:07.683 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:07.683 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:07.683 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:03:07.683 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:07.683 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:03:07.683 + 
[[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:03:07.683 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:03:07.683 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:03:07.683 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:03:07.683 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:03:07.683 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:07.683 + source /etc/os-release 00:03:07.683 ++ NAME='Fedora Linux' 00:03:07.683 ++ VERSION='39 (Cloud Edition)' 00:03:07.683 ++ ID=fedora 00:03:07.683 ++ VERSION_ID=39 00:03:07.683 ++ VERSION_CODENAME= 00:03:07.683 ++ PLATFORM_ID=platform:f39 00:03:07.683 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:03:07.683 ++ ANSI_COLOR='0;38;2;60;110;180' 00:03:07.683 ++ LOGO=fedora-logo-icon 00:03:07.683 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:03:07.683 ++ HOME_URL=https://fedoraproject.org/ 00:03:07.683 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:03:07.683 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:03:07.683 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:03:07.683 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:03:07.683 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:03:07.683 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:03:07.683 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:03:07.683 ++ SUPPORT_END=2024-11-12 00:03:07.683 ++ VARIANT='Cloud Edition' 00:03:07.683 ++ VARIANT_ID=cloud 00:03:07.683 + uname -a 00:03:07.683 Linux spdk-wfp-03 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 05:41:37 UTC 2024 x86_64 GNU/Linux 00:03:07.683 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:10.224 Hugepages 00:03:10.224 node hugesize free / total 00:03:10.224 node0 1048576kB 0 / 0 00:03:10.224 node0 2048kB 0 / 0 00:03:10.224 node1 1048576kB 0 / 0 00:03:10.224 node1 2048kB 0 / 0 00:03:10.224 00:03:10.224 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:10.224 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:10.224 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:10.224 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:10.224 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:10.224 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:10.224 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:10.224 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:10.224 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:10.224 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:10.224 NVMe 0000:5f:00.0 1b96 2600 0 nvme nvme1 nvme1n1 nvme1n2 00:03:10.224 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:10.224 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:10.224 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:10.224 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:10.224 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:10.224 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:10.224 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:10.224 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:10.224 + rm -f /tmp/spdk-ld-path 00:03:10.224 + source autorun-spdk.conf 00:03:10.224 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:10.224 ++ SPDK_TEST_NVMF=1 00:03:10.224 ++ SPDK_TEST_NVME_CLI=1 00:03:10.224 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:10.224 ++ SPDK_TEST_NVMF_NICS=e810 00:03:10.224 ++ SPDK_TEST_VFIOUSER=1 00:03:10.224 ++ SPDK_RUN_UBSAN=1 00:03:10.224 ++ NET_TYPE=phy 00:03:10.224 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:03:10.224 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:03:10.224 ++ RUN_NIGHTLY=1 00:03:10.224 
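autorun-spdk.conf is plain KEY=VALUE shell, which is why the xtrace above can simply source it (once in the withEnv block to pick the NIC driver, and again here before the build). A minimal sketch of the pattern, using the path and values from this run; the lowercase names are illustrative:

conf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
set -ex                                   # echo each command, stop on failure
[[ -f $conf ]] && source "$conf"          # test matrix becomes environment state
case $SPDK_TEST_NVMF_NICS in
    e810) drivers=ice ;;                  # mirrors 'DRIVERS=ice' in the trace
esac
echo "$SPDK_TEST_NVMF_TRANSPORT/$drivers" # -> tcp/ice for this run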
+ (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:03:10.224 + [[ -n '' ]] 00:03:10.224 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:10.224 + for M in /var/spdk/build-*-manifest.txt 00:03:10.224 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:03:10.224 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:03:10.224 + for M in /var/spdk/build-*-manifest.txt 00:03:10.224 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:03:10.224 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:03:10.484 + for M in /var/spdk/build-*-manifest.txt 00:03:10.484 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:03:10.484 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:03:10.484 ++ uname 00:03:10.484 + [[ Linux == \L\i\n\u\x ]] 00:03:10.484 + sudo dmesg -T 00:03:10.484 + sudo dmesg --clear 00:03:10.484 + dmesg_pid=56510 00:03:10.484 + [[ Fedora Linux == FreeBSD ]] 00:03:10.484 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:10.484 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:10.484 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:03:10.484 + sudo dmesg -Tw 00:03:10.484 + [[ -x /usr/src/fio-static/fio ]] 00:03:10.484 + export FIO_BIN=/usr/src/fio-static/fio 00:03:10.484 + FIO_BIN=/usr/src/fio-static/fio 00:03:10.484 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:03:10.484 + [[ ! -v VFIO_QEMU_BIN ]] 00:03:10.484 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:03:10.484 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:10.484 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:10.484 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:03:10.484 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:10.484 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:10.484 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:10.484 Test configuration: 00:03:10.484 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:10.484 SPDK_TEST_NVMF=1 00:03:10.484 SPDK_TEST_NVME_CLI=1 00:03:10.484 SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:10.484 SPDK_TEST_NVMF_NICS=e810 00:03:10.484 SPDK_TEST_VFIOUSER=1 00:03:10.484 SPDK_RUN_UBSAN=1 00:03:10.484 NET_TYPE=phy 00:03:10.484 SPDK_TEST_NATIVE_DPDK=v23.11 00:03:10.484 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:03:10.484 RUN_NIGHTLY=1 12:24:36 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:03:10.484 12:24:36 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:10.484 12:24:36 -- scripts/common.sh@15 -- $ shopt -s extglob 00:03:10.484 12:24:36 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:03:10.484 12:24:36 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:10.484 12:24:36 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:10.484 12:24:36 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:10.484 
12:24:36 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:10.484 12:24:36 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:10.484 12:24:36 -- paths/export.sh@5 -- $ export PATH 00:03:10.484 12:24:36 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:10.484 12:24:36 -- common/autobuild_common.sh@478 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:10.484 12:24:36 -- common/autobuild_common.sh@479 -- $ date +%s 00:03:10.484 12:24:36 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1734348276.XXXXXX 00:03:10.484 12:24:36 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1734348276.oYyWXD 00:03:10.484 12:24:36 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:03:10.484 12:24:36 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']' 00:03:10.484 12:24:36 -- common/autobuild_common.sh@486 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:03:10.484 12:24:36 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:03:10.484 12:24:36 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:03:10.484 12:24:36 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:03:10.484 12:24:36 -- common/autobuild_common.sh@495 -- $ get_config_params 00:03:10.484 12:24:36 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:03:10.484 12:24:36 -- common/autotest_common.sh@10 -- $ set +x 00:03:10.484 12:24:36 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:03:10.484 12:24:36 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:03:10.484 12:24:36 -- pm/common@17 -- $ local monitor 00:03:10.484 12:24:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.484 12:24:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.484 12:24:36 -- pm/common@19 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:03:10.484 12:24:36 -- pm/common@21 -- $ date +%s 00:03:10.484 12:24:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.484 12:24:36 -- pm/common@21 -- $ date +%s 00:03:10.484 12:24:36 -- pm/common@25 -- $ sleep 1 00:03:10.484 12:24:36 -- pm/common@21 -- $ date +%s 00:03:10.484 12:24:36 -- pm/common@21 -- $ date +%s 00:03:10.484 12:24:36 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734348276 00:03:10.485 12:24:36 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734348276 00:03:10.485 12:24:36 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734348276 00:03:10.485 12:24:36 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734348276 00:03:10.744 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734348276_collect-cpu-load.pm.log 00:03:10.744 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734348276_collect-vmstat.pm.log 00:03:10.744 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734348276_collect-cpu-temp.pm.log 00:03:10.744 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734348276_collect-bmc-pm.bmc.pm.log 00:03:11.682 12:24:37 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:03:11.682 12:24:37 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:11.682 12:24:37 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:11.682 12:24:37 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:11.682 12:24:37 -- spdk/autobuild.sh@16 -- $ date -u 00:03:11.682 Mon Dec 16 11:24:37 AM UTC 2024 00:03:11.682 12:24:37 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:11.682 v24.09-1-gb18e1bd62 00:03:11.682 12:24:37 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:03:11.682 12:24:37 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:11.682 12:24:37 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:11.682 12:24:37 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:11.682 12:24:37 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:11.682 12:24:37 -- common/autotest_common.sh@10 -- $ set +x 00:03:11.682 ************************************ 00:03:11.682 START TEST ubsan 00:03:11.682 ************************************ 00:03:11.682 12:24:37 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:03:11.682 using ubsan 00:03:11.682 00:03:11.682 real 0m0.000s 00:03:11.682 user 0m0.000s 00:03:11.682 sys 0m0.000s 00:03:11.682 12:24:37 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:11.682 12:24:37 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:03:11.682 ************************************ 00:03:11.682 END TEST ubsan 00:03:11.682 ************************************ 00:03:11.682 12:24:37 -- spdk/autobuild.sh@27 -- $ '[' 
-n v23.11 ']' 00:03:11.682 12:24:37 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:03:11.682 12:24:37 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:03:11.682 12:24:37 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:03:11.682 12:24:37 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:11.682 12:24:37 -- common/autotest_common.sh@10 -- $ set +x 00:03:11.682 ************************************ 00:03:11.682 START TEST build_native_dpdk 00:03:11.682 ************************************ 00:03:11.682 12:24:37 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:03:11.682 12:24:37 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:03:11.682 12:24:37 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:03:11.682 12:24:37 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:03:11.682 12:24:37 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:03:11.682 12:24:37 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:03:11.682 12:24:37 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:03:11.682 12:24:37 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:03:11.682 12:24:37 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:03:11.682 12:24:37 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:03:11.682 12:24:37 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:03:11.682 12:24:37 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:03:11.682 12:24:37 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:03:11.682 12:24:37 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:03:11.682 12:24:37 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:03:11.682 12:24:37 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:03:11.682 12:24:37 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:03:11.682 12:24:37 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:03:11.682 12:24:37 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:03:11.682 12:24:37 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:11.683 12:24:37 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:03:11.683 eeb0605f11 version: 23.11.0 00:03:11.683 238778122a doc: update release notes for 23.11 00:03:11.683 46aa6b3cfc doc: fix description of RSS features 00:03:11.683 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:03:11.683 7e421ae345 devtools: support skipping forbid rule check 00:03:11.683 12:24:37 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:03:11.683 12:24:37 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:03:11.683 12:24:37 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:03:11.683 12:24:37 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:03:11.683 12:24:37 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:03:11.683 12:24:37 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:03:11.683 12:24:37 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:03:11.683 12:24:37 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:03:11.683 12:24:37 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:03:11.683 12:24:37 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:03:11.683 12:24:37 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:03:11.683 12:24:37 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:03:11.683 12:24:37 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:03:11.683 12:24:37 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:03:11.683 12:24:37 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:03:11.683 12:24:37 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:03:11.683 12:24:37 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:03:11.683 12:24:37 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:03:11.683 12:24:37 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:03:11.683 12:24:37 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:03:11.683 12:24:37 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:03:11.683 12:24:37 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:03:11.683 12:24:37 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:03:11.683 12:24:37 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:03:11.683 12:24:37 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:03:11.683 12:24:37 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:03:11.683 12:24:37 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:03:11.683 12:24:37 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:03:11.683 12:24:37 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:03:11.683 12:24:37 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:03:11.683 12:24:37 
build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:03:11.683 12:24:37 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:03:11.683 12:24:37 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:11.683 12:24:37 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:03:11.683 12:24:37 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:03:11.683 12:24:37 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:03:11.683 12:24:37 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:03:11.683 12:24:37 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:03:11.683 12:24:37 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:03:11.683 12:24:37 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:03:11.683 12:24:37 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:03:11.683 12:24:37 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:03:11.683 12:24:37 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:03:11.683 12:24:37 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:03:11.683 12:24:37 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:03:11.683 12:24:37 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:03:11.683 patching file config/rte_config.h 00:03:11.683 Hunk #1 succeeded at 60 (offset 1 line). 00:03:11.683 12:24:37 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:03:11.683 12:24:37 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:03:11.683 12:24:37 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:03:11.683 12:24:37 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:03:11.683 12:24:37 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:03:11.683 12:24:37 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:03:11.683 12:24:37 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:03:11.683 12:24:37 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:03:11.683 12:24:37 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:03:11.683 12:24:37 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:03:11.683 12:24:37 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:03:11.942 12:24:37 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:03:11.942 12:24:37 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:03:11.942 12:24:37 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:03:11.942 12:24:37 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:03:11.942 12:24:37 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:11.942 12:24:37 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:03:11.942 12:24:37 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:03:11.942 12:24:37 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:03:11.942 12:24:37 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:03:11.942 12:24:37 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:03:11.942 12:24:37 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:03:11.942 12:24:37 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:03:11.942 12:24:37 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:03:11.942 12:24:37 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:03:11.942 12:24:37 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:03:11.942 12:24:37 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:03:11.942 12:24:37 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:03:11.942 12:24:37 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:03:11.942 12:24:37 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:03:11.942 patching file lib/pcapng/rte_pcapng.c 00:03:11.942 12:24:37 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 23.11.0 24.07.0 00:03:11.942 12:24:37 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:03:11.942 12:24:37 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:03:11.942 12:24:37 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:03:11.942 12:24:37 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:03:11.942 12:24:37 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:03:11.942 12:24:37 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:03:11.942 12:24:37 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:03:11.942 12:24:37 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:03:11.942 12:24:37 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:03:11.942 12:24:37 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:03:11.942 12:24:37 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:03:11.942 12:24:37 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:03:11.942 12:24:37 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:03:11.942 12:24:37 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:03:11.942 12:24:37 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:11.942 12:24:37 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:03:11.942 12:24:37 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:03:11.942 12:24:37 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:03:11.942 12:24:37 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:03:11.942 12:24:37 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:03:11.942 12:24:37 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:03:11.942 12:24:37 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:03:11.942 12:24:37 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:03:11.942 12:24:37 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:03:11.942 12:24:37 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:03:11.942 12:24:37 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:03:11.942 12:24:37 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:03:11.942 12:24:37 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:03:11.942 12:24:37 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:03:11.943 12:24:37 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:03:11.943 12:24:37 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:03:11.943 12:24:37 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:03:11.943 12:24:37 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:03:18.512 The Meson build system 00:03:18.512 Version: 1.5.0 00:03:18.512 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:03:18.512 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:03:18.512 Build type: native build 00:03:18.512 Program cat found: YES (/usr/bin/cat) 00:03:18.512 Project name: DPDK 00:03:18.512 Project version: 23.11.0 00:03:18.512 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:18.512 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:18.512 Host machine cpu family: x86_64 00:03:18.512 Host machine cpu: x86_64 00:03:18.512 Message: ## Building in Developer Mode ## 00:03:18.512 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:18.512 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:03:18.512 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:03:18.512 Program python3 found: YES (/usr/bin/python3) 00:03:18.512 Program cat found: YES (/usr/bin/cat) 00:03:18.512 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
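The long xtrace above ('lt 23.11.0 21.11.0', 'lt 23.11.0 24.07.0', 'ge 23.11.0 24.07.0') is scripts/common.sh deciding which DPDK compatibility patches apply before Meson runs. A condensed re-derivation of that comparison, as a sketch rather than the exact source (the real script also normalizes each field through its 'decimal' helper):

cmp_versions() {
    # Split both dotted versions on '.', '-' or ':' and compare numerically,
    # field by field, padding the shorter version with zeros.
    local IFS=.-: op=$2
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$3"
    local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && { [[ $op == '>' || $op == '>=' ]]; return; }
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && { [[ $op == '<' || $op == '<=' ]]; return; }
    done
    [[ $op == *=* ]]   # all fields equal: only ==, >= and <= succeed
}
cmp_versions 23.11.0 '<' 21.11.0;  echo $?   # 1, as traced: skip that branch
cmp_versions 23.11.0 '<' 24.07.0;  echo $?   # 0, so the pcapng patch is applied
cmp_versions 23.11.0 '>=' 24.07.0; echo $?   # 1, matching the final 'return 1'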
00:03:18.512 Compiler for C supports arguments -march=native: YES 00:03:18.512 Checking for size of "void *" : 8 00:03:18.512 Checking for size of "void *" : 8 (cached) 00:03:18.512 Library m found: YES 00:03:18.512 Library numa found: YES 00:03:18.512 Has header "numaif.h" : YES 00:03:18.512 Library fdt found: NO 00:03:18.512 Library execinfo found: NO 00:03:18.512 Has header "execinfo.h" : YES 00:03:18.512 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:18.512 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:18.512 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:18.512 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:18.512 Run-time dependency openssl found: YES 3.1.1 00:03:18.512 Run-time dependency libpcap found: YES 1.10.4 00:03:18.512 Has header "pcap.h" with dependency libpcap: YES 00:03:18.512 Compiler for C supports arguments -Wcast-qual: YES 00:03:18.512 Compiler for C supports arguments -Wdeprecated: YES 00:03:18.512 Compiler for C supports arguments -Wformat: YES 00:03:18.512 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:18.512 Compiler for C supports arguments -Wformat-security: NO 00:03:18.512 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:18.512 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:18.512 Compiler for C supports arguments -Wnested-externs: YES 00:03:18.512 Compiler for C supports arguments -Wold-style-definition: YES 00:03:18.512 Compiler for C supports arguments -Wpointer-arith: YES 00:03:18.512 Compiler for C supports arguments -Wsign-compare: YES 00:03:18.512 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:18.512 Compiler for C supports arguments -Wundef: YES 00:03:18.512 Compiler for C supports arguments -Wwrite-strings: YES 00:03:18.512 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:18.512 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:18.512 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:18.512 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:18.512 Program objdump found: YES (/usr/bin/objdump) 00:03:18.512 Compiler for C supports arguments -mavx512f: YES 00:03:18.512 Checking if "AVX512 checking" compiles: YES 00:03:18.512 Fetching value of define "__SSE4_2__" : 1 00:03:18.512 Fetching value of define "__AES__" : 1 00:03:18.512 Fetching value of define "__AVX__" : 1 00:03:18.512 Fetching value of define "__AVX2__" : 1 00:03:18.512 Fetching value of define "__AVX512BW__" : 1 00:03:18.512 Fetching value of define "__AVX512CD__" : 1 00:03:18.512 Fetching value of define "__AVX512DQ__" : 1 00:03:18.512 Fetching value of define "__AVX512F__" : 1 00:03:18.512 Fetching value of define "__AVX512VL__" : 1 00:03:18.512 Fetching value of define "__PCLMUL__" : 1 00:03:18.512 Fetching value of define "__RDRND__" : 1 00:03:18.512 Fetching value of define "__RDSEED__" : 1 00:03:18.512 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:18.512 Fetching value of define "__znver1__" : (undefined) 00:03:18.512 Fetching value of define "__znver2__" : (undefined) 00:03:18.512 Fetching value of define "__znver3__" : (undefined) 00:03:18.512 Fetching value of define "__znver4__" : (undefined) 00:03:18.512 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:18.512 Message: lib/log: Defining dependency "log" 00:03:18.512 Message: lib/kvargs: Defining dependency "kvargs" 00:03:18.512 Message: lib/telemetry: Defining dependency 
"telemetry" 00:03:18.512 Checking for function "getentropy" : NO 00:03:18.512 Message: lib/eal: Defining dependency "eal" 00:03:18.512 Message: lib/ring: Defining dependency "ring" 00:03:18.512 Message: lib/rcu: Defining dependency "rcu" 00:03:18.512 Message: lib/mempool: Defining dependency "mempool" 00:03:18.512 Message: lib/mbuf: Defining dependency "mbuf" 00:03:18.512 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:18.512 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:18.512 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:18.512 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:18.512 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:18.512 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:03:18.513 Compiler for C supports arguments -mpclmul: YES 00:03:18.513 Compiler for C supports arguments -maes: YES 00:03:18.513 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:18.513 Compiler for C supports arguments -mavx512bw: YES 00:03:18.513 Compiler for C supports arguments -mavx512dq: YES 00:03:18.513 Compiler for C supports arguments -mavx512vl: YES 00:03:18.513 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:18.513 Compiler for C supports arguments -mavx2: YES 00:03:18.513 Compiler for C supports arguments -mavx: YES 00:03:18.513 Message: lib/net: Defining dependency "net" 00:03:18.513 Message: lib/meter: Defining dependency "meter" 00:03:18.513 Message: lib/ethdev: Defining dependency "ethdev" 00:03:18.513 Message: lib/pci: Defining dependency "pci" 00:03:18.513 Message: lib/cmdline: Defining dependency "cmdline" 00:03:18.513 Message: lib/metrics: Defining dependency "metrics" 00:03:18.513 Message: lib/hash: Defining dependency "hash" 00:03:18.513 Message: lib/timer: Defining dependency "timer" 00:03:18.513 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:18.513 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:18.513 Fetching value of define "__AVX512CD__" : 1 (cached) 00:03:18.513 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:18.513 Message: lib/acl: Defining dependency "acl" 00:03:18.513 Message: lib/bbdev: Defining dependency "bbdev" 00:03:18.513 Message: lib/bitratestats: Defining dependency "bitratestats" 00:03:18.513 Run-time dependency libelf found: YES 0.191 00:03:18.513 Message: lib/bpf: Defining dependency "bpf" 00:03:18.513 Message: lib/cfgfile: Defining dependency "cfgfile" 00:03:18.513 Message: lib/compressdev: Defining dependency "compressdev" 00:03:18.513 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:18.513 Message: lib/distributor: Defining dependency "distributor" 00:03:18.513 Message: lib/dmadev: Defining dependency "dmadev" 00:03:18.513 Message: lib/efd: Defining dependency "efd" 00:03:18.513 Message: lib/eventdev: Defining dependency "eventdev" 00:03:18.513 Message: lib/dispatcher: Defining dependency "dispatcher" 00:03:18.513 Message: lib/gpudev: Defining dependency "gpudev" 00:03:18.513 Message: lib/gro: Defining dependency "gro" 00:03:18.513 Message: lib/gso: Defining dependency "gso" 00:03:18.513 Message: lib/ip_frag: Defining dependency "ip_frag" 00:03:18.513 Message: lib/jobstats: Defining dependency "jobstats" 00:03:18.513 Message: lib/latencystats: Defining dependency "latencystats" 00:03:18.513 Message: lib/lpm: Defining dependency "lpm" 00:03:18.513 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:18.513 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:18.513 Fetching value of define "__AVX512IFMA__" : 
(undefined) 00:03:18.513 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:03:18.513 Message: lib/member: Defining dependency "member" 00:03:18.513 Message: lib/pcapng: Defining dependency "pcapng" 00:03:18.513 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:18.513 Message: lib/power: Defining dependency "power" 00:03:18.513 Message: lib/rawdev: Defining dependency "rawdev" 00:03:18.513 Message: lib/regexdev: Defining dependency "regexdev" 00:03:18.513 Message: lib/mldev: Defining dependency "mldev" 00:03:18.513 Message: lib/rib: Defining dependency "rib" 00:03:18.513 Message: lib/reorder: Defining dependency "reorder" 00:03:18.513 Message: lib/sched: Defining dependency "sched" 00:03:18.513 Message: lib/security: Defining dependency "security" 00:03:18.513 Message: lib/stack: Defining dependency "stack" 00:03:18.513 Has header "linux/userfaultfd.h" : YES 00:03:18.513 Has header "linux/vduse.h" : YES 00:03:18.513 Message: lib/vhost: Defining dependency "vhost" 00:03:18.513 Message: lib/ipsec: Defining dependency "ipsec" 00:03:18.513 Message: lib/pdcp: Defining dependency "pdcp" 00:03:18.513 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:18.513 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:18.513 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:18.513 Message: lib/fib: Defining dependency "fib" 00:03:18.513 Message: lib/port: Defining dependency "port" 00:03:18.513 Message: lib/pdump: Defining dependency "pdump" 00:03:18.513 Message: lib/table: Defining dependency "table" 00:03:18.513 Message: lib/pipeline: Defining dependency "pipeline" 00:03:18.513 Message: lib/graph: Defining dependency "graph" 00:03:18.513 Message: lib/node: Defining dependency "node" 00:03:18.513 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:19.109 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:19.109 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:19.109 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:19.109 Compiler for C supports arguments -Wno-sign-compare: YES 00:03:19.109 Compiler for C supports arguments -Wno-unused-value: YES 00:03:19.109 Compiler for C supports arguments -Wno-format: YES 00:03:19.109 Compiler for C supports arguments -Wno-format-security: YES 00:03:19.109 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:03:19.109 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:19.109 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:03:19.109 Compiler for C supports arguments -Wno-unused-parameter: YES 00:03:19.109 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:19.109 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:19.109 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:19.109 Compiler for C supports arguments -mavx512bw: YES (cached) 00:03:19.109 Compiler for C supports arguments -march=skylake-avx512: YES 00:03:19.109 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:03:19.109 Has header "sys/epoll.h" : YES 00:03:19.109 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:19.109 Configuring doxy-api-html.conf using configuration 00:03:19.109 Configuring doxy-api-man.conf using configuration 00:03:19.109 Program mandb found: YES (/usr/bin/mandb) 00:03:19.109 Program sphinx-build found: NO 00:03:19.109 Configuring rte_build_config.h using configuration 00:03:19.109 Message: 00:03:19.109 ================= 00:03:19.109 Applications Enabled 
00:03:19.109 =================
00:03:19.109 
00:03:19.109 apps:
00:03:19.109     dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf,
00:03:19.109     test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline,
00:03:19.109     test-pmd, test-regex, test-sad, test-security-perf,
00:03:19.109 
00:03:19.109 Message:
00:03:19.109 =================
00:03:19.109 Libraries Enabled
00:03:19.109 =================
00:03:19.109 
00:03:19.109 libs:
00:03:19.109     log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:03:19.109     net, meter, ethdev, pci, cmdline, metrics, hash, timer,
00:03:19.109     acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor,
00:03:19.109     dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag,
00:03:19.109     jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev,
00:03:19.109     mldev, rib, reorder, sched, security, stack, vhost, ipsec,
00:03:19.109     pdcp, fib, port, pdump, table, pipeline, graph, node,
00:03:19.109 
00:03:19.109 
00:03:19.109 Message:
00:03:19.109 ===============
00:03:19.109 Drivers Enabled
00:03:19.109 ===============
00:03:19.109 
00:03:19.109 common:
00:03:19.109 
00:03:19.109 bus:
00:03:19.109     pci, vdev,
00:03:19.109 mempool:
00:03:19.109     ring,
00:03:19.109 dma:
00:03:19.109 
00:03:19.109 net:
00:03:19.109     i40e,
00:03:19.109 raw:
00:03:19.109 
00:03:19.109 crypto:
00:03:19.109 
00:03:19.109 compress:
00:03:19.109 
00:03:19.109 regex:
00:03:19.109 
00:03:19.109 ml:
00:03:19.109 
00:03:19.109 vdpa:
00:03:19.109 
00:03:19.109 event:
00:03:19.109 
00:03:19.109 baseband:
00:03:19.109 
00:03:19.109 gpu:
00:03:19.109 
00:03:19.109 
00:03:19.109 Message:
00:03:19.109 =================
00:03:19.109 Content Skipped
00:03:19.109 =================
00:03:19.109 
00:03:19.109 apps:
00:03:19.109 
00:03:19.109 libs:
00:03:19.109 
00:03:19.109 drivers:
00:03:19.109     common/cpt: not in enabled drivers build config
00:03:19.109     common/dpaax: not in enabled drivers build config
00:03:19.109     common/iavf: not in enabled drivers build config
00:03:19.109     common/idpf: not in enabled drivers build config
00:03:19.109     common/mvep: not in enabled drivers build config
00:03:19.109     common/octeontx: not in enabled drivers build config
00:03:19.109     bus/auxiliary: not in enabled drivers build config
00:03:19.109     bus/cdx: not in enabled drivers build config
00:03:19.109     bus/dpaa: not in enabled drivers build config
00:03:19.109     bus/fslmc: not in enabled drivers build config
00:03:19.109     bus/ifpga: not in enabled drivers build config
00:03:19.109     bus/platform: not in enabled drivers build config
00:03:19.109     bus/vmbus: not in enabled drivers build config
00:03:19.109     common/cnxk: not in enabled drivers build config
00:03:19.109     common/mlx5: not in enabled drivers build config
00:03:19.109     common/nfp: not in enabled drivers build config
00:03:19.109     common/qat: not in enabled drivers build config
00:03:19.109     common/sfc_efx: not in enabled drivers build config
00:03:19.109     mempool/bucket: not in enabled drivers build config
00:03:19.109     mempool/cnxk: not in enabled drivers build config
00:03:19.109     mempool/dpaa: not in enabled drivers build config
00:03:19.109     mempool/dpaa2: not in enabled drivers build config
00:03:19.109     mempool/octeontx: not in enabled drivers build config
00:03:19.109     mempool/stack: not in enabled drivers build config
00:03:19.109     dma/cnxk: not in enabled drivers build config
00:03:19.109     dma/dpaa: not in enabled drivers build config
00:03:19.109     dma/dpaa2: not in enabled drivers build config
00:03:19.109     dma/hisilicon: not in enabled drivers build config
00:03:19.109     dma/idxd: not in enabled drivers build config
00:03:19.109     dma/ioat: not in enabled drivers build config
00:03:19.109     dma/skeleton: not in enabled drivers build config
00:03:19.109     net/af_packet: not in enabled drivers build config
00:03:19.109     net/af_xdp: not in enabled drivers build config
00:03:19.109     net/ark: not in enabled drivers build config
00:03:19.109     net/atlantic: not in enabled drivers build config
00:03:19.109     net/avp: not in enabled drivers build config
00:03:19.109     net/axgbe: not in enabled drivers build config
00:03:19.110     net/bnx2x: not in enabled drivers build config
00:03:19.110     net/bnxt: not in enabled drivers build config
00:03:19.110     net/bonding: not in enabled drivers build config
00:03:19.110     net/cnxk: not in enabled drivers build config
00:03:19.110     net/cpfl: not in enabled drivers build config
00:03:19.110     net/cxgbe: not in enabled drivers build config
00:03:19.110     net/dpaa: not in enabled drivers build config
00:03:19.110     net/dpaa2: not in enabled drivers build config
00:03:19.110     net/e1000: not in enabled drivers build config
00:03:19.110     net/ena: not in enabled drivers build config
00:03:19.110     net/enetc: not in enabled drivers build config
00:03:19.110     net/enetfec: not in enabled drivers build config
00:03:19.110     net/enic: not in enabled drivers build config
00:03:19.110     net/failsafe: not in enabled drivers build config
00:03:19.110     net/fm10k: not in enabled drivers build config
00:03:19.110     net/gve: not in enabled drivers build config
00:03:19.110     net/hinic: not in enabled drivers build config
00:03:19.110     net/hns3: not in enabled drivers build config
00:03:19.110     net/iavf: not in enabled drivers build config
00:03:19.110     net/ice: not in enabled drivers build config
00:03:19.110     net/idpf: not in enabled drivers build config
00:03:19.110     net/igc: not in enabled drivers build config
00:03:19.110     net/ionic: not in enabled drivers build config
00:03:19.110     net/ipn3ke: not in enabled drivers build config
00:03:19.110     net/ixgbe: not in enabled drivers build config
00:03:19.110     net/mana: not in enabled drivers build config
00:03:19.110     net/memif: not in enabled drivers build config
00:03:19.110     net/mlx4: not in enabled drivers build config
00:03:19.110     net/mlx5: not in enabled drivers build config
00:03:19.110     net/mvneta: not in enabled drivers build config
00:03:19.110     net/mvpp2: not in enabled drivers build config
00:03:19.110     net/netvsc: not in enabled drivers build config
00:03:19.110     net/nfb: not in enabled drivers build config
00:03:19.110     net/nfp: not in enabled drivers build config
00:03:19.110     net/ngbe: not in enabled drivers build config
00:03:19.110     net/null: not in enabled drivers build config
00:03:19.110     net/octeontx: not in enabled drivers build config
00:03:19.110     net/octeon_ep: not in enabled drivers build config
00:03:19.110     net/pcap: not in enabled drivers build config
00:03:19.110     net/pfe: not in enabled drivers build config
00:03:19.110     net/qede: not in enabled drivers build config
00:03:19.110     net/ring: not in enabled drivers build config
00:03:19.110     net/sfc: not in enabled drivers build config
00:03:19.110     net/softnic: not in enabled drivers build config
00:03:19.110     net/tap: not in enabled drivers build config
00:03:19.110     net/thunderx: not in enabled drivers build config
00:03:19.110     net/txgbe: not in enabled drivers build config
00:03:19.110     net/vdev_netvsc: not in enabled drivers build config
00:03:19.110     net/vhost: not in enabled drivers build config
00:03:19.110     net/virtio: not in enabled drivers build config
00:03:19.110     net/vmxnet3: not in enabled drivers build config
00:03:19.110     raw/cnxk_bphy: not in enabled drivers build config
00:03:19.110     raw/cnxk_gpio: not in enabled drivers build config
00:03:19.110     raw/dpaa2_cmdif: not in enabled drivers build config
00:03:19.110     raw/ifpga: not in enabled drivers build config
00:03:19.110     raw/ntb: not in enabled drivers build config
00:03:19.110     raw/skeleton: not in enabled drivers build config
00:03:19.110     crypto/armv8: not in enabled drivers build config
00:03:19.110     crypto/bcmfs: not in enabled drivers build config
00:03:19.110     crypto/caam_jr: not in enabled drivers build config
00:03:19.110     crypto/ccp: not in enabled drivers build config
00:03:19.110     crypto/cnxk: not in enabled drivers build config
00:03:19.110     crypto/dpaa_sec: not in enabled drivers build config
00:03:19.110     crypto/dpaa2_sec: not in enabled drivers build config
00:03:19.110     crypto/ipsec_mb: not in enabled drivers build config
00:03:19.110     crypto/mlx5: not in enabled drivers build config
00:03:19.110     crypto/mvsam: not in enabled drivers build config
00:03:19.110     crypto/nitrox: not in enabled drivers build config
00:03:19.110     crypto/null: not in enabled drivers build config
00:03:19.110     crypto/octeontx: not in enabled drivers build config
00:03:19.110     crypto/openssl: not in enabled drivers build config
00:03:19.110     crypto/scheduler: not in enabled drivers build config
00:03:19.110     crypto/uadk: not in enabled drivers build config
00:03:19.110     crypto/virtio: not in enabled drivers build config
00:03:19.110     compress/isal: not in enabled drivers build config
00:03:19.110     compress/mlx5: not in enabled drivers build config
00:03:19.110     compress/octeontx: not in enabled drivers build config
00:03:19.110     compress/zlib: not in enabled drivers build config
00:03:19.110     regex/mlx5: not in enabled drivers build config
00:03:19.110     regex/cn9k: not in enabled drivers build config
00:03:19.110     ml/cnxk: not in enabled drivers build config
00:03:19.110     vdpa/ifc: not in enabled drivers build config
00:03:19.110     vdpa/mlx5: not in enabled drivers build config
00:03:19.110     vdpa/nfp: not in enabled drivers build config
00:03:19.110     vdpa/sfc: not in enabled drivers build config
00:03:19.110     event/cnxk: not in enabled drivers build config
00:03:19.110     event/dlb2: not in enabled drivers build config
00:03:19.110     event/dpaa: not in enabled drivers build config
00:03:19.110     event/dpaa2: not in enabled drivers build config
00:03:19.110     event/dsw: not in enabled drivers build config
00:03:19.110     event/opdl: not in enabled drivers build config
00:03:19.110     event/skeleton: not in enabled drivers build config
00:03:19.110     event/sw: not in enabled drivers build config
00:03:19.110     event/octeontx: not in enabled drivers build config
00:03:19.110     baseband/acc: not in enabled drivers build config
00:03:19.110     baseband/fpga_5gnr_fec: not in enabled drivers build config
00:03:19.110     baseband/fpga_lte_fec: not in enabled drivers build config
00:03:19.110     baseband/la12xx: not in enabled drivers build config
00:03:19.110     baseband/null: not in enabled drivers build config
00:03:19.110     baseband/turbo_sw: not in enabled drivers build config
00:03:19.110     gpu/cuda: not in enabled drivers build config
00:03:19.110 
00:03:19.110 
00:03:19.110 Build targets in project: 217
00:03:19.110 
00:03:19.110 DPDK 23.11.0
00:03:19.110 
00:03:19.110 User defined options
00:03:19.110     libdir : lib
00:03:19.110     prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:03:19.110     c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:03:19.110     c_link_args :
00:03:19.110     enable_docs : false
00:03:19.110     enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:03:19.110     enable_kmods : false
00:03:19.110     machine : native
00:03:19.110     tests : false
00:03:19.110 
00:03:19.110 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:19.110 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
00:03:19.110 12:24:45 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j96
00:03:19.375 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp'
00:03:19.375 [1/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:19.375 [2/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:19.375 [3/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:19.375 [4/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:19.375 [5/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:19.375 [6/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:19.375 [7/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:19.375 [8/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:19.375 [9/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:19.375 [10/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:19.375 [11/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:19.642 [12/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:19.642 [13/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:19.642 [14/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:19.642 [15/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:19.642 [16/707] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:19.642 [17/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:19.642 [18/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:19.642 [19/707] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:19.642 [20/707] Linking static target lib/librte_kvargs.a 00:03:19.642 [21/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:19.642 [22/707] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:19.642 [23/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:19.642 [24/707] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:19.642 [25/707] Linking static target lib/librte_pci.a 00:03:19.642 [26/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:19.642 [27/707] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:19.642 [28/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:19.642 [29/707] Linking static target lib/librte_log.a 00:03:19.642 [30/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:19.642 [31/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:19.642 [32/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:19.642 [33/707]
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:19.916 [34/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:19.916 [35/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:19.916 [36/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:19.916 [37/707] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.916 [38/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:20.185 [39/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:20.185 [40/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:20.185 [41/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:20.185 [42/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:20.185 [43/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:20.185 [44/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:20.185 [45/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:20.185 [46/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:20.185 [47/707] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.185 [48/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:20.185 [49/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:20.185 [50/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:20.185 [51/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:20.185 [52/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:20.185 [53/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:20.185 [54/707] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:20.185 [55/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:20.185 [56/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:20.185 [57/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:20.185 [58/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:20.185 [59/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:20.185 [60/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:20.185 [61/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:20.185 [62/707] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:20.185 [63/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:20.185 [64/707] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:20.185 [65/707] Linking static target lib/librte_ring.a 00:03:20.185 [66/707] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:20.185 [67/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:20.185 [68/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:20.185 [69/707] Linking static target lib/librte_meter.a 00:03:20.185 [70/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:20.185 [71/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:20.185 [72/707] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:20.185 
[73/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:20.185 [74/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:20.185 [75/707] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:20.185 [76/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:20.185 [77/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:20.185 [78/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:20.185 [79/707] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:20.186 [80/707] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:20.186 [81/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:20.186 [82/707] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:20.186 [83/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:20.457 [84/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:20.457 [85/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:20.457 [86/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:20.457 [87/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:20.457 [88/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:20.457 [89/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:20.457 [90/707] Linking static target lib/librte_cmdline.a 00:03:20.457 [91/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:20.457 [92/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:20.457 [93/707] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:20.457 [94/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:20.457 [95/707] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:20.457 [96/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:20.457 [97/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:20.457 [98/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:20.457 [99/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:03:20.457 [100/707] Linking static target lib/librte_net.a 00:03:20.457 [101/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:20.457 [102/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:20.457 [103/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:20.457 [104/707] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.457 [105/707] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:20.457 [106/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:20.457 [107/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:03:20.457 [108/707] Linking static target lib/librte_metrics.a 00:03:20.457 [109/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:20.457 [110/707] Linking target lib/librte_log.so.24.0 00:03:20.457 [111/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:20.457 [112/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:20.722 [113/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:20.722 
[114/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:20.722 [115/707] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:03:20.722 [116/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:20.722 [117/707] Linking static target lib/librte_cfgfile.a 00:03:20.722 [118/707] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:03:20.722 [119/707] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.722 [120/707] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.722 [121/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:03:20.722 [122/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:20.722 [123/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:20.722 [124/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:20.722 [125/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:03:20.722 [126/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:20.722 [127/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:20.722 [128/707] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.722 [129/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:03:20.722 [130/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:20.722 [131/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:03:20.722 [132/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:20.722 [133/707] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:03:20.722 [134/707] Linking static target lib/librte_mempool.a 00:03:20.996 [135/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:20.996 [136/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:20.996 [137/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:20.996 [138/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:03:20.996 [139/707] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:20.996 [140/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:20.996 [141/707] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:20.996 [142/707] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:20.996 [143/707] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:03:20.996 [144/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:03:20.996 [145/707] Linking static target lib/librte_bitratestats.a 00:03:20.996 [146/707] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:03:20.996 [147/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:03:20.996 [148/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:03:20.996 [149/707] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:03:20.996 [150/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:03:20.996 [151/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:20.996 [152/707] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:20.996 [153/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:20.996 [154/707] Compiling C object 
lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:20.996 [155/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:03:20.996 [156/707] Linking static target lib/librte_timer.a 00:03:20.996 [157/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:20.996 [158/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:03:21.264 [159/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:03:21.264 [160/707] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.264 [161/707] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:03:21.264 [162/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:21.264 [163/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:21.264 [164/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:03:21.264 [165/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:03:21.264 [166/707] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:03:21.264 [167/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:21.264 [168/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:03:21.264 [169/707] Linking static target lib/librte_compressdev.a 00:03:21.264 [170/707] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:21.264 [171/707] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.264 [172/707] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:03:21.264 [173/707] Linking static target lib/librte_rcu.a 00:03:21.264 [174/707] Linking static target lib/librte_jobstats.a 00:03:21.264 [175/707] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:03:21.264 [176/707] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:03:21.264 [177/707] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.264 [178/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:21.264 [179/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:03:21.264 [180/707] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:21.264 [181/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:03:21.264 [182/707] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:03:21.534 [183/707] Linking static target lib/librte_dispatcher.a 00:03:21.534 [184/707] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:03:21.534 [185/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:03:21.534 [186/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:03:21.534 [187/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:21.534 [188/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:03:21.534 [189/707] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:03:21.534 [190/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:03:21.534 [191/707] Linking static target lib/librte_dmadev.a 00:03:21.534 [192/707] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:21.535 [193/707] Linking static target lib/librte_bbdev.a 00:03:21.535 [194/707] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:03:21.535 [195/707] Linking static target lib/librte_gso.a 00:03:21.535 [196/707] Compiling C 
object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:03:21.535 [197/707] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:03:21.535 [198/707] Linking static target lib/member/libsketch_avx512_tmp.a 00:03:21.535 [199/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:21.535 [200/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:03:21.535 [201/707] Linking static target lib/librte_mbuf.a 00:03:21.535 [202/707] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:21.535 [203/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:03:21.535 [204/707] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:03:21.535 [205/707] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:03:21.535 [206/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:03:21.535 [207/707] Linking static target lib/librte_gpudev.a 00:03:21.535 [208/707] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:21.535 [209/707] Linking static target lib/librte_gro.a 00:03:21.535 [210/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:03:21.535 [211/707] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:21.535 [212/707] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:03:21.535 [213/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:03:21.535 [214/707] Linking static target lib/librte_latencystats.a 00:03:21.535 [215/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:03:21.535 [216/707] Linking static target lib/librte_distributor.a 00:03:21.805 [217/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:21.805 [218/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:03:21.805 [219/707] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.805 [220/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:21.805 [221/707] Linking static target lib/librte_telemetry.a 00:03:21.805 [222/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:03:21.805 [223/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:03:21.805 [224/707] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:03:21.805 [225/707] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.805 [226/707] Linking static target lib/librte_eal.a 00:03:21.805 [227/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:03:21.805 [228/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:03:21.805 [229/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:03:21.805 [230/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:03:21.805 [231/707] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.805 [232/707] Linking static target lib/librte_stack.a 00:03:21.805 [233/707] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.805 [234/707] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:03:21.805 [235/707] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:21.805 [236/707] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 
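For reference, the "User defined options" block earlier in this output corresponds to a meson setup invocation roughly like the sketch below. The exact command is issued by SPDK's autobuild_common.sh, so flag spelling and ordering here are an assumption rather than a transcript of the script:

  $ meson setup build-tmp \
      --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build \
      --libdir=lib \
      -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
      -Denable_docs=false \
      -Denable_kmods=false \
      -Dtests=false \
      -Dmachine=native \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base

The deprecation WARNING printed above indicates the script actually ran plain `meson [options]` rather than `meson setup [options]`; the resulting configuration is the same.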
00:03:21.805 [237/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:03:21.805 [238/707] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:03:21.805 [239/707] Linking static target lib/librte_regexdev.a 00:03:21.805 [240/707] Linking static target lib/librte_ip_frag.a 00:03:21.805 [241/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:03:21.805 [242/707] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:21.805 [243/707] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:03:21.805 [244/707] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.805 [245/707] Linking static target lib/librte_rawdev.a 00:03:22.069 [246/707] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:22.069 [247/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:03:22.069 [248/707] Linking static target lib/librte_mldev.a 00:03:22.069 [249/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:03:22.069 [250/707] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:22.069 [251/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:03:22.069 [252/707] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.069 [253/707] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:22.069 [254/707] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:22.069 [255/707] Linking static target lib/librte_reorder.a 00:03:22.069 [256/707] Linking static target lib/librte_power.a 00:03:22.069 [257/707] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:22.069 [258/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:22.069 [259/707] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.069 [260/707] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.069 [261/707] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.069 [262/707] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.069 [263/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:03:22.069 [264/707] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.070 [265/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:03:22.070 [266/707] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.340 [267/707] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:03:22.340 [268/707] Linking static target lib/librte_bpf.a 00:03:22.340 [269/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:22.340 [270/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:03:22.340 [271/707] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:03:22.340 [272/707] Linking static target lib/librte_pcapng.a 00:03:22.340 [273/707] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:22.340 [274/707] Linking static target lib/librte_security.a 00:03:22.340 [275/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:03:22.340 [276/707] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:03:22.340 
[277/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:03:22.340 [278/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:22.340 [279/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:22.340 [280/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:03:22.340 [281/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:22.340 [282/707] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:22.340 [283/707] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.340 [284/707] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:22.340 [285/707] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:22.340 [286/707] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:03:22.340 [287/707] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:22.340 [288/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:03:22.340 [289/707] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.340 [290/707] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:22.340 [291/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:03:22.340 [292/707] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.604 [293/707] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:03:22.604 [294/707] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:03:22.604 [295/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:03:22.604 [296/707] Linking static target lib/librte_efd.a 00:03:22.604 [297/707] Linking static target lib/librte_rib.a 00:03:22.604 [298/707] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.604 [299/707] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:22.604 [300/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:03:22.604 [301/707] Linking static target lib/librte_lpm.a 00:03:22.604 [302/707] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:22.604 [303/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:22.604 [304/707] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.604 [305/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:22.604 [306/707] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.604 [307/707] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.604 [308/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:22.604 [309/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:22.604 [310/707] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.871 [311/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:03:22.871 [312/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:22.871 [313/707] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:22.871 [314/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:22.871 [315/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:22.871 [316/707] Compiling C object 
lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:03:22.871 [317/707] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:22.871 [318/707] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:22.871 [319/707] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:22.871 [320/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:22.871 [321/707] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.871 [322/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:22.871 [323/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:22.871 [324/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:22.871 [325/707] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:22.871 [326/707] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:22.871 [327/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:23.135 [328/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:03:23.135 [329/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:23.135 [330/707] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:23.135 [331/707] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.135 [332/707] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:23.135 [333/707] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:23.135 [334/707] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:23.135 [335/707] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:23.135 [336/707] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.135 [337/707] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:03:23.135 [338/707] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:23.135 [339/707] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.135 [340/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:23.135 [341/707] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.135 [342/707] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:03:23.135 [343/707] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:23.135 [344/707] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:23.135 [345/707] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:23.135 [346/707] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:23.135 [347/707] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.135 [348/707] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.135 [349/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:23.135 [350/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:23.135 [351/707] Linking static target lib/librte_fib.a 00:03:23.407 [352/707] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:23.407 [353/707] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:23.407 [354/707] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:23.407 [355/707] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:03:23.407 
[356/707] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:23.407 [357/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:23.407 [358/707] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:23.407 [359/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:23.407 [360/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:23.407 [361/707] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:03:23.407 [362/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:23.679 [363/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:23.679 [364/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:23.679 [365/707] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:03:23.679 [366/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:23.679 [367/707] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:23.679 [368/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:03:23.679 [369/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:23.679 [370/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:23.679 [371/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:23.679 [372/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:23.679 [373/707] Linking static target lib/librte_graph.a 00:03:23.679 [374/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:23.679 [375/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:23.679 [376/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:23.679 [377/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:23.679 [378/707] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:23.679 [379/707] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:03:23.953 [380/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:23.954 [381/707] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:03:23.954 [382/707] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:03:23.954 [383/707] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.954 [384/707] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:23.954 [385/707] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:03:23.954 [386/707] Linking static target lib/librte_pdump.a 00:03:23.954 [387/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:23.954 [388/707] Linking target lib/librte_kvargs.so.24.0 00:03:23.954 [389/707] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:03:23.954 [390/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:23.954 [391/707] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:03:23.954 [392/707] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:03:23.954 [393/707] Linking target lib/librte_telemetry.so.24.0 00:03:23.954 [394/707] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:03:23.954 [395/707] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:03:23.954 [396/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:23.954 [397/707] Compiling C object 
lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:23.954 [398/707] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:03:23.954 [399/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:23.954 [400/707] Linking static target lib/librte_table.a 00:03:23.954 [401/707] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:23.954 [402/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:23.954 [403/707] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:03:23.954 [404/707] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:03:24.228 [405/707] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:24.228 [406/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:24.228 [407/707] Linking static target lib/librte_sched.a 00:03:24.228 [408/707] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:03:24.228 [409/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:24.228 [410/707] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:03:24.228 [411/707] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:24.228 [412/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:24.228 [413/707] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:03:24.228 [414/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:24.228 [415/707] Linking static target lib/librte_member.a 00:03:24.228 [416/707] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:03:24.228 [417/707] Linking static target lib/librte_cryptodev.a 00:03:24.228 [418/707] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:24.228 [419/707] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:24.228 [420/707] Linking static target drivers/librte_bus_vdev.a 00:03:24.228 [421/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:24.228 [422/707] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:03:24.228 [423/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:24.228 [424/707] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.228 [425/707] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:03:24.228 [426/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:24.228 [427/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:24.228 [428/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:24.228 [429/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:03:24.500 [430/707] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:03:24.500 [431/707] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.500 [432/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:24.500 [433/707] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:24.500 [434/707] Linking static target lib/librte_hash.a 00:03:24.500 [435/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:24.500 [436/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 
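With only the pci/vdev buses, the ring mempool, and the i40e net PMD enabled, the driver portion of this build stays small. Each driver is produced both as a static archive and as a shared object built from a generated registration stub, which is why the log pairs "Generating drivers/rte_bus_pci.pmd.c with a custom command" with both librte_bus_pci.a and librte_bus_pci.so.24.0 steps. One way to list the matching targets from the same build tree (build directory path taken from the log) would be:

  $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp \
      -t targets all | grep -E 'librte_(bus_pci|bus_vdev|mempool_ring|net_i40e)'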
00:03:24.500 [437/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:24.500 [438/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:24.500 [439/707] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:24.500 [440/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:24.500 [441/707] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:24.500 [442/707] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:24.500 [443/707] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:03:24.500 [444/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:24.500 [445/707] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:24.500 [446/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:24.500 [447/707] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:24.500 [448/707] Linking static target drivers/librte_bus_pci.a 00:03:24.500 [449/707] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:24.772 [450/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:24.772 [451/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:24.772 [452/707] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:24.772 [453/707] Linking static target lib/librte_ipsec.a 00:03:24.772 [454/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:24.772 [455/707] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.772 [456/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:24.772 [457/707] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:24.772 [458/707] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.772 [459/707] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.772 [460/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:03:24.772 [461/707] Linking static target lib/librte_eventdev.a 00:03:24.772 [462/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:24.772 [463/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:24.772 [464/707] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:24.772 [465/707] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:24.772 [466/707] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.772 [467/707] Linking static target lib/librte_port.a 00:03:24.772 [468/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:24.772 [469/707] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:25.037 [470/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:25.037 [471/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:25.037 [472/707] Linking static target drivers/librte_mempool_ring.a 00:03:25.037 [473/707] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:25.037 [474/707] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:25.037 [475/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:25.037 [476/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:25.037 [477/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:25.037 [478/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:25.037 [479/707] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:25.037 [480/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:03:25.037 [481/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:25.037 [482/707] Linking static target lib/librte_pdcp.a 00:03:25.037 [483/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:03:25.037 [484/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:25.037 [485/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:25.037 [486/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:25.037 [487/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:25.037 [488/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:25.037 [489/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:25.037 [490/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:25.037 [491/707] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:25.037 [492/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:03:25.037 [493/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:25.037 [494/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:25.296 [495/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:25.296 [496/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:25.296 [497/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:25.296 [498/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:25.296 [499/707] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.296 [500/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:25.296 [501/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:25.296 [502/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:25.296 [503/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:03:25.296 [504/707] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:25.296 [505/707] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.296 [506/707] Linking static target lib/librte_node.a 00:03:25.296 [507/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:25.296 [508/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:25.296 [509/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:25.296 [510/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:25.296 [511/707] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:25.296 [512/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:25.296 [513/707] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.296 [514/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:25.296 [515/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:25.296 [516/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:25.296 [517/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:25.296 [518/707] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.296 [519/707] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.555 [520/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:25.555 [521/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:25.555 [522/707] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:25.555 [523/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:25.555 [524/707] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:25.555 [525/707] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:25.555 [526/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:25.555 [527/707] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:25.555 [528/707] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.555 [529/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:25.555 [530/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:25.555 [531/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:25.555 [532/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:25.555 [533/707] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:25.815 [534/707] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:25.815 [535/707] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.815 [536/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:25.815 [537/707] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:25.815 [538/707] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:25.815 [539/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:25.815 [540/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:25.815 [541/707] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:25.815 [542/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:25.815 [543/707] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:25.815 [544/707] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:25.815 [545/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:25.815 [546/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:25.815 [547/707] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:25.815 [548/707] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 
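Alongside the libraries, ninja is compiling the bundled applications (dpdk-testpmd, dpdk-test-eventdev, and so on) that are linked near the end of the build. Purely as an illustration of how the resulting testpmd binary is typically launched, once built (the core list, memory-channel count, and PCI address below are hypothetical, not values from this run):

  $ sudo ./build-tmp/app/dpdk-testpmd -l 0-3 -n 4 -a 0000:3b:00.0 -- -i --rxq=2 --txq=2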
00:03:25.815 [549/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:25.815 [550/707] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:25.815 [551/707] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:25.815 [552/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:25.815 [553/707] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:03:25.815 [554/707] Linking static target lib/acl/libavx2_tmp.a 00:03:26.073 [555/707] Linking static target lib/librte_acl.a 00:03:26.073 [556/707] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:26.073 [557/707] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:26.073 [558/707] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.073 [559/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:26.073 [560/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:26.073 [561/707] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:26.073 [562/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:26.073 [563/707] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:26.074 [564/707] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:03:26.074 [565/707] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:03:26.332 [566/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:26.332 [567/707] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:26.332 [568/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:26.332 [569/707] Linking static target lib/librte_ethdev.a 00:03:26.332 [570/707] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.332 [571/707] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:26.620 [572/707] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:26.620 [573/707] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:26.879 [574/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:26.880 [575/707] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:27.138 [576/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:27.397 [577/707] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:27.397 [578/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:27.656 [579/707] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:27.656 [580/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:27.916 [581/707] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.484 [582/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:28.484 [583/707] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:28.484 [584/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:28.743 [585/707] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:28.743 [586/707] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:28.743 [587/707] Compiling C object 
drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:28.743 [588/707] Linking static target drivers/librte_net_i40e.a 00:03:29.002 [589/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:29.574 [590/707] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.832 [591/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:30.401 [592/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:31.779 [593/707] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.038 [594/707] Linking target lib/librte_eal.so.24.0 00:03:32.038 [595/707] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:03:32.038 [596/707] Linking target lib/librte_ring.so.24.0 00:03:32.038 [597/707] Linking target lib/librte_meter.so.24.0 00:03:32.038 [598/707] Linking target lib/librte_pci.so.24.0 00:03:32.038 [599/707] Linking target lib/librte_timer.so.24.0 00:03:32.038 [600/707] Linking target lib/librte_cfgfile.so.24.0 00:03:32.038 [601/707] Linking target lib/librte_jobstats.so.24.0 00:03:32.038 [602/707] Linking target lib/librte_dmadev.so.24.0 00:03:32.038 [603/707] Linking target drivers/librte_bus_vdev.so.24.0 00:03:32.038 [604/707] Linking target lib/librte_stack.so.24.0 00:03:32.038 [605/707] Linking target lib/librte_rawdev.so.24.0 00:03:32.038 [606/707] Linking target lib/librte_acl.so.24.0 00:03:32.297 [607/707] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:03:32.297 [608/707] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:03:32.297 [609/707] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:03:32.297 [610/707] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:03:32.297 [611/707] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:03:32.297 [612/707] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:03:32.297 [613/707] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:03:32.297 [614/707] Linking target lib/librte_rcu.so.24.0 00:03:32.297 [615/707] Linking target lib/librte_mempool.so.24.0 00:03:32.297 [616/707] Linking target drivers/librte_bus_pci.so.24.0 00:03:32.297 [617/707] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:03:32.297 [618/707] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:03:32.556 [619/707] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:03:32.556 [620/707] Linking target lib/librte_rib.so.24.0 00:03:32.556 [621/707] Linking target drivers/librte_mempool_ring.so.24.0 00:03:32.556 [622/707] Linking target lib/librte_mbuf.so.24.0 00:03:32.556 [623/707] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:03:32.556 [624/707] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:03:32.556 [625/707] Linking target lib/librte_fib.so.24.0 00:03:32.556 [626/707] Linking target lib/librte_bbdev.so.24.0 00:03:32.556 [627/707] Linking target lib/librte_compressdev.so.24.0 00:03:32.556 [628/707] Linking target lib/librte_net.so.24.0 00:03:32.556 [629/707] Linking target lib/librte_regexdev.so.24.0 00:03:32.556 [630/707] Linking target lib/librte_reorder.so.24.0 00:03:32.556 [631/707] Linking target 
lib/librte_mldev.so.24.0 00:03:32.556 [632/707] Linking target lib/librte_sched.so.24.0 00:03:32.556 [633/707] Linking target lib/librte_gpudev.so.24.0 00:03:32.556 [634/707] Linking target lib/librte_cryptodev.so.24.0 00:03:32.556 [635/707] Linking target lib/librte_distributor.so.24.0 00:03:32.814 [636/707] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:03:32.814 [637/707] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:03:32.814 [638/707] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:03:32.814 [639/707] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:03:32.814 [640/707] Linking target lib/librte_cmdline.so.24.0 00:03:32.814 [641/707] Linking target lib/librte_hash.so.24.0 00:03:32.814 [642/707] Linking target lib/librte_security.so.24.0 00:03:33.073 [643/707] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:03:33.073 [644/707] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:03:33.073 [645/707] Linking target lib/librte_efd.so.24.0 00:03:33.073 [646/707] Linking target lib/librte_lpm.so.24.0 00:03:33.073 [647/707] Linking target lib/librte_member.so.24.0 00:03:33.073 [648/707] Linking target lib/librte_ipsec.so.24.0 00:03:33.073 [649/707] Linking target lib/librte_pdcp.so.24.0 00:03:33.073 [650/707] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:03:33.073 [651/707] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:03:33.642 [652/707] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.642 [653/707] Linking target lib/librte_ethdev.so.24.0 00:03:33.901 [654/707] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:03:33.901 [655/707] Linking target lib/librte_metrics.so.24.0 00:03:33.901 [656/707] Linking target lib/librte_ip_frag.so.24.0 00:03:33.901 [657/707] Linking target lib/librte_pcapng.so.24.0 00:03:33.901 [658/707] Linking target lib/librte_gro.so.24.0 00:03:33.901 [659/707] Linking target lib/librte_gso.so.24.0 00:03:33.901 [660/707] Linking target lib/librte_power.so.24.0 00:03:33.902 [661/707] Linking target lib/librte_bpf.so.24.0 00:03:33.902 [662/707] Linking target lib/librte_eventdev.so.24.0 00:03:33.902 [663/707] Linking target drivers/librte_net_i40e.so.24.0 00:03:34.160 [664/707] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:03:34.160 [665/707] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:03:34.160 [666/707] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:03:34.160 [667/707] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:03:34.160 [668/707] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:03:34.160 [669/707] Linking target lib/librte_bitratestats.so.24.0 00:03:34.160 [670/707] Linking target lib/librte_graph.so.24.0 00:03:34.160 [671/707] Linking target lib/librte_latencystats.so.24.0 00:03:34.160 [672/707] Linking target lib/librte_pdump.so.24.0 00:03:34.160 [673/707] Linking target lib/librte_dispatcher.so.24.0 00:03:34.160 [674/707] Linking target lib/librte_port.so.24.0 00:03:34.160 [675/707] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:03:34.160 [676/707] Generating symbol file 
lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:03:34.160 [677/707] Linking target lib/librte_node.so.24.0 00:03:34.419 [678/707] Linking target lib/librte_table.so.24.0 00:03:34.419 [679/707] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:03:36.327 [680/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:36.327 [681/707] Linking static target lib/librte_pipeline.a 00:03:37.265 [682/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:37.265 [683/707] Linking static target lib/librte_vhost.a 00:03:37.524 [684/707] Linking target app/dpdk-test-sad 00:03:37.524 [685/707] Linking target app/dpdk-pdump 00:03:37.524 [686/707] Linking target app/dpdk-test-flow-perf 00:03:37.524 [687/707] Linking target app/dpdk-test-acl 00:03:37.524 [688/707] Linking target app/dpdk-test-cmdline 00:03:37.524 [689/707] Linking target app/dpdk-dumpcap 00:03:37.524 [690/707] Linking target app/dpdk-test-fib 00:03:37.524 [691/707] Linking target app/dpdk-proc-info 00:03:37.524 [692/707] Linking target app/dpdk-graph 00:03:37.524 [693/707] Linking target app/dpdk-test-dma-perf 00:03:37.783 [694/707] Linking target app/dpdk-test-pipeline 00:03:37.783 [695/707] Linking target app/dpdk-test-compress-perf 00:03:37.784 [696/707] Linking target app/dpdk-test-gpudev 00:03:37.784 [697/707] Linking target app/dpdk-test-eventdev 00:03:37.784 [698/707] Linking target app/dpdk-test-mldev 00:03:37.784 [699/707] Linking target app/dpdk-test-regex 00:03:37.784 [700/707] Linking target app/dpdk-test-security-perf 00:03:37.784 [701/707] Linking target app/dpdk-test-bbdev 00:03:37.784 [702/707] Linking target app/dpdk-test-crypto-perf 00:03:37.784 [703/707] Linking target app/dpdk-testpmd 00:03:39.163 [704/707] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.163 [705/707] Linking target lib/librte_vhost.so.24.0 00:03:41.072 [706/707] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.072 [707/707] Linking target lib/librte_pipeline.so.24.0 00:03:41.332 12:25:07 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:03:41.332 12:25:07 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:41.332 12:25:07 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j96 install 00:03:41.332 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:03:41.332 [0/1] Installing files. 
00:03:41.596 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:03:41.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:41.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:41.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:03:41.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:41.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:41.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:41.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:41.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:41.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:41.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:41.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:41.597 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:41.597 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:41.597 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.598 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:41.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:41.599 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.599 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 
00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:41.600 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:41.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:41.601 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:41.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:41.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:41.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:41.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:41.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:41.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:41.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:41.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:41.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:41.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:41.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:41.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:41.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:41.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:41.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:41.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:41.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:41.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:41.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:41.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:41.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:41.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:41.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:41.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:41.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:41.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:41.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:41.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:41.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:41.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:41.602 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:41.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:41.602 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing 
lib/librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing 
lib/librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.602 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.603 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.603 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.603 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.603 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.603 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.603 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.603 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.603 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.603 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.603 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.603 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.603 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.603 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.603 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.603 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.603 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.603 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.603 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.603 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.603 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.603 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.603 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.603 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.867 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.867 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.867 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.867 Installing lib/librte_vhost.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.867 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.867 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.867 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.867 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.867 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.867 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.867 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.867 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.867 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.867 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.867 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.867 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.867 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.867 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.867 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.867 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.867 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.867 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.867 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.867 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:03:41.867 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.867 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:03:41.867 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.867 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:03:41.867 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.867 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:03:41.867 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:41.867 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:41.867 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:41.867 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:41.867 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:41.867 Installing 
app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:41.867 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:41.867 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:41.867 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:41.867 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:41.867 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:41.867 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:41.867 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:41.867 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:41.867 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:41.867 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:41.867 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:41.867 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:41.867 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:41.867 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:41.867 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.867 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.867 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.867 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.867 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:41.867 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:41.867 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:41.867 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:41.867 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:41.867 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:41.867 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:41.867 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:41.867 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:41.867 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:41.867 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:41.867 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:41.867 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.867 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.867 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.867 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.867 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.867 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.867 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.867 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.867 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.868 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.869 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.870 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.871 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.871 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.871 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.871 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.871 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.871 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.871 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.871 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.871 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.871 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.871 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.871 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.871 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.871 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.871 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.871 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.871 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.871 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.871 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.871 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.871 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.871 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.871 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.871 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.871 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.871 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.871 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.871 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.871 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.871 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.871 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.871 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.871 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.871 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.871 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.871 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.871 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.871 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.871 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.871 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.871 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.871 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.871 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.871 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.871 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.871 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:41.871 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:41.871 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:41.871 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:41.871 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:41.871 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:41.871 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.871 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:41.871 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:41.871 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:03:41.871 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:03:41.871 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:03:41.871 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:03:41.871 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:03:41.871 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:03:41.871 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:03:41.871 Installing symlink pointing to librte_eal.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:03:41.871 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:03:41.871 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:03:41.871 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:03:41.871 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:03:41.872 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:03:41.872 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:03:41.872 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:03:41.872 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:03:41.872 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:03:41.872 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:03:41.872 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:03:41.872 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:03:41.872 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:03:41.872 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:03:41.872 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:03:41.872 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:03:41.872 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:03:41.872 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:03:41.872 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:03:41.872 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:03:41.872 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:03:41.872 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:03:41.872 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:03:41.872 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:03:41.872 
Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:03:41.872 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:03:41.872 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:03:41.872 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:03:41.872 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:03:41.872 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:03:41.872 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:03:41.872 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:03:41.872 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:03:41.872 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:03:41.872 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:03:41.872 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:03:41.872 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:03:41.872 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:03:41.872 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:03:41.872 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:03:41.872 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:03:41.872 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:03:41.872 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:03:41.872 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:03:41.872 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:03:41.872 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:03:41.872 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:03:41.872 Installing symlink pointing to librte_dispatcher.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:03:41.872 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:03:41.872 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:03:41.872 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:03:41.872 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:03:41.872 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:03:41.872 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:03:41.872 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:03:41.872 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:03:41.872 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:03:41.872 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:03:41.872 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:03:41.872 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:03:41.872 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:03:41.872 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:03:41.872 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:03:41.872 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:03:41.872 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:03:41.872 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:03:41.872 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:03:41.872 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:03:41.872 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:03:41.872 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:03:41.872 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:03:41.872 Installing symlink pointing to librte_regexdev.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:03:41.872 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:41.872 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:41.872 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:41.872 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:41.872 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:41.872 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:41.872 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:41.872 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:41.872 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:41.872 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:41.872 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:41.872 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:41.872 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:03:41.872 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:03:41.872 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:03:41.872 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:03:41.872 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:03:41.872 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:03:41.872 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:03:41.872 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:03:41.872 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:03:41.872 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:03:41.872 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:03:41.872 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:03:41.872 Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:03:41.872 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:03:41.872 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:03:41.872 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:03:41.872 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:03:41.872 Installing 
symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:03:41.872 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:03:41.873 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:03:41.873 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:03:41.873 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:03:41.873 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:03:41.873 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:03:41.873 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:03:41.873 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:03:41.873 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:03:41.873 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:03:41.873 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:03:41.873 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:03:41.873 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:03:41.873 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:03:41.873 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:41.873 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:41.873 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:41.873 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:41.873 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:41.873 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:41.873 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:03:41.873 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:41.873 Running custom install script '/bin/sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:03:41.873 12:25:07 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:03:41.873 12:25:07 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:41.873 00:03:41.873 real 0m30.258s 00:03:41.873 user 9m21.881s 00:03:41.873 sys 2m7.890s 00:03:41.873 12:25:07 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:41.873 12:25:07 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:41.873 ************************************ 00:03:41.873 END TEST build_native_dpdk 00:03:41.873 ************************************ 00:03:42.133 12:25:07 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:42.133 12:25:07 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:42.133 12:25:07 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:42.133 12:25:07 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:42.133 12:25:07 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:42.133 12:25:07 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:42.133 12:25:07 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:42.133 12:25:07 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:03:42.133 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:03:42.392 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:42.392 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:42.392 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:03:42.652 Using 'verbs' RDMA provider 00:03:55.806 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:04:08.024 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:04:08.024 Creating mk/config.mk...done. 00:04:08.024 Creating mk/cc.flags.mk...done. 00:04:08.024 Type 'make' to build. 00:04:08.024 12:25:34 -- spdk/autobuild.sh@70 -- $ run_test make make -j96 00:04:08.024 12:25:34 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:04:08.024 12:25:34 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:04:08.024 12:25:34 -- common/autotest_common.sh@10 -- $ set +x 00:04:08.283 ************************************ 00:04:08.283 START TEST make 00:04:08.283 ************************************ 00:04:08.283 12:25:34 make -- common/autotest_common.sh@1125 -- $ make -j96 00:04:08.543 make[1]: Nothing to be done for 'all'. 
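The configure invocation above wires the freshly staged DPDK into SPDK through pkg-config: --with-dpdk points at dpdk/build, and the "Using .../dpdk/build/lib/pkgconfig for additional libs" line shows the libdpdk.pc files installed earlier being picked up. The same staged tree can be consumed by any out-of-tree program; a minimal sketch, assuming the workspace paths from this log (my_app.c is a hypothetical example source, not part of this build):

    # Point pkg-config at the staged DPDK from this workspace
    export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
    pkg-config --modversion libdpdk    # prints the version recorded in libdpdk.pc
    # Compile and link a hypothetical program against the staged libraries
    cc -O2 my_app.c $(pkg-config --cflags --libs libdpdk) -o my_app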
00:04:09.941 The Meson build system 00:04:09.941 Version: 1.5.0 00:04:09.941 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:04:09.941 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:04:09.941 Build type: native build 00:04:09.941 Project name: libvfio-user 00:04:09.941 Project version: 0.0.1 00:04:09.941 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:09.941 C linker for the host machine: gcc ld.bfd 2.40-14 00:04:09.941 Host machine cpu family: x86_64 00:04:09.941 Host machine cpu: x86_64 00:04:09.941 Run-time dependency threads found: YES 00:04:09.941 Library dl found: YES 00:04:09.941 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:09.941 Run-time dependency json-c found: YES 0.17 00:04:09.941 Run-time dependency cmocka found: YES 1.1.7 00:04:09.941 Program pytest-3 found: NO 00:04:09.941 Program flake8 found: NO 00:04:09.941 Program misspell-fixer found: NO 00:04:09.941 Program restructuredtext-lint found: NO 00:04:09.941 Program valgrind found: YES (/usr/bin/valgrind) 00:04:09.941 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:09.941 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:09.941 Compiler for C supports arguments -Wwrite-strings: YES 00:04:09.941 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:04:09.941 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:04:09.941 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:04:09.941 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
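The summary above captures how libvfio-user was configured: Meson 1.5.0 with gcc 13.3.1, json-c and cmocka resolved as run-time dependencies, and a debug build of a shared library destined for /usr/local/lib. The log records only the resulting option summary, so the exact invocation is an assumption; a setup command of roughly this shape would reproduce it:

    # Hedged reconstruction of the configure step (source and build dirs taken from the summary above)
    meson setup /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user \
        -Dbuildtype=debug -Ddefault_library=shared -Dlibdir=/usr/local/lib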
00:04:09.941 Build targets in project: 8 00:04:09.941 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:04:09.941 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:04:09.941 00:04:09.941 libvfio-user 0.0.1 00:04:09.941 00:04:09.941 User defined options 00:04:09.941 buildtype : debug 00:04:09.941 default_library: shared 00:04:09.941 libdir : /usr/local/lib 00:04:09.941 00:04:09.941 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:10.509 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:04:10.509 [1/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:04:10.509 [2/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:04:10.509 [3/37] Compiling C object samples/lspci.p/lspci.c.o 00:04:10.509 [4/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:04:10.767 [5/37] Compiling C object samples/null.p/null.c.o 00:04:10.767 [6/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:04:10.767 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:04:10.767 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:04:10.767 [9/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:04:10.767 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:04:10.767 [11/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:04:10.767 [12/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:04:10.767 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:04:10.767 [14/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:04:10.767 [15/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:04:10.767 [16/37] Compiling C object test/unit_tests.p/mocks.c.o 00:04:10.767 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:04:10.767 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:04:10.767 [19/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:04:10.767 [20/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:04:10.767 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:04:10.767 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:04:10.767 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:04:10.767 [24/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:04:10.767 [25/37] Compiling C object samples/server.p/server.c.o 00:04:10.767 [26/37] Compiling C object samples/client.p/client.c.o 00:04:10.767 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:04:10.767 [28/37] Linking target samples/client 00:04:10.767 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:04:10.767 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:04:11.025 [31/37] Linking target test/unit_tests 00:04:11.025 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:04:11.025 [33/37] Linking target samples/lspci 00:04:11.025 [34/37] Linking target samples/server 00:04:11.025 [35/37] Linking target samples/gpio-pci-idio-16 00:04:11.025 [36/37] Linking target samples/shadow_ioeventfd_server 00:04:11.025 [37/37] Linking target samples/null 00:04:11.025 INFO: autodetecting backend as ninja 00:04:11.025 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:04:11.025 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:04:11.592 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:04:11.592 ninja: no work to do. 00:04:38.150 CC lib/ut_mock/mock.o 00:04:38.150 CC lib/log/log.o 00:04:38.150 CC lib/log/log_flags.o 00:04:38.150 CC lib/log/log_deprecated.o 00:04:38.150 CC lib/ut/ut.o 00:04:38.150 LIB libspdk_ut.a 00:04:38.150 LIB libspdk_ut_mock.a 00:04:38.150 LIB libspdk_log.a 00:04:38.150 SO libspdk_ut.so.2.0 00:04:38.150 SO libspdk_ut_mock.so.6.0 00:04:38.150 SO libspdk_log.so.7.0 00:04:38.409 SYMLINK libspdk_ut_mock.so 00:04:38.409 SYMLINK libspdk_ut.so 00:04:38.409 SYMLINK libspdk_log.so 00:04:38.668 CC lib/util/base64.o 00:04:38.668 CC lib/util/bit_array.o 00:04:38.668 CC lib/util/cpuset.o 00:04:38.668 CC lib/util/crc32.o 00:04:38.668 CC lib/util/crc16.o 00:04:38.668 CC lib/util/crc32c.o 00:04:38.668 CC lib/util/crc32_ieee.o 00:04:38.668 CC lib/util/crc64.o 00:04:38.668 CC lib/util/dif.o 00:04:38.668 CC lib/util/fd.o 00:04:38.668 CC lib/util/fd_group.o 00:04:38.668 CC lib/util/file.o 00:04:38.668 CC lib/util/iov.o 00:04:38.668 CC lib/util/hexlify.o 00:04:38.668 CC lib/util/math.o 00:04:38.668 CC lib/util/net.o 00:04:38.668 CC lib/util/pipe.o 00:04:38.668 CC lib/util/strerror_tls.o 00:04:38.668 CC lib/util/string.o 00:04:38.668 CC lib/ioat/ioat.o 00:04:38.668 CC lib/dma/dma.o 00:04:38.668 CC lib/util/uuid.o 00:04:38.668 CC lib/util/xor.o 00:04:38.668 CC lib/util/zipf.o 00:04:38.668 CC lib/util/md5.o 00:04:38.668 CXX lib/trace_parser/trace.o 00:04:38.928 CC lib/vfio_user/host/vfio_user_pci.o 00:04:38.928 CC lib/vfio_user/host/vfio_user.o 00:04:38.928 LIB libspdk_dma.a 00:04:38.928 SO libspdk_dma.so.5.0 00:04:38.928 SYMLINK libspdk_dma.so 00:04:38.928 LIB libspdk_ioat.a 00:04:38.928 SO libspdk_ioat.so.7.0 00:04:38.928 LIB libspdk_vfio_user.a 00:04:38.928 SYMLINK libspdk_ioat.so 00:04:38.928 SO libspdk_vfio_user.so.5.0 00:04:39.225 LIB libspdk_util.a 00:04:39.225 SYMLINK libspdk_vfio_user.so 00:04:39.225 SO libspdk_util.so.10.0 00:04:39.225 SYMLINK libspdk_util.so 00:04:39.484 CC lib/json/json_parse.o 00:04:39.484 CC lib/json/json_util.o 00:04:39.484 CC lib/json/json_write.o 00:04:39.484 CC lib/rdma_provider/common.o 00:04:39.484 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:39.484 CC lib/vmd/led.o 00:04:39.484 CC lib/vmd/vmd.o 00:04:39.484 CC lib/conf/conf.o 00:04:39.484 CC lib/idxd/idxd.o 00:04:39.484 CC lib/rdma_utils/rdma_utils.o 00:04:39.484 CC lib/idxd/idxd_user.o 00:04:39.484 CC lib/env_dpdk/env.o 00:04:39.484 CC lib/idxd/idxd_kernel.o 00:04:39.484 CC lib/env_dpdk/memory.o 00:04:39.484 CC lib/env_dpdk/pci.o 00:04:39.484 CC lib/env_dpdk/init.o 00:04:39.484 CC lib/env_dpdk/threads.o 00:04:39.484 CC lib/env_dpdk/pci_ioat.o 00:04:39.484 CC lib/env_dpdk/pci_virtio.o 00:04:39.484 CC lib/env_dpdk/pci_vmd.o 00:04:39.484 CC lib/env_dpdk/pci_idxd.o 00:04:39.484 CC lib/env_dpdk/pci_event.o 00:04:39.484 CC lib/env_dpdk/sigbus_handler.o 00:04:39.484 CC lib/env_dpdk/pci_dpdk.o 00:04:39.484 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:39.484 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:39.743 LIB libspdk_rdma_provider.a 00:04:39.743 SO libspdk_rdma_provider.so.6.0 00:04:39.743 LIB libspdk_conf.a 00:04:39.743 LIB libspdk_json.a 00:04:39.743 LIB libspdk_rdma_utils.a 00:04:39.743 SO libspdk_conf.so.6.0 00:04:39.743 SYMLINK libspdk_rdma_provider.so 00:04:40.003 SO 
libspdk_rdma_utils.so.1.0 00:04:40.003 SO libspdk_json.so.6.0 00:04:40.003 SYMLINK libspdk_conf.so 00:04:40.003 SYMLINK libspdk_rdma_utils.so 00:04:40.003 SYMLINK libspdk_json.so 00:04:40.003 LIB libspdk_idxd.a 00:04:40.003 LIB libspdk_vmd.a 00:04:40.003 SO libspdk_idxd.so.12.1 00:04:40.003 SO libspdk_vmd.so.6.0 00:04:40.263 SYMLINK libspdk_idxd.so 00:04:40.263 SYMLINK libspdk_vmd.so 00:04:40.263 CC lib/jsonrpc/jsonrpc_server.o 00:04:40.263 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:40.263 CC lib/jsonrpc/jsonrpc_client.o 00:04:40.263 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:40.263 LIB libspdk_trace_parser.a 00:04:40.263 SO libspdk_trace_parser.so.6.0 00:04:40.524 SYMLINK libspdk_trace_parser.so 00:04:40.524 LIB libspdk_jsonrpc.a 00:04:40.524 SO libspdk_jsonrpc.so.6.0 00:04:40.524 SYMLINK libspdk_jsonrpc.so 00:04:40.524 LIB libspdk_env_dpdk.a 00:04:40.785 SO libspdk_env_dpdk.so.15.0 00:04:40.785 SYMLINK libspdk_env_dpdk.so 00:04:40.785 CC lib/rpc/rpc.o 00:04:41.045 LIB libspdk_rpc.a 00:04:41.045 SO libspdk_rpc.so.6.0 00:04:41.045 SYMLINK libspdk_rpc.so 00:04:41.305 CC lib/notify/notify.o 00:04:41.305 CC lib/notify/notify_rpc.o 00:04:41.305 CC lib/keyring/keyring.o 00:04:41.305 CC lib/trace/trace.o 00:04:41.305 CC lib/trace/trace_flags.o 00:04:41.305 CC lib/keyring/keyring_rpc.o 00:04:41.305 CC lib/trace/trace_rpc.o 00:04:41.565 LIB libspdk_notify.a 00:04:41.565 SO libspdk_notify.so.6.0 00:04:41.565 LIB libspdk_keyring.a 00:04:41.565 LIB libspdk_trace.a 00:04:41.565 SO libspdk_keyring.so.2.0 00:04:41.565 SO libspdk_trace.so.11.0 00:04:41.565 SYMLINK libspdk_notify.so 00:04:41.825 SYMLINK libspdk_keyring.so 00:04:41.825 SYMLINK libspdk_trace.so 00:04:42.085 CC lib/thread/thread.o 00:04:42.085 CC lib/thread/iobuf.o 00:04:42.085 CC lib/sock/sock.o 00:04:42.085 CC lib/sock/sock_rpc.o 00:04:42.345 LIB libspdk_sock.a 00:04:42.345 SO libspdk_sock.so.10.0 00:04:42.345 SYMLINK libspdk_sock.so 00:04:42.914 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:42.914 CC lib/nvme/nvme_ctrlr.o 00:04:42.914 CC lib/nvme/nvme_fabric.o 00:04:42.914 CC lib/nvme/nvme_ns_cmd.o 00:04:42.914 CC lib/nvme/nvme_ns.o 00:04:42.914 CC lib/nvme/nvme_pcie_common.o 00:04:42.914 CC lib/nvme/nvme_pcie.o 00:04:42.914 CC lib/nvme/nvme_qpair.o 00:04:42.914 CC lib/nvme/nvme.o 00:04:42.914 CC lib/nvme/nvme_quirks.o 00:04:42.914 CC lib/nvme/nvme_transport.o 00:04:42.914 CC lib/nvme/nvme_discovery.o 00:04:42.915 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:42.915 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:42.915 CC lib/nvme/nvme_tcp.o 00:04:42.915 CC lib/nvme/nvme_opal.o 00:04:42.915 CC lib/nvme/nvme_io_msg.o 00:04:42.915 CC lib/nvme/nvme_poll_group.o 00:04:42.915 CC lib/nvme/nvme_zns.o 00:04:42.915 CC lib/nvme/nvme_stubs.o 00:04:42.915 CC lib/nvme/nvme_auth.o 00:04:42.915 CC lib/nvme/nvme_cuse.o 00:04:42.915 CC lib/nvme/nvme_vfio_user.o 00:04:42.915 CC lib/nvme/nvme_rdma.o 00:04:43.173 LIB libspdk_thread.a 00:04:43.173 SO libspdk_thread.so.10.1 00:04:43.173 SYMLINK libspdk_thread.so 00:04:43.431 CC lib/blob/blobstore.o 00:04:43.431 CC lib/blob/request.o 00:04:43.431 CC lib/blob/blob_bs_dev.o 00:04:43.431 CC lib/blob/zeroes.o 00:04:43.431 CC lib/virtio/virtio_vhost_user.o 00:04:43.431 CC lib/virtio/virtio.o 00:04:43.431 CC lib/accel/accel.o 00:04:43.431 CC lib/virtio/virtio_vfio_user.o 00:04:43.431 CC lib/accel/accel_rpc.o 00:04:43.431 CC lib/virtio/virtio_pci.o 00:04:43.431 CC lib/accel/accel_sw.o 00:04:43.431 CC lib/init/json_config.o 00:04:43.432 CC lib/init/subsystem.o 00:04:43.432 CC lib/init/rpc.o 00:04:43.432 CC lib/init/subsystem_rpc.o 00:04:43.432 
CC lib/fsdev/fsdev.o 00:04:43.432 CC lib/fsdev/fsdev_rpc.o 00:04:43.432 CC lib/fsdev/fsdev_io.o 00:04:43.432 CC lib/vfu_tgt/tgt_endpoint.o 00:04:43.432 CC lib/vfu_tgt/tgt_rpc.o 00:04:43.690 LIB libspdk_init.a 00:04:43.690 SO libspdk_init.so.6.0 00:04:43.690 LIB libspdk_virtio.a 00:04:43.690 LIB libspdk_vfu_tgt.a 00:04:43.950 SO libspdk_virtio.so.7.0 00:04:43.950 SO libspdk_vfu_tgt.so.3.0 00:04:43.950 SYMLINK libspdk_init.so 00:04:43.950 SYMLINK libspdk_virtio.so 00:04:43.950 SYMLINK libspdk_vfu_tgt.so 00:04:43.950 LIB libspdk_fsdev.a 00:04:43.950 SO libspdk_fsdev.so.1.0 00:04:44.209 CC lib/event/app.o 00:04:44.209 CC lib/event/reactor.o 00:04:44.209 CC lib/event/log_rpc.o 00:04:44.209 CC lib/event/scheduler_static.o 00:04:44.209 CC lib/event/app_rpc.o 00:04:44.209 SYMLINK libspdk_fsdev.so 00:04:44.209 LIB libspdk_accel.a 00:04:44.469 SO libspdk_accel.so.16.0 00:04:44.469 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:44.469 SYMLINK libspdk_accel.so 00:04:44.469 LIB libspdk_event.a 00:04:44.469 SO libspdk_event.so.14.0 00:04:44.469 LIB libspdk_nvme.a 00:04:44.469 SYMLINK libspdk_event.so 00:04:44.727 SO libspdk_nvme.so.14.0 00:04:44.727 CC lib/bdev/bdev.o 00:04:44.727 CC lib/bdev/bdev_rpc.o 00:04:44.727 CC lib/bdev/bdev_zone.o 00:04:44.727 CC lib/bdev/part.o 00:04:44.727 CC lib/bdev/scsi_nvme.o 00:04:44.727 SYMLINK libspdk_nvme.so 00:04:44.987 LIB libspdk_fuse_dispatcher.a 00:04:44.987 SO libspdk_fuse_dispatcher.so.1.0 00:04:44.987 SYMLINK libspdk_fuse_dispatcher.so 00:04:45.558 LIB libspdk_blob.a 00:04:45.558 SO libspdk_blob.so.11.0 00:04:45.817 SYMLINK libspdk_blob.so 00:04:46.077 CC lib/blobfs/blobfs.o 00:04:46.077 CC lib/lvol/lvol.o 00:04:46.077 CC lib/blobfs/tree.o 00:04:46.646 LIB libspdk_bdev.a 00:04:46.646 SO libspdk_bdev.so.16.0 00:04:46.646 LIB libspdk_blobfs.a 00:04:46.646 SYMLINK libspdk_bdev.so 00:04:46.646 SO libspdk_blobfs.so.10.0 00:04:46.646 LIB libspdk_lvol.a 00:04:46.646 SYMLINK libspdk_blobfs.so 00:04:46.646 SO libspdk_lvol.so.10.0 00:04:46.906 SYMLINK libspdk_lvol.so 00:04:46.906 CC lib/ublk/ublk.o 00:04:46.906 CC lib/ublk/ublk_rpc.o 00:04:46.906 CC lib/nvmf/ctrlr.o 00:04:46.906 CC lib/nvmf/ctrlr_discovery.o 00:04:46.906 CC lib/nvmf/ctrlr_bdev.o 00:04:46.906 CC lib/nvmf/subsystem.o 00:04:46.906 CC lib/nvmf/nvmf_rpc.o 00:04:46.906 CC lib/nvmf/nvmf.o 00:04:46.906 CC lib/nvmf/transport.o 00:04:46.906 CC lib/nvmf/tcp.o 00:04:46.906 CC lib/nvmf/stubs.o 00:04:46.906 CC lib/nvmf/mdns_server.o 00:04:46.906 CC lib/scsi/dev.o 00:04:46.906 CC lib/nvmf/vfio_user.o 00:04:46.906 CC lib/scsi/lun.o 00:04:46.906 CC lib/nbd/nbd.o 00:04:46.906 CC lib/nvmf/rdma.o 00:04:46.906 CC lib/scsi/port.o 00:04:46.906 CC lib/nvmf/auth.o 00:04:46.906 CC lib/nbd/nbd_rpc.o 00:04:46.906 CC lib/scsi/scsi.o 00:04:46.906 CC lib/scsi/scsi_bdev.o 00:04:46.906 CC lib/scsi/scsi_pr.o 00:04:46.906 CC lib/ftl/ftl_core.o 00:04:46.906 CC lib/scsi/scsi_rpc.o 00:04:46.906 CC lib/ftl/ftl_init.o 00:04:46.906 CC lib/scsi/task.o 00:04:46.906 CC lib/ftl/ftl_layout.o 00:04:46.906 CC lib/ftl/ftl_debug.o 00:04:46.906 CC lib/ftl/ftl_sb.o 00:04:46.906 CC lib/ftl/ftl_io.o 00:04:46.906 CC lib/ftl/ftl_l2p.o 00:04:46.906 CC lib/ftl/ftl_l2p_flat.o 00:04:46.906 CC lib/ftl/ftl_nv_cache.o 00:04:46.906 CC lib/ftl/ftl_band.o 00:04:46.906 CC lib/ftl/ftl_band_ops.o 00:04:46.906 CC lib/ftl/ftl_writer.o 00:04:46.906 CC lib/ftl/ftl_reloc.o 00:04:46.906 CC lib/ftl/ftl_rq.o 00:04:46.906 CC lib/ftl/ftl_l2p_cache.o 00:04:46.906 CC lib/ftl/ftl_p2l.o 00:04:46.906 CC lib/ftl/ftl_p2l_log.o 00:04:46.906 CC lib/ftl/mngt/ftl_mngt.o 00:04:46.906 CC 
lib/ftl/mngt/ftl_mngt_bdev.o 00:04:46.906 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:46.906 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:46.906 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:46.906 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:46.906 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:46.906 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:46.906 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:46.906 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:46.906 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:46.906 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:46.906 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:46.906 CC lib/ftl/utils/ftl_md.o 00:04:46.906 CC lib/ftl/utils/ftl_mempool.o 00:04:46.906 CC lib/ftl/utils/ftl_conf.o 00:04:46.906 CC lib/ftl/utils/ftl_bitmap.o 00:04:46.906 CC lib/ftl/utils/ftl_property.o 00:04:46.906 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:46.906 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:46.906 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:46.906 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:46.906 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:46.906 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:46.906 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:46.906 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:46.906 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:46.906 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:46.906 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:47.165 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:47.165 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:47.165 CC lib/ftl/base/ftl_base_dev.o 00:04:47.165 CC lib/ftl/base/ftl_base_bdev.o 00:04:47.165 CC lib/ftl/ftl_trace.o 00:04:47.733 LIB libspdk_nbd.a 00:04:47.733 SO libspdk_nbd.so.7.0 00:04:47.733 SYMLINK libspdk_nbd.so 00:04:47.733 LIB libspdk_scsi.a 00:04:47.733 SO libspdk_scsi.so.9.0 00:04:47.733 LIB libspdk_ublk.a 00:04:47.733 SO libspdk_ublk.so.3.0 00:04:47.733 SYMLINK libspdk_scsi.so 00:04:47.992 SYMLINK libspdk_ublk.so 00:04:47.993 LIB libspdk_ftl.a 00:04:48.252 CC lib/vhost/vhost_blk.o 00:04:48.252 CC lib/vhost/vhost.o 00:04:48.252 CC lib/vhost/vhost_rpc.o 00:04:48.252 CC lib/vhost/vhost_scsi.o 00:04:48.252 CC lib/vhost/rte_vhost_user.o 00:04:48.252 CC lib/iscsi/conn.o 00:04:48.252 CC lib/iscsi/init_grp.o 00:04:48.252 CC lib/iscsi/iscsi.o 00:04:48.252 CC lib/iscsi/param.o 00:04:48.252 CC lib/iscsi/portal_grp.o 00:04:48.252 CC lib/iscsi/tgt_node.o 00:04:48.252 CC lib/iscsi/iscsi_subsystem.o 00:04:48.252 CC lib/iscsi/iscsi_rpc.o 00:04:48.252 CC lib/iscsi/task.o 00:04:48.252 SO libspdk_ftl.so.9.0 00:04:48.512 SYMLINK libspdk_ftl.so 00:04:48.771 LIB libspdk_nvmf.a 00:04:49.031 SO libspdk_nvmf.so.19.0 00:04:49.031 LIB libspdk_vhost.a 00:04:49.031 SO libspdk_vhost.so.8.0 00:04:49.031 SYMLINK libspdk_vhost.so 00:04:49.031 SYMLINK libspdk_nvmf.so 00:04:49.031 LIB libspdk_iscsi.a 00:04:49.292 SO libspdk_iscsi.so.8.0 00:04:49.292 SYMLINK libspdk_iscsi.so 00:04:49.863 CC module/env_dpdk/env_dpdk_rpc.o 00:04:49.863 CC module/vfu_device/vfu_virtio.o 00:04:49.863 CC module/vfu_device/vfu_virtio_blk.o 00:04:49.863 CC module/vfu_device/vfu_virtio_scsi.o 00:04:49.863 CC module/vfu_device/vfu_virtio_rpc.o 00:04:49.863 CC module/vfu_device/vfu_virtio_fs.o 00:04:49.863 LIB libspdk_env_dpdk_rpc.a 00:04:49.863 CC module/keyring/file/keyring.o 00:04:49.863 CC module/keyring/file/keyring_rpc.o 00:04:49.863 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:49.863 CC module/blob/bdev/blob_bdev.o 00:04:49.863 CC module/accel/dsa/accel_dsa.o 00:04:49.863 CC module/accel/dsa/accel_dsa_rpc.o 00:04:49.863 CC module/accel/error/accel_error.o 00:04:49.863 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:49.863 CC 
module/accel/error/accel_error_rpc.o 00:04:49.863 CC module/accel/iaa/accel_iaa.o 00:04:49.863 CC module/keyring/linux/keyring.o 00:04:49.863 CC module/accel/ioat/accel_ioat.o 00:04:49.863 CC module/keyring/linux/keyring_rpc.o 00:04:49.863 CC module/accel/iaa/accel_iaa_rpc.o 00:04:49.863 CC module/accel/ioat/accel_ioat_rpc.o 00:04:49.863 CC module/sock/posix/posix.o 00:04:49.863 CC module/fsdev/aio/fsdev_aio.o 00:04:49.863 CC module/scheduler/gscheduler/gscheduler.o 00:04:49.863 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:49.863 CC module/fsdev/aio/linux_aio_mgr.o 00:04:49.863 SO libspdk_env_dpdk_rpc.so.6.0 00:04:50.122 SYMLINK libspdk_env_dpdk_rpc.so 00:04:50.122 LIB libspdk_keyring_file.a 00:04:50.122 LIB libspdk_keyring_linux.a 00:04:50.122 LIB libspdk_scheduler_gscheduler.a 00:04:50.122 LIB libspdk_scheduler_dpdk_governor.a 00:04:50.122 LIB libspdk_scheduler_dynamic.a 00:04:50.122 LIB libspdk_accel_ioat.a 00:04:50.122 SO libspdk_keyring_file.so.2.0 00:04:50.122 LIB libspdk_accel_error.a 00:04:50.122 SO libspdk_scheduler_gscheduler.so.4.0 00:04:50.122 SO libspdk_keyring_linux.so.1.0 00:04:50.122 LIB libspdk_accel_iaa.a 00:04:50.122 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:50.122 SO libspdk_scheduler_dynamic.so.4.0 00:04:50.123 SO libspdk_accel_ioat.so.6.0 00:04:50.123 SO libspdk_accel_error.so.2.0 00:04:50.123 SO libspdk_accel_iaa.so.3.0 00:04:50.123 LIB libspdk_blob_bdev.a 00:04:50.123 SYMLINK libspdk_keyring_file.so 00:04:50.123 SYMLINK libspdk_scheduler_gscheduler.so 00:04:50.123 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:50.123 SYMLINK libspdk_keyring_linux.so 00:04:50.381 SYMLINK libspdk_scheduler_dynamic.so 00:04:50.381 SO libspdk_blob_bdev.so.11.0 00:04:50.381 LIB libspdk_accel_dsa.a 00:04:50.381 SYMLINK libspdk_accel_error.so 00:04:50.381 SYMLINK libspdk_accel_ioat.so 00:04:50.381 SYMLINK libspdk_accel_iaa.so 00:04:50.381 SYMLINK libspdk_blob_bdev.so 00:04:50.382 SO libspdk_accel_dsa.so.5.0 00:04:50.382 LIB libspdk_vfu_device.a 00:04:50.382 SYMLINK libspdk_accel_dsa.so 00:04:50.382 SO libspdk_vfu_device.so.3.0 00:04:50.382 SYMLINK libspdk_vfu_device.so 00:04:50.640 LIB libspdk_fsdev_aio.a 00:04:50.640 SO libspdk_fsdev_aio.so.1.0 00:04:50.640 LIB libspdk_sock_posix.a 00:04:50.640 SYMLINK libspdk_fsdev_aio.so 00:04:50.640 SO libspdk_sock_posix.so.6.0 00:04:50.640 SYMLINK libspdk_sock_posix.so 00:04:50.640 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:50.640 CC module/bdev/delay/vbdev_delay.o 00:04:50.640 CC module/bdev/error/vbdev_error.o 00:04:50.640 CC module/bdev/gpt/vbdev_gpt.o 00:04:50.640 CC module/bdev/gpt/gpt.o 00:04:50.640 CC module/bdev/split/vbdev_split_rpc.o 00:04:50.641 CC module/bdev/error/vbdev_error_rpc.o 00:04:50.641 CC module/bdev/split/vbdev_split.o 00:04:50.641 CC module/bdev/aio/bdev_aio_rpc.o 00:04:50.641 CC module/bdev/aio/bdev_aio.o 00:04:50.641 CC module/blobfs/bdev/blobfs_bdev.o 00:04:50.641 CC module/bdev/ftl/bdev_ftl.o 00:04:50.641 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:50.641 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:50.641 CC module/bdev/iscsi/bdev_iscsi.o 00:04:50.641 CC module/bdev/raid/bdev_raid.o 00:04:50.641 CC module/bdev/raid/bdev_raid_rpc.o 00:04:50.641 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:50.641 CC module/bdev/null/bdev_null_rpc.o 00:04:50.641 CC module/bdev/null/bdev_null.o 00:04:50.641 CC module/bdev/raid/raid1.o 00:04:50.641 CC module/bdev/raid/bdev_raid_sb.o 00:04:50.641 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:50.641 CC module/bdev/lvol/vbdev_lvol.o 00:04:50.641 CC module/bdev/raid/raid0.o 00:04:50.641 CC 
module/bdev/raid/concat.o 00:04:50.641 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:50.641 CC module/bdev/nvme/bdev_nvme.o 00:04:50.641 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:50.641 CC module/bdev/malloc/bdev_malloc.o 00:04:50.641 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:50.641 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:50.641 CC module/bdev/nvme/nvme_rpc.o 00:04:50.641 CC module/bdev/nvme/bdev_mdns_client.o 00:04:50.899 CC module/bdev/nvme/vbdev_opal.o 00:04:50.899 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:50.899 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:50.899 CC module/bdev/passthru/vbdev_passthru.o 00:04:50.899 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:50.899 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:50.899 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:50.899 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:51.159 LIB libspdk_blobfs_bdev.a 00:04:51.159 LIB libspdk_bdev_gpt.a 00:04:51.159 SO libspdk_blobfs_bdev.so.6.0 00:04:51.159 LIB libspdk_bdev_error.a 00:04:51.159 LIB libspdk_bdev_null.a 00:04:51.159 LIB libspdk_bdev_split.a 00:04:51.159 SO libspdk_bdev_gpt.so.6.0 00:04:51.159 SO libspdk_bdev_error.so.6.0 00:04:51.159 SO libspdk_bdev_null.so.6.0 00:04:51.159 SO libspdk_bdev_split.so.6.0 00:04:51.159 SYMLINK libspdk_blobfs_bdev.so 00:04:51.159 LIB libspdk_bdev_ftl.a 00:04:51.159 LIB libspdk_bdev_passthru.a 00:04:51.159 SYMLINK libspdk_bdev_error.so 00:04:51.159 SYMLINK libspdk_bdev_gpt.so 00:04:51.159 SO libspdk_bdev_ftl.so.6.0 00:04:51.159 SYMLINK libspdk_bdev_null.so 00:04:51.159 LIB libspdk_bdev_aio.a 00:04:51.159 LIB libspdk_bdev_delay.a 00:04:51.159 SYMLINK libspdk_bdev_split.so 00:04:51.159 LIB libspdk_bdev_zone_block.a 00:04:51.159 SO libspdk_bdev_passthru.so.6.0 00:04:51.159 LIB libspdk_bdev_malloc.a 00:04:51.159 LIB libspdk_bdev_iscsi.a 00:04:51.159 SO libspdk_bdev_aio.so.6.0 00:04:51.159 SO libspdk_bdev_delay.so.6.0 00:04:51.159 SO libspdk_bdev_zone_block.so.6.0 00:04:51.159 SO libspdk_bdev_malloc.so.6.0 00:04:51.159 SYMLINK libspdk_bdev_ftl.so 00:04:51.159 SO libspdk_bdev_iscsi.so.6.0 00:04:51.159 SYMLINK libspdk_bdev_passthru.so 00:04:51.159 SYMLINK libspdk_bdev_zone_block.so 00:04:51.159 SYMLINK libspdk_bdev_aio.so 00:04:51.159 SYMLINK libspdk_bdev_delay.so 00:04:51.419 SYMLINK libspdk_bdev_malloc.so 00:04:51.419 SYMLINK libspdk_bdev_iscsi.so 00:04:51.419 LIB libspdk_bdev_lvol.a 00:04:51.419 LIB libspdk_bdev_virtio.a 00:04:51.419 SO libspdk_bdev_lvol.so.6.0 00:04:51.419 SO libspdk_bdev_virtio.so.6.0 00:04:51.419 SYMLINK libspdk_bdev_lvol.so 00:04:51.419 SYMLINK libspdk_bdev_virtio.so 00:04:51.678 LIB libspdk_bdev_raid.a 00:04:51.678 SO libspdk_bdev_raid.so.6.0 00:04:51.678 SYMLINK libspdk_bdev_raid.so 00:04:52.618 LIB libspdk_bdev_nvme.a 00:04:52.618 SO libspdk_bdev_nvme.so.7.0 00:04:52.618 SYMLINK libspdk_bdev_nvme.so 00:04:53.188 CC module/event/subsystems/iobuf/iobuf.o 00:04:53.188 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:53.188 CC module/event/subsystems/vmd/vmd.o 00:04:53.188 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:53.188 CC module/event/subsystems/fsdev/fsdev.o 00:04:53.188 CC module/event/subsystems/scheduler/scheduler.o 00:04:53.188 CC module/event/subsystems/sock/sock.o 00:04:53.188 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:53.188 CC module/event/subsystems/keyring/keyring.o 00:04:53.188 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:53.448 LIB libspdk_event_vhost_blk.a 00:04:53.448 LIB libspdk_event_vmd.a 00:04:53.448 LIB libspdk_event_vfu_tgt.a 00:04:53.448 LIB libspdk_event_keyring.a 
00:04:53.448 LIB libspdk_event_sock.a 00:04:53.448 LIB libspdk_event_fsdev.a 00:04:53.448 LIB libspdk_event_iobuf.a 00:04:53.448 LIB libspdk_event_scheduler.a 00:04:53.448 SO libspdk_event_vhost_blk.so.3.0 00:04:53.448 SO libspdk_event_keyring.so.1.0 00:04:53.448 SO libspdk_event_vmd.so.6.0 00:04:53.448 SO libspdk_event_vfu_tgt.so.3.0 00:04:53.448 SO libspdk_event_scheduler.so.4.0 00:04:53.448 SO libspdk_event_sock.so.5.0 00:04:53.448 SO libspdk_event_fsdev.so.1.0 00:04:53.448 SO libspdk_event_iobuf.so.3.0 00:04:53.448 SYMLINK libspdk_event_vhost_blk.so 00:04:53.448 SYMLINK libspdk_event_keyring.so 00:04:53.448 SYMLINK libspdk_event_vmd.so 00:04:53.448 SYMLINK libspdk_event_vfu_tgt.so 00:04:53.448 SYMLINK libspdk_event_scheduler.so 00:04:53.448 SYMLINK libspdk_event_sock.so 00:04:53.448 SYMLINK libspdk_event_fsdev.so 00:04:53.448 SYMLINK libspdk_event_iobuf.so 00:04:53.708 CC module/event/subsystems/accel/accel.o 00:04:53.968 LIB libspdk_event_accel.a 00:04:53.968 SO libspdk_event_accel.so.6.0 00:04:53.968 SYMLINK libspdk_event_accel.so 00:04:54.228 CC module/event/subsystems/bdev/bdev.o 00:04:54.487 LIB libspdk_event_bdev.a 00:04:54.487 SO libspdk_event_bdev.so.6.0 00:04:54.487 SYMLINK libspdk_event_bdev.so 00:04:55.057 CC module/event/subsystems/nbd/nbd.o 00:04:55.057 CC module/event/subsystems/scsi/scsi.o 00:04:55.057 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:55.057 CC module/event/subsystems/ublk/ublk.o 00:04:55.057 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:55.057 LIB libspdk_event_nbd.a 00:04:55.057 LIB libspdk_event_ublk.a 00:04:55.057 SO libspdk_event_nbd.so.6.0 00:04:55.057 LIB libspdk_event_scsi.a 00:04:55.057 SO libspdk_event_ublk.so.3.0 00:04:55.057 SO libspdk_event_scsi.so.6.0 00:04:55.057 LIB libspdk_event_nvmf.a 00:04:55.057 SYMLINK libspdk_event_nbd.so 00:04:55.057 SO libspdk_event_nvmf.so.6.0 00:04:55.057 SYMLINK libspdk_event_ublk.so 00:04:55.057 SYMLINK libspdk_event_scsi.so 00:04:55.317 SYMLINK libspdk_event_nvmf.so 00:04:55.577 CC module/event/subsystems/iscsi/iscsi.o 00:04:55.577 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:55.577 LIB libspdk_event_vhost_scsi.a 00:04:55.577 LIB libspdk_event_iscsi.a 00:04:55.577 SO libspdk_event_vhost_scsi.so.3.0 00:04:55.577 SO libspdk_event_iscsi.so.6.0 00:04:55.837 SYMLINK libspdk_event_vhost_scsi.so 00:04:55.837 SYMLINK libspdk_event_iscsi.so 00:04:55.837 SO libspdk.so.6.0 00:04:55.837 SYMLINK libspdk.so 00:04:56.420 CC test/rpc_client/rpc_client_test.o 00:04:56.420 CXX app/trace/trace.o 00:04:56.420 CC app/spdk_lspci/spdk_lspci.o 00:04:56.420 CC app/spdk_top/spdk_top.o 00:04:56.420 TEST_HEADER include/spdk/accel.h 00:04:56.420 TEST_HEADER include/spdk/accel_module.h 00:04:56.420 CC app/spdk_nvme_perf/perf.o 00:04:56.420 TEST_HEADER include/spdk/assert.h 00:04:56.420 TEST_HEADER include/spdk/bdev.h 00:04:56.420 TEST_HEADER include/spdk/barrier.h 00:04:56.420 TEST_HEADER include/spdk/base64.h 00:04:56.420 TEST_HEADER include/spdk/bdev_module.h 00:04:56.420 CC app/trace_record/trace_record.o 00:04:56.420 CC app/spdk_nvme_identify/identify.o 00:04:56.420 TEST_HEADER include/spdk/bdev_zone.h 00:04:56.420 CC app/spdk_nvme_discover/discovery_aer.o 00:04:56.420 TEST_HEADER include/spdk/bit_array.h 00:04:56.420 TEST_HEADER include/spdk/bit_pool.h 00:04:56.420 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:56.420 TEST_HEADER include/spdk/blob_bdev.h 00:04:56.420 TEST_HEADER include/spdk/blobfs.h 00:04:56.420 TEST_HEADER include/spdk/blob.h 00:04:56.420 TEST_HEADER include/spdk/conf.h 00:04:56.420 TEST_HEADER 
include/spdk/config.h 00:04:56.420 TEST_HEADER include/spdk/cpuset.h 00:04:56.420 TEST_HEADER include/spdk/crc16.h 00:04:56.420 TEST_HEADER include/spdk/crc64.h 00:04:56.420 TEST_HEADER include/spdk/crc32.h 00:04:56.420 TEST_HEADER include/spdk/dif.h 00:04:56.420 TEST_HEADER include/spdk/endian.h 00:04:56.420 TEST_HEADER include/spdk/dma.h 00:04:56.420 TEST_HEADER include/spdk/env.h 00:04:56.420 TEST_HEADER include/spdk/env_dpdk.h 00:04:56.420 TEST_HEADER include/spdk/event.h 00:04:56.420 TEST_HEADER include/spdk/fd_group.h 00:04:56.420 TEST_HEADER include/spdk/fsdev.h 00:04:56.420 TEST_HEADER include/spdk/fd.h 00:04:56.420 TEST_HEADER include/spdk/fsdev_module.h 00:04:56.420 TEST_HEADER include/spdk/file.h 00:04:56.420 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:56.420 TEST_HEADER include/spdk/ftl.h 00:04:56.420 TEST_HEADER include/spdk/gpt_spec.h 00:04:56.420 TEST_HEADER include/spdk/idxd.h 00:04:56.420 TEST_HEADER include/spdk/histogram_data.h 00:04:56.420 TEST_HEADER include/spdk/hexlify.h 00:04:56.420 TEST_HEADER include/spdk/idxd_spec.h 00:04:56.420 TEST_HEADER include/spdk/init.h 00:04:56.420 TEST_HEADER include/spdk/ioat.h 00:04:56.420 TEST_HEADER include/spdk/ioat_spec.h 00:04:56.420 TEST_HEADER include/spdk/iscsi_spec.h 00:04:56.420 TEST_HEADER include/spdk/json.h 00:04:56.420 TEST_HEADER include/spdk/keyring.h 00:04:56.420 TEST_HEADER include/spdk/jsonrpc.h 00:04:56.420 TEST_HEADER include/spdk/log.h 00:04:56.420 TEST_HEADER include/spdk/likely.h 00:04:56.420 TEST_HEADER include/spdk/keyring_module.h 00:04:56.420 TEST_HEADER include/spdk/lvol.h 00:04:56.420 TEST_HEADER include/spdk/memory.h 00:04:56.420 TEST_HEADER include/spdk/md5.h 00:04:56.420 TEST_HEADER include/spdk/mmio.h 00:04:56.420 TEST_HEADER include/spdk/nbd.h 00:04:56.420 TEST_HEADER include/spdk/net.h 00:04:56.420 TEST_HEADER include/spdk/notify.h 00:04:56.420 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:56.420 TEST_HEADER include/spdk/nvme.h 00:04:56.420 TEST_HEADER include/spdk/nvme_intel.h 00:04:56.420 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:56.420 TEST_HEADER include/spdk/nvme_spec.h 00:04:56.420 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:56.420 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:56.420 CC app/spdk_dd/spdk_dd.o 00:04:56.420 TEST_HEADER include/spdk/nvme_zns.h 00:04:56.420 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:56.420 CC app/nvmf_tgt/nvmf_main.o 00:04:56.420 TEST_HEADER include/spdk/nvmf.h 00:04:56.420 TEST_HEADER include/spdk/nvmf_spec.h 00:04:56.420 TEST_HEADER include/spdk/nvmf_transport.h 00:04:56.420 TEST_HEADER include/spdk/opal_spec.h 00:04:56.420 TEST_HEADER include/spdk/pci_ids.h 00:04:56.420 TEST_HEADER include/spdk/opal.h 00:04:56.420 TEST_HEADER include/spdk/pipe.h 00:04:56.420 TEST_HEADER include/spdk/queue.h 00:04:56.420 TEST_HEADER include/spdk/reduce.h 00:04:56.420 TEST_HEADER include/spdk/rpc.h 00:04:56.420 TEST_HEADER include/spdk/scheduler.h 00:04:56.420 TEST_HEADER include/spdk/scsi_spec.h 00:04:56.420 TEST_HEADER include/spdk/sock.h 00:04:56.420 TEST_HEADER include/spdk/stdinc.h 00:04:56.420 TEST_HEADER include/spdk/scsi.h 00:04:56.420 TEST_HEADER include/spdk/thread.h 00:04:56.420 TEST_HEADER include/spdk/string.h 00:04:56.420 CC app/iscsi_tgt/iscsi_tgt.o 00:04:56.420 TEST_HEADER include/spdk/trace.h 00:04:56.420 TEST_HEADER include/spdk/trace_parser.h 00:04:56.420 TEST_HEADER include/spdk/util.h 00:04:56.420 TEST_HEADER include/spdk/tree.h 00:04:56.420 TEST_HEADER include/spdk/ublk.h 00:04:56.420 TEST_HEADER include/spdk/uuid.h 00:04:56.420 TEST_HEADER 
include/spdk/version.h 00:04:56.420 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:56.420 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:56.420 TEST_HEADER include/spdk/vhost.h 00:04:56.420 TEST_HEADER include/spdk/vmd.h 00:04:56.420 TEST_HEADER include/spdk/xor.h 00:04:56.420 CXX test/cpp_headers/accel.o 00:04:56.420 TEST_HEADER include/spdk/zipf.h 00:04:56.420 CXX test/cpp_headers/accel_module.o 00:04:56.420 CXX test/cpp_headers/assert.o 00:04:56.420 CXX test/cpp_headers/barrier.o 00:04:56.420 CXX test/cpp_headers/bdev.o 00:04:56.420 CXX test/cpp_headers/bdev_zone.o 00:04:56.420 CXX test/cpp_headers/bdev_module.o 00:04:56.420 CXX test/cpp_headers/base64.o 00:04:56.420 CXX test/cpp_headers/bit_array.o 00:04:56.420 CXX test/cpp_headers/blob_bdev.o 00:04:56.420 CXX test/cpp_headers/bit_pool.o 00:04:56.420 CC app/spdk_tgt/spdk_tgt.o 00:04:56.420 CXX test/cpp_headers/blob.o 00:04:56.420 CXX test/cpp_headers/blobfs.o 00:04:56.420 CXX test/cpp_headers/blobfs_bdev.o 00:04:56.420 CXX test/cpp_headers/config.o 00:04:56.420 CXX test/cpp_headers/crc16.o 00:04:56.420 CXX test/cpp_headers/conf.o 00:04:56.420 CXX test/cpp_headers/cpuset.o 00:04:56.420 CXX test/cpp_headers/crc32.o 00:04:56.420 CXX test/cpp_headers/crc64.o 00:04:56.420 CXX test/cpp_headers/dif.o 00:04:56.420 CXX test/cpp_headers/dma.o 00:04:56.420 CXX test/cpp_headers/endian.o 00:04:56.420 CXX test/cpp_headers/env_dpdk.o 00:04:56.420 CXX test/cpp_headers/env.o 00:04:56.420 CXX test/cpp_headers/event.o 00:04:56.420 CXX test/cpp_headers/file.o 00:04:56.420 CXX test/cpp_headers/fd_group.o 00:04:56.420 CXX test/cpp_headers/fd.o 00:04:56.420 CXX test/cpp_headers/fsdev.o 00:04:56.420 CXX test/cpp_headers/fsdev_module.o 00:04:56.420 CXX test/cpp_headers/ftl.o 00:04:56.420 CXX test/cpp_headers/fuse_dispatcher.o 00:04:56.420 CXX test/cpp_headers/gpt_spec.o 00:04:56.420 CXX test/cpp_headers/hexlify.o 00:04:56.420 CXX test/cpp_headers/idxd.o 00:04:56.420 CXX test/cpp_headers/histogram_data.o 00:04:56.420 CXX test/cpp_headers/idxd_spec.o 00:04:56.420 CXX test/cpp_headers/init.o 00:04:56.420 CXX test/cpp_headers/ioat.o 00:04:56.420 CXX test/cpp_headers/iscsi_spec.o 00:04:56.420 CXX test/cpp_headers/ioat_spec.o 00:04:56.420 CXX test/cpp_headers/json.o 00:04:56.420 CXX test/cpp_headers/jsonrpc.o 00:04:56.420 CXX test/cpp_headers/keyring_module.o 00:04:56.420 CXX test/cpp_headers/keyring.o 00:04:56.420 CXX test/cpp_headers/likely.o 00:04:56.420 CXX test/cpp_headers/log.o 00:04:56.420 CXX test/cpp_headers/lvol.o 00:04:56.420 CXX test/cpp_headers/md5.o 00:04:56.420 CXX test/cpp_headers/memory.o 00:04:56.420 CXX test/cpp_headers/mmio.o 00:04:56.420 CXX test/cpp_headers/nbd.o 00:04:56.420 CXX test/cpp_headers/net.o 00:04:56.420 CXX test/cpp_headers/notify.o 00:04:56.420 CXX test/cpp_headers/nvme.o 00:04:56.420 CXX test/cpp_headers/nvme_intel.o 00:04:56.420 CXX test/cpp_headers/nvme_ocssd.o 00:04:56.420 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:56.420 CXX test/cpp_headers/nvme_spec.o 00:04:56.420 CXX test/cpp_headers/nvmf_cmd.o 00:04:56.420 CXX test/cpp_headers/nvme_zns.o 00:04:56.420 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:56.420 CXX test/cpp_headers/nvmf.o 00:04:56.420 CXX test/cpp_headers/nvmf_spec.o 00:04:56.420 CXX test/cpp_headers/nvmf_transport.o 00:04:56.420 CXX test/cpp_headers/opal.o 00:04:56.420 CC examples/util/zipf/zipf.o 00:04:56.420 CC test/app/histogram_perf/histogram_perf.o 00:04:56.420 CC test/app/stub/stub.o 00:04:56.420 CC examples/ioat/verify/verify.o 00:04:56.420 CC test/app/jsoncat/jsoncat.o 00:04:56.420 CC 
test/thread/poller_perf/poller_perf.o 00:04:56.420 CC test/dma/test_dma/test_dma.o 00:04:56.420 CC app/fio/nvme/fio_plugin.o 00:04:56.420 CC test/env/vtophys/vtophys.o 00:04:56.421 CC test/env/pci/pci_ut.o 00:04:56.421 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:56.421 CC test/env/memory/memory_ut.o 00:04:56.421 CC examples/ioat/perf/perf.o 00:04:56.421 CC test/app/bdev_svc/bdev_svc.o 00:04:56.689 CC app/fio/bdev/fio_plugin.o 00:04:56.689 LINK spdk_lspci 00:04:56.963 LINK nvmf_tgt 00:04:56.963 LINK interrupt_tgt 00:04:56.963 LINK spdk_nvme_discover 00:04:56.963 LINK rpc_client_test 00:04:56.963 CC test/env/mem_callbacks/mem_callbacks.o 00:04:56.963 LINK spdk_trace_record 00:04:56.963 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:56.963 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:56.963 LINK iscsi_tgt 00:04:56.963 LINK histogram_perf 00:04:56.963 CXX test/cpp_headers/opal_spec.o 00:04:56.963 CXX test/cpp_headers/pci_ids.o 00:04:56.963 CXX test/cpp_headers/pipe.o 00:04:56.963 CXX test/cpp_headers/queue.o 00:04:56.963 LINK poller_perf 00:04:56.963 CXX test/cpp_headers/reduce.o 00:04:56.963 CXX test/cpp_headers/rpc.o 00:04:56.963 CXX test/cpp_headers/scheduler.o 00:04:56.963 CXX test/cpp_headers/scsi.o 00:04:56.963 LINK spdk_tgt 00:04:56.963 CXX test/cpp_headers/scsi_spec.o 00:04:56.963 LINK vtophys 00:04:56.963 CXX test/cpp_headers/sock.o 00:04:56.963 CXX test/cpp_headers/stdinc.o 00:04:56.963 CXX test/cpp_headers/string.o 00:04:56.963 CXX test/cpp_headers/thread.o 00:04:56.963 LINK jsoncat 00:04:56.963 CXX test/cpp_headers/trace.o 00:04:56.963 CXX test/cpp_headers/trace_parser.o 00:04:56.963 CXX test/cpp_headers/tree.o 00:04:56.963 CXX test/cpp_headers/ublk.o 00:04:56.963 CXX test/cpp_headers/util.o 00:04:56.963 CXX test/cpp_headers/uuid.o 00:04:56.963 LINK zipf 00:04:56.963 CXX test/cpp_headers/version.o 00:04:56.963 CXX test/cpp_headers/vfio_user_pci.o 00:04:56.963 CXX test/cpp_headers/vfio_user_spec.o 00:04:56.963 LINK env_dpdk_post_init 00:04:57.222 CXX test/cpp_headers/vhost.o 00:04:57.222 CXX test/cpp_headers/vmd.o 00:04:57.222 CXX test/cpp_headers/xor.o 00:04:57.222 CXX test/cpp_headers/zipf.o 00:04:57.222 LINK stub 00:04:57.222 LINK verify 00:04:57.222 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:57.222 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:57.222 LINK bdev_svc 00:04:57.222 LINK ioat_perf 00:04:57.222 LINK spdk_trace 00:04:57.222 LINK spdk_dd 00:04:57.482 LINK pci_ut 00:04:57.482 LINK nvme_fuzz 00:04:57.482 CC test/event/event_perf/event_perf.o 00:04:57.482 LINK test_dma 00:04:57.482 CC test/event/reactor_perf/reactor_perf.o 00:04:57.482 CC test/event/reactor/reactor.o 00:04:57.482 CC test/event/app_repeat/app_repeat.o 00:04:57.482 CC examples/idxd/perf/perf.o 00:04:57.482 CC examples/sock/hello_world/hello_sock.o 00:04:57.482 CC examples/vmd/lsvmd/lsvmd.o 00:04:57.482 CC test/event/scheduler/scheduler.o 00:04:57.482 CC examples/vmd/led/led.o 00:04:57.741 CC examples/thread/thread/thread_ex.o 00:04:57.741 LINK spdk_nvme_perf 00:04:57.741 LINK mem_callbacks 00:04:57.741 LINK spdk_bdev 00:04:57.741 LINK vhost_fuzz 00:04:57.741 LINK spdk_nvme_identify 00:04:57.741 LINK spdk_nvme 00:04:57.741 CC app/vhost/vhost.o 00:04:57.741 LINK reactor_perf 00:04:57.741 LINK spdk_top 00:04:57.741 LINK event_perf 00:04:57.741 LINK reactor 00:04:57.741 LINK app_repeat 00:04:57.741 LINK lsvmd 00:04:57.741 LINK led 00:04:57.741 LINK scheduler 00:04:57.741 LINK hello_sock 00:04:57.741 LINK thread 00:04:58.000 LINK vhost 00:04:58.000 LINK idxd_perf 00:04:58.000 LINK memory_ut 
00:04:58.000 CC test/nvme/compliance/nvme_compliance.o 00:04:58.000 CC test/nvme/aer/aer.o 00:04:58.000 CC test/nvme/e2edp/nvme_dp.o 00:04:58.000 CC test/nvme/simple_copy/simple_copy.o 00:04:58.000 CC test/nvme/sgl/sgl.o 00:04:58.000 CC test/nvme/reserve/reserve.o 00:04:58.000 CC test/nvme/overhead/overhead.o 00:04:58.000 CC test/nvme/startup/startup.o 00:04:58.000 CC test/nvme/fdp/fdp.o 00:04:58.000 CC test/nvme/err_injection/err_injection.o 00:04:58.000 CC test/nvme/reset/reset.o 00:04:58.000 CC test/nvme/boot_partition/boot_partition.o 00:04:58.000 CC test/nvme/cuse/cuse.o 00:04:58.000 CC test/nvme/fused_ordering/fused_ordering.o 00:04:58.000 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:58.000 CC test/nvme/connect_stress/connect_stress.o 00:04:58.000 CC test/accel/dif/dif.o 00:04:58.000 CC test/blobfs/mkfs/mkfs.o 00:04:58.259 CC test/lvol/esnap/esnap.o 00:04:58.259 LINK startup 00:04:58.259 LINK boot_partition 00:04:58.259 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:58.259 LINK doorbell_aers 00:04:58.259 LINK err_injection 00:04:58.259 CC examples/nvme/hello_world/hello_world.o 00:04:58.259 CC examples/nvme/arbitration/arbitration.o 00:04:58.259 CC examples/nvme/reconnect/reconnect.o 00:04:58.259 CC examples/nvme/abort/abort.o 00:04:58.259 LINK connect_stress 00:04:58.259 LINK reserve 00:04:58.259 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:58.259 LINK simple_copy 00:04:58.259 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:58.259 CC examples/nvme/hotplug/hotplug.o 00:04:58.259 LINK fused_ordering 00:04:58.259 LINK sgl 00:04:58.259 LINK mkfs 00:04:58.259 LINK reset 00:04:58.259 LINK nvme_dp 00:04:58.259 LINK aer 00:04:58.259 LINK overhead 00:04:58.259 LINK nvme_compliance 00:04:58.259 CC examples/accel/perf/accel_perf.o 00:04:58.518 LINK fdp 00:04:58.518 CC examples/blob/hello_world/hello_blob.o 00:04:58.518 CC examples/blob/cli/blobcli.o 00:04:58.518 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:58.518 LINK cmb_copy 00:04:58.518 LINK pmr_persistence 00:04:58.518 LINK hello_world 00:04:58.518 LINK iscsi_fuzz 00:04:58.518 LINK hotplug 00:04:58.518 LINK arbitration 00:04:58.518 LINK reconnect 00:04:58.518 LINK hello_blob 00:04:58.518 LINK abort 00:04:58.778 LINK nvme_manage 00:04:58.778 LINK dif 00:04:58.778 LINK hello_fsdev 00:04:58.778 LINK accel_perf 00:04:58.778 LINK blobcli 00:04:59.038 LINK cuse 00:04:59.296 CC test/bdev/bdevio/bdevio.o 00:04:59.296 CC examples/bdev/hello_world/hello_bdev.o 00:04:59.296 CC examples/bdev/bdevperf/bdevperf.o 00:04:59.555 LINK hello_bdev 00:04:59.555 LINK bdevio 00:04:59.815 LINK bdevperf 00:05:00.383 CC examples/nvmf/nvmf/nvmf.o 00:05:00.643 LINK nvmf 00:05:01.581 LINK esnap 00:05:01.841 00:05:01.841 real 0m53.762s 00:05:01.841 user 6m46.479s 00:05:01.841 sys 2m45.080s 00:05:01.841 12:26:27 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:05:01.841 12:26:27 make -- common/autotest_common.sh@10 -- $ set +x 00:05:01.841 ************************************ 00:05:01.841 END TEST make 00:05:01.841 ************************************ 00:05:02.100 12:26:27 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:02.100 12:26:27 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:02.100 12:26:27 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:02.100 12:26:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:02.100 12:26:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:05:02.100 12:26:27 -- pm/common@44 -- $ 
pid=56545 00:05:02.100 12:26:27 -- pm/common@50 -- $ kill -TERM 56545 00:05:02.100 12:26:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:02.100 12:26:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:05:02.100 12:26:27 -- pm/common@44 -- $ pid=56546 00:05:02.100 12:26:27 -- pm/common@50 -- $ kill -TERM 56546 00:05:02.100 12:26:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:02.100 12:26:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:05:02.100 12:26:27 -- pm/common@44 -- $ pid=56548 00:05:02.100 12:26:27 -- pm/common@50 -- $ kill -TERM 56548 00:05:02.100 12:26:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:02.100 12:26:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:05:02.100 12:26:27 -- pm/common@44 -- $ pid=56567 00:05:02.100 12:26:27 -- pm/common@50 -- $ sudo -E kill -TERM 56567 00:05:02.100 12:26:28 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:02.101 12:26:28 -- common/autotest_common.sh@1681 -- # lcov --version 00:05:02.101 12:26:28 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:02.101 12:26:28 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:02.101 12:26:28 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:02.101 12:26:28 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:02.101 12:26:28 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:02.101 12:26:28 -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.101 12:26:28 -- scripts/common.sh@336 -- # read -ra ver1 00:05:02.101 12:26:28 -- scripts/common.sh@337 -- # IFS=.-: 00:05:02.101 12:26:28 -- scripts/common.sh@337 -- # read -ra ver2 00:05:02.101 12:26:28 -- scripts/common.sh@338 -- # local 'op=<' 00:05:02.101 12:26:28 -- scripts/common.sh@340 -- # ver1_l=2 00:05:02.101 12:26:28 -- scripts/common.sh@341 -- # ver2_l=1 00:05:02.101 12:26:28 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:02.101 12:26:28 -- scripts/common.sh@344 -- # case "$op" in 00:05:02.101 12:26:28 -- scripts/common.sh@345 -- # : 1 00:05:02.101 12:26:28 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:02.101 12:26:28 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:02.101 12:26:28 -- scripts/common.sh@365 -- # decimal 1 00:05:02.101 12:26:28 -- scripts/common.sh@353 -- # local d=1 00:05:02.101 12:26:28 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.101 12:26:28 -- scripts/common.sh@355 -- # echo 1 00:05:02.101 12:26:28 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:02.101 12:26:28 -- scripts/common.sh@366 -- # decimal 2 00:05:02.101 12:26:28 -- scripts/common.sh@353 -- # local d=2 00:05:02.101 12:26:28 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.101 12:26:28 -- scripts/common.sh@355 -- # echo 2 00:05:02.101 12:26:28 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:02.101 12:26:28 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:02.101 12:26:28 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:02.101 12:26:28 -- scripts/common.sh@368 -- # return 0 00:05:02.101 12:26:28 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.101 12:26:28 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:02.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.101 --rc genhtml_branch_coverage=1 00:05:02.101 --rc genhtml_function_coverage=1 00:05:02.101 --rc genhtml_legend=1 00:05:02.101 --rc geninfo_all_blocks=1 00:05:02.101 --rc geninfo_unexecuted_blocks=1 00:05:02.101 00:05:02.101 ' 00:05:02.101 12:26:28 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:02.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.101 --rc genhtml_branch_coverage=1 00:05:02.101 --rc genhtml_function_coverage=1 00:05:02.101 --rc genhtml_legend=1 00:05:02.101 --rc geninfo_all_blocks=1 00:05:02.101 --rc geninfo_unexecuted_blocks=1 00:05:02.101 00:05:02.101 ' 00:05:02.101 12:26:28 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:02.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.101 --rc genhtml_branch_coverage=1 00:05:02.101 --rc genhtml_function_coverage=1 00:05:02.101 --rc genhtml_legend=1 00:05:02.101 --rc geninfo_all_blocks=1 00:05:02.101 --rc geninfo_unexecuted_blocks=1 00:05:02.101 00:05:02.101 ' 00:05:02.101 12:26:28 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:02.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.101 --rc genhtml_branch_coverage=1 00:05:02.101 --rc genhtml_function_coverage=1 00:05:02.101 --rc genhtml_legend=1 00:05:02.101 --rc geninfo_all_blocks=1 00:05:02.101 --rc geninfo_unexecuted_blocks=1 00:05:02.101 00:05:02.101 ' 00:05:02.101 12:26:28 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:02.101 12:26:28 -- nvmf/common.sh@7 -- # uname -s 00:05:02.101 12:26:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:02.101 12:26:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:02.101 12:26:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:02.101 12:26:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:02.101 12:26:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:02.101 12:26:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:02.101 12:26:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:02.101 12:26:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:02.101 12:26:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:02.101 12:26:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:02.360 12:26:28 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:05:02.360 12:26:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:05:02.360 12:26:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:02.360 12:26:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:02.360 12:26:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:02.360 12:26:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:02.360 12:26:28 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:02.360 12:26:28 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:02.360 12:26:28 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:02.360 12:26:28 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:02.360 12:26:28 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:02.360 12:26:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.360 12:26:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.360 12:26:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.360 12:26:28 -- paths/export.sh@5 -- # export PATH 00:05:02.361 12:26:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.361 12:26:28 -- nvmf/common.sh@51 -- # : 0 00:05:02.361 12:26:28 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:02.361 12:26:28 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:02.361 12:26:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:02.361 12:26:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:02.361 12:26:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:02.361 12:26:28 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:02.361 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:02.361 12:26:28 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:02.361 12:26:28 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:02.361 12:26:28 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:02.361 12:26:28 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:02.361 12:26:28 -- spdk/autotest.sh@32 -- # uname -s 00:05:02.361 12:26:28 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:02.361 12:26:28 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:02.361 12:26:28 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
00:05:02.361 12:26:28 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:05:02.361 12:26:28 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:05:02.361 12:26:28 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:02.361 12:26:28 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:02.361 12:26:28 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:02.361 12:26:28 -- spdk/autotest.sh@48 -- # udevadm_pid=134998 00:05:02.361 12:26:28 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:02.361 12:26:28 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:02.361 12:26:28 -- pm/common@17 -- # local monitor 00:05:02.361 12:26:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:02.361 12:26:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:02.361 12:26:28 -- pm/common@21 -- # date +%s 00:05:02.361 12:26:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:02.361 12:26:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:02.361 12:26:28 -- pm/common@21 -- # date +%s 00:05:02.361 12:26:28 -- pm/common@25 -- # sleep 1 00:05:02.361 12:26:28 -- pm/common@21 -- # date +%s 00:05:02.361 12:26:28 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734348388 00:05:02.361 12:26:28 -- pm/common@21 -- # date +%s 00:05:02.361 12:26:28 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734348388 00:05:02.361 12:26:28 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734348388 00:05:02.361 12:26:28 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734348388 00:05:02.361 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734348388_collect-cpu-load.pm.log 00:05:02.361 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734348388_collect-vmstat.pm.log 00:05:02.361 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734348388_collect-cpu-temp.pm.log 00:05:02.361 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734348388_collect-bmc-pm.bmc.pm.log 00:05:03.300 12:26:29 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:03.300 12:26:29 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:03.300 12:26:29 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:03.300 12:26:29 -- common/autotest_common.sh@10 -- # set +x 00:05:03.300 12:26:29 -- spdk/autotest.sh@59 -- # create_test_list 00:05:03.300 12:26:29 -- common/autotest_common.sh@748 -- # xtrace_disable 00:05:03.300 12:26:29 -- common/autotest_common.sh@10 -- # set +x 00:05:03.300 12:26:29 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:05:03.300 12:26:29 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:03.300 12:26:29 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:03.300 12:26:29 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:05:03.300 12:26:29 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:03.300 12:26:29 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:03.300 12:26:29 -- common/autotest_common.sh@1455 -- # uname 00:05:03.300 12:26:29 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:05:03.300 12:26:29 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:03.300 12:26:29 -- common/autotest_common.sh@1475 -- # uname 00:05:03.300 12:26:29 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:05:03.300 12:26:29 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:03.300 12:26:29 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:03.559 lcov: LCOV version 1.15 00:05:03.559 12:26:29 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:05:21.654 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:21.654 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:05:28.222 12:26:53 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:28.222 12:26:53 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:28.222 12:26:53 -- common/autotest_common.sh@10 -- # set +x 00:05:28.222 12:26:53 -- spdk/autotest.sh@78 -- # rm -f 00:05:28.222 12:26:53 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:31.517 0000:5f:00.0 (1b96 2600): Already using the nvme driver 00:05:31.517 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:05:31.517 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:05:31.517 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:05:31.517 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:05:31.517 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:05:31.517 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:05:31.517 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:05:31.517 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:05:31.517 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:05:31.517 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:05:31.517 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:05:31.517 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:05:31.517 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:05:31.517 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:05:31.517 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:05:31.517 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:05:31.517 0000:80:04.0 (8086 
2021): Already using the ioatdma driver 00:05:31.517 12:26:57 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:31.517 12:26:57 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:31.517 12:26:57 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:31.517 12:26:57 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:31.517 12:26:57 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:31.517 12:26:57 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:31.517 12:26:57 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:31.517 12:26:57 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:31.517 12:26:57 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:31.517 12:26:57 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:31.517 12:26:57 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:05:31.517 12:26:57 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:05:31.517 12:26:57 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:31.517 12:26:57 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:31.517 12:26:57 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:31.517 12:26:57 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:05:31.517 12:26:57 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:05:31.517 12:26:57 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:31.517 12:26:57 -- common/autotest_common.sh@1651 -- # [[ host-managed != none ]] 00:05:31.517 12:26:57 -- common/autotest_common.sh@1660 -- # zoned_devs["${nvme##*/}"]=0000:5f:00.0 00:05:31.517 12:26:57 -- spdk/autotest.sh@85 -- # (( 1 > 0 )) 00:05:31.517 12:26:57 -- spdk/autotest.sh@90 -- # export PCI_BLOCKED=0000:5f:00.0 00:05:31.517 12:26:57 -- spdk/autotest.sh@90 -- # PCI_BLOCKED=0000:5f:00.0 00:05:31.517 12:26:57 -- spdk/autotest.sh@91 -- # export PCI_ZONED=0000:5f:00.0 00:05:31.517 12:26:57 -- spdk/autotest.sh@91 -- # PCI_ZONED=0000:5f:00.0 00:05:31.517 12:26:57 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:31.517 12:26:57 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:31.517 12:26:57 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:31.517 12:26:57 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:31.517 12:26:57 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:31.517 No valid GPT data, bailing 00:05:31.517 12:26:57 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:31.517 12:26:57 -- scripts/common.sh@394 -- # pt= 00:05:31.517 12:26:57 -- scripts/common.sh@395 -- # return 1 00:05:31.517 12:26:57 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:31.517 1+0 records in 00:05:31.517 1+0 records out 00:05:31.518 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00196006 s, 535 MB/s 00:05:31.518 12:26:57 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:31.518 12:26:57 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:31.518 12:26:57 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:31.518 12:26:57 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:31.518 12:26:57 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:31.518 No valid GPT data, bailing 00:05:31.518 12:26:57 -- 
scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:31.518 12:26:57 -- scripts/common.sh@394 -- # pt= 00:05:31.518 12:26:57 -- scripts/common.sh@395 -- # return 1 00:05:31.518 12:26:57 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:31.518 1+0 records in 00:05:31.518 1+0 records out 00:05:31.518 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00532121 s, 197 MB/s 00:05:31.518 12:26:57 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:31.518 12:26:57 -- spdk/autotest.sh@99 -- # [[ -z 0000:5f:00.0 ]] 00:05:31.518 12:26:57 -- spdk/autotest.sh@99 -- # continue 00:05:31.518 12:26:57 -- spdk/autotest.sh@105 -- # sync 00:05:31.518 12:26:57 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:31.518 12:26:57 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:31.518 12:26:57 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:36.798 12:27:02 -- spdk/autotest.sh@111 -- # uname -s 00:05:36.798 12:27:02 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:36.798 12:27:02 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:36.798 12:27:02 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:40.093 Hugepages 00:05:40.093 node hugesize free / total 00:05:40.093 node0 1048576kB 0 / 0 00:05:40.093 node0 2048kB 0 / 0 00:05:40.093 node1 1048576kB 0 / 0 00:05:40.093 node1 2048kB 0 / 0 00:05:40.093 00:05:40.093 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:40.093 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:05:40.093 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:05:40.093 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:05:40.093 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:05:40.093 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:05:40.093 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:05:40.093 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:05:40.093 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:05:40.093 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:05:40.093 NVMe 0000:5f:00.0 1b96 2600 0 nvme nvme1 nvme1n1 nvme1n2 00:05:40.093 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:05:40.093 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:05:40.093 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:05:40.093 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:05:40.093 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:05:40.093 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:05:40.093 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:05:40.093 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:05:40.093 12:27:05 -- spdk/autotest.sh@117 -- # uname -s 00:05:40.093 12:27:05 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:40.093 12:27:05 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:40.093 12:27:05 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:42.633 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:05:42.892 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:42.892 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:42.892 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:42.892 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:42.892 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:42.892 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:42.892 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:42.892 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:42.892 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:42.892 0000:80:04.6 
(8086 2021): ioatdma -> vfio-pci 00:05:42.892 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:42.892 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:43.152 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:43.152 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:43.152 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:43.152 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:44.092 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:05:44.092 12:27:09 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:45.031 12:27:10 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:45.031 12:27:10 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:45.031 12:27:10 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:45.031 12:27:10 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:45.031 12:27:10 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:45.031 12:27:10 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:45.031 12:27:10 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:45.031 12:27:10 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:45.031 12:27:10 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:45.031 12:27:11 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:05:45.031 12:27:11 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:05:45.031 12:27:11 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:47.572 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:05:47.832 Waiting for block devices as requested 00:05:48.092 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:05:48.092 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:48.351 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:48.351 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:48.351 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:48.351 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:48.611 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:48.611 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:48.611 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:48.871 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:48.871 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:48.871 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:48.871 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:49.130 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:49.130 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:49.130 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:49.389 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:49.389 12:27:15 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:49.389 12:27:15 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:05:49.389 12:27:15 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:49.389 12:27:15 -- common/autotest_common.sh@1485 -- # grep 0000:5e:00.0/nvme/nvme 00:05:49.389 12:27:15 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:05:49.389 12:27:15 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:05:49.389 12:27:15 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:05:49.389 12:27:15 
-- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:49.389 12:27:15 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:49.389 12:27:15 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:49.389 12:27:15 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:49.389 12:27:15 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:49.389 12:27:15 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:49.389 12:27:15 -- common/autotest_common.sh@1529 -- # oacs=' 0xf' 00:05:49.389 12:27:15 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:49.389 12:27:15 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:49.389 12:27:15 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:05:49.389 12:27:15 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:49.389 12:27:15 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:49.389 12:27:15 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:49.389 12:27:15 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:49.389 12:27:15 -- common/autotest_common.sh@1541 -- # continue 00:05:49.389 12:27:15 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:49.389 12:27:15 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:49.389 12:27:15 -- common/autotest_common.sh@10 -- # set +x 00:05:49.389 12:27:15 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:49.389 12:27:15 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:49.389 12:27:15 -- common/autotest_common.sh@10 -- # set +x 00:05:49.389 12:27:15 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:51.927 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:05:52.497 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:52.497 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:52.497 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:52.497 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:52.497 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:52.497 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:52.497 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:52.497 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:52.497 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:52.497 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:52.497 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:52.497 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:52.497 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:52.497 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:52.756 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:52.756 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:53.325 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:05:53.586 12:27:19 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:53.586 12:27:19 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:53.586 12:27:19 -- common/autotest_common.sh@10 -- # set +x 00:05:53.586 12:27:19 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:53.586 12:27:19 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:53.586 12:27:19 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:53.586 12:27:19 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:53.586 12:27:19 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:53.586 12:27:19 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:53.586 12:27:19 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:53.586 12:27:19 -- 
common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:53.586 12:27:19 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:53.586 12:27:19 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:53.586 12:27:19 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:53.586 12:27:19 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:53.586 12:27:19 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:53.586 12:27:19 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:05:53.586 12:27:19 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:05:53.586 12:27:19 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:53.586 12:27:19 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:05:53.586 12:27:19 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:05:53.586 12:27:19 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:53.586 12:27:19 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:05:53.586 12:27:19 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:05:53.586 12:27:19 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:5e:00.0 00:05:53.586 12:27:19 -- common/autotest_common.sh@1577 -- # [[ -z 0000:5e:00.0 ]] 00:05:53.586 12:27:19 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=150044 00:05:53.586 12:27:19 -- common/autotest_common.sh@1583 -- # waitforlisten 150044 00:05:53.586 12:27:19 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:53.586 12:27:19 -- common/autotest_common.sh@831 -- # '[' -z 150044 ']' 00:05:53.586 12:27:19 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.586 12:27:19 -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:53.586 12:27:19 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.586 12:27:19 -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:53.586 12:27:19 -- common/autotest_common.sh@10 -- # set +x 00:05:53.845 [2024-12-16 12:27:19.664945] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
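The get_nvme_bdfs helper traced above builds its controller list by piping scripts/gen_nvme.sh output through jq. A minimal standalone sketch of the same lookup, runnable from an SPDK checkout (the rootdir value is this job's path; jq must be installed):

    # List NVMe PCI addresses the way get_nvme_bdfs does in the trace above.
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    printf '%s\n' "${bdfs[@]}"    # here a single entry: 0000:5e:00.0

Only 0000:5e:00.0 shows up because the zoned controller at 0000:5f:00.0 was exported as PCI_BLOCKED earlier in the run.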
00:05:53.845 [2024-12-16 12:27:19.664995] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150044 ] 00:05:53.845 [2024-12-16 12:27:19.735548] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.845 [2024-12-16 12:27:19.774946] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.104 12:27:19 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:54.105 12:27:19 -- common/autotest_common.sh@864 -- # return 0 00:05:54.105 12:27:19 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:05:54.105 12:27:19 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:05:54.105 12:27:19 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:05:57.395 nvme0n1 00:05:57.395 12:27:22 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:57.395 [2024-12-16 12:27:23.168934] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:57.395 [2024-12-16 12:27:23.168965] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:57.395 request: 00:05:57.395 { 00:05:57.395 "nvme_ctrlr_name": "nvme0", 00:05:57.395 "password": "test", 00:05:57.395 "method": "bdev_nvme_opal_revert", 00:05:57.395 "req_id": 1 00:05:57.395 } 00:05:57.395 Got JSON-RPC error response 00:05:57.395 response: 00:05:57.395 { 00:05:57.395 "code": -32603, 00:05:57.395 "message": "Internal error" 00:05:57.395 } 00:05:57.395 12:27:23 -- common/autotest_common.sh@1589 -- # true 00:05:57.395 12:27:23 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:05:57.395 12:27:23 -- common/autotest_common.sh@1593 -- # killprocess 150044 00:05:57.395 12:27:23 -- common/autotest_common.sh@950 -- # '[' -z 150044 ']' 00:05:57.395 12:27:23 -- common/autotest_common.sh@954 -- # kill -0 150044 00:05:57.395 12:27:23 -- common/autotest_common.sh@955 -- # uname 00:05:57.395 12:27:23 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:57.395 12:27:23 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 150044 00:05:57.395 12:27:23 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:57.395 12:27:23 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:57.395 12:27:23 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 150044' 00:05:57.395 killing process with pid 150044 00:05:57.395 12:27:23 -- common/autotest_common.sh@969 -- # kill 150044 00:05:57.395 12:27:23 -- common/autotest_common.sh@974 -- # wait 150044 00:05:59.299 12:27:24 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:59.299 12:27:24 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:59.299 12:27:24 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:59.299 12:27:24 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:59.299 12:27:24 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:59.299 12:27:24 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:59.299 12:27:24 -- common/autotest_common.sh@10 -- # set +x 00:05:59.299 12:27:24 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:59.299 12:27:24 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 
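The run_test wrapper invoked here produces the argument check, timing, and START/END banners visible below. A reduced sketch of its shape (the real helper lives in test/common/autotest_common.sh and also records timing data; this is an illustrative simplification):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        "$@"                      # run the suite script, e.g. test/env/env.sh
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }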
00:05:59.299 12:27:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:59.299 12:27:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.299 12:27:24 -- common/autotest_common.sh@10 -- # set +x 00:05:59.299 ************************************ 00:05:59.299 START TEST env 00:05:59.299 ************************************ 00:05:59.299 12:27:24 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:59.299 * Looking for test storage... 00:05:59.299 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:59.299 12:27:25 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:59.299 12:27:25 env -- common/autotest_common.sh@1681 -- # lcov --version 00:05:59.299 12:27:25 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:59.299 12:27:25 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:59.299 12:27:25 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:59.299 12:27:25 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:59.299 12:27:25 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:59.299 12:27:25 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:59.299 12:27:25 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:59.299 12:27:25 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:59.299 12:27:25 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:59.299 12:27:25 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:59.299 12:27:25 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:59.299 12:27:25 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:59.299 12:27:25 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:59.299 12:27:25 env -- scripts/common.sh@344 -- # case "$op" in 00:05:59.299 12:27:25 env -- scripts/common.sh@345 -- # : 1 00:05:59.299 12:27:25 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:59.299 12:27:25 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:59.299 12:27:25 env -- scripts/common.sh@365 -- # decimal 1 00:05:59.299 12:27:25 env -- scripts/common.sh@353 -- # local d=1 00:05:59.299 12:27:25 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:59.299 12:27:25 env -- scripts/common.sh@355 -- # echo 1 00:05:59.299 12:27:25 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:59.299 12:27:25 env -- scripts/common.sh@366 -- # decimal 2 00:05:59.299 12:27:25 env -- scripts/common.sh@353 -- # local d=2 00:05:59.299 12:27:25 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:59.299 12:27:25 env -- scripts/common.sh@355 -- # echo 2 00:05:59.299 12:27:25 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:59.299 12:27:25 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:59.299 12:27:25 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:59.299 12:27:25 env -- scripts/common.sh@368 -- # return 0 00:05:59.299 12:27:25 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:59.299 12:27:25 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:59.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.299 --rc genhtml_branch_coverage=1 00:05:59.299 --rc genhtml_function_coverage=1 00:05:59.299 --rc genhtml_legend=1 00:05:59.299 --rc geninfo_all_blocks=1 00:05:59.299 --rc geninfo_unexecuted_blocks=1 00:05:59.299 00:05:59.299 ' 00:05:59.299 12:27:25 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:59.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.299 --rc genhtml_branch_coverage=1 00:05:59.299 --rc genhtml_function_coverage=1 00:05:59.299 --rc genhtml_legend=1 00:05:59.299 --rc geninfo_all_blocks=1 00:05:59.299 --rc geninfo_unexecuted_blocks=1 00:05:59.299 00:05:59.299 ' 00:05:59.299 12:27:25 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:59.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.299 --rc genhtml_branch_coverage=1 00:05:59.299 --rc genhtml_function_coverage=1 00:05:59.299 --rc genhtml_legend=1 00:05:59.299 --rc geninfo_all_blocks=1 00:05:59.299 --rc geninfo_unexecuted_blocks=1 00:05:59.299 00:05:59.299 ' 00:05:59.299 12:27:25 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:59.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.299 --rc genhtml_branch_coverage=1 00:05:59.299 --rc genhtml_function_coverage=1 00:05:59.299 --rc genhtml_legend=1 00:05:59.299 --rc geninfo_all_blocks=1 00:05:59.299 --rc geninfo_unexecuted_blocks=1 00:05:59.299 00:05:59.299 ' 00:05:59.299 12:27:25 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:59.300 12:27:25 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:59.300 12:27:25 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.300 12:27:25 env -- common/autotest_common.sh@10 -- # set +x 00:05:59.300 ************************************ 00:05:59.300 START TEST env_memory 00:05:59.300 ************************************ 00:05:59.300 12:27:25 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:59.300 00:05:59.300 00:05:59.300 CUnit - A unit testing framework for C - Version 2.1-3 00:05:59.300 http://cunit.sourceforge.net/ 00:05:59.300 00:05:59.300 00:05:59.300 Suite: memory 00:05:59.300 Test: alloc and free memory map ...[2024-12-16 12:27:25.149669] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:59.300 passed 00:05:59.300 Test: mem map translation ...[2024-12-16 12:27:25.169122] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:59.300 [2024-12-16 12:27:25.169137] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:59.300 [2024-12-16 12:27:25.169173] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:59.300 [2024-12-16 12:27:25.169179] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:59.300 passed 00:05:59.300 Test: mem map registration ...[2024-12-16 12:27:25.210404] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:59.300 [2024-12-16 12:27:25.210420] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:59.300 passed 00:05:59.300 Test: mem map adjacent registrations ...passed 00:05:59.300 00:05:59.300 Run Summary: Type Total Ran Passed Failed Inactive 00:05:59.300 suites 1 1 n/a 0 0 00:05:59.300 tests 4 4 4 0 0 00:05:59.300 asserts 152 152 152 0 n/a 00:05:59.300 00:05:59.300 Elapsed time = 0.134 seconds 00:05:59.300 00:05:59.300 real 0m0.143s 00:05:59.300 user 0m0.133s 00:05:59.300 sys 0m0.009s 00:05:59.300 12:27:25 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.300 12:27:25 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:59.300 ************************************ 00:05:59.300 END TEST env_memory 00:05:59.300 ************************************ 00:05:59.300 12:27:25 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:59.300 12:27:25 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:59.300 12:27:25 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.300 12:27:25 env -- common/autotest_common.sh@10 -- # set +x 00:05:59.300 ************************************ 00:05:59.300 START TEST env_vtophys 00:05:59.300 ************************************ 00:05:59.300 12:27:25 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:59.300 EAL: lib.eal log level changed from notice to debug 00:05:59.300 EAL: Detected lcore 0 as core 0 on socket 0 00:05:59.300 EAL: Detected lcore 1 as core 1 on socket 0 00:05:59.300 EAL: Detected lcore 2 as core 2 on socket 0 00:05:59.300 EAL: Detected lcore 3 as core 3 on socket 0 00:05:59.300 EAL: Detected lcore 4 as core 4 on socket 0 00:05:59.300 EAL: Detected lcore 5 as core 5 on socket 0 00:05:59.300 EAL: Detected lcore 6 as core 6 on socket 0 00:05:59.300 EAL: Detected lcore 7 as core 8 on socket 0 00:05:59.300 EAL: Detected lcore 8 as core 9 on socket 0 00:05:59.300 EAL: Detected lcore 9 as core 10 on socket 0 00:05:59.300 EAL: Detected lcore 10 as 
core 11 on socket 0 00:05:59.300 EAL: Detected lcore 11 as core 12 on socket 0 00:05:59.300 EAL: Detected lcore 12 as core 13 on socket 0 00:05:59.300 EAL: Detected lcore 13 as core 16 on socket 0 00:05:59.300 EAL: Detected lcore 14 as core 17 on socket 0 00:05:59.300 EAL: Detected lcore 15 as core 18 on socket 0 00:05:59.300 EAL: Detected lcore 16 as core 19 on socket 0 00:05:59.300 EAL: Detected lcore 17 as core 20 on socket 0 00:05:59.300 EAL: Detected lcore 18 as core 21 on socket 0 00:05:59.300 EAL: Detected lcore 19 as core 25 on socket 0 00:05:59.300 EAL: Detected lcore 20 as core 26 on socket 0 00:05:59.300 EAL: Detected lcore 21 as core 27 on socket 0 00:05:59.300 EAL: Detected lcore 22 as core 28 on socket 0 00:05:59.300 EAL: Detected lcore 23 as core 29 on socket 0 00:05:59.300 EAL: Detected lcore 24 as core 0 on socket 1 00:05:59.300 EAL: Detected lcore 25 as core 1 on socket 1 00:05:59.300 EAL: Detected lcore 26 as core 2 on socket 1 00:05:59.300 EAL: Detected lcore 27 as core 3 on socket 1 00:05:59.300 EAL: Detected lcore 28 as core 4 on socket 1 00:05:59.300 EAL: Detected lcore 29 as core 5 on socket 1 00:05:59.300 EAL: Detected lcore 30 as core 6 on socket 1 00:05:59.300 EAL: Detected lcore 31 as core 8 on socket 1 00:05:59.300 EAL: Detected lcore 32 as core 9 on socket 1 00:05:59.300 EAL: Detected lcore 33 as core 10 on socket 1 00:05:59.300 EAL: Detected lcore 34 as core 11 on socket 1 00:05:59.300 EAL: Detected lcore 35 as core 12 on socket 1 00:05:59.300 EAL: Detected lcore 36 as core 13 on socket 1 00:05:59.300 EAL: Detected lcore 37 as core 16 on socket 1 00:05:59.300 EAL: Detected lcore 38 as core 17 on socket 1 00:05:59.300 EAL: Detected lcore 39 as core 18 on socket 1 00:05:59.300 EAL: Detected lcore 40 as core 19 on socket 1 00:05:59.300 EAL: Detected lcore 41 as core 20 on socket 1 00:05:59.300 EAL: Detected lcore 42 as core 21 on socket 1 00:05:59.300 EAL: Detected lcore 43 as core 25 on socket 1 00:05:59.300 EAL: Detected lcore 44 as core 26 on socket 1 00:05:59.300 EAL: Detected lcore 45 as core 27 on socket 1 00:05:59.300 EAL: Detected lcore 46 as core 28 on socket 1 00:05:59.300 EAL: Detected lcore 47 as core 29 on socket 1 00:05:59.300 EAL: Detected lcore 48 as core 0 on socket 0 00:05:59.300 EAL: Detected lcore 49 as core 1 on socket 0 00:05:59.300 EAL: Detected lcore 50 as core 2 on socket 0 00:05:59.300 EAL: Detected lcore 51 as core 3 on socket 0 00:05:59.300 EAL: Detected lcore 52 as core 4 on socket 0 00:05:59.300 EAL: Detected lcore 53 as core 5 on socket 0 00:05:59.300 EAL: Detected lcore 54 as core 6 on socket 0 00:05:59.300 EAL: Detected lcore 55 as core 8 on socket 0 00:05:59.300 EAL: Detected lcore 56 as core 9 on socket 0 00:05:59.300 EAL: Detected lcore 57 as core 10 on socket 0 00:05:59.300 EAL: Detected lcore 58 as core 11 on socket 0 00:05:59.300 EAL: Detected lcore 59 as core 12 on socket 0 00:05:59.300 EAL: Detected lcore 60 as core 13 on socket 0 00:05:59.300 EAL: Detected lcore 61 as core 16 on socket 0 00:05:59.300 EAL: Detected lcore 62 as core 17 on socket 0 00:05:59.300 EAL: Detected lcore 63 as core 18 on socket 0 00:05:59.300 EAL: Detected lcore 64 as core 19 on socket 0 00:05:59.300 EAL: Detected lcore 65 as core 20 on socket 0 00:05:59.300 EAL: Detected lcore 66 as core 21 on socket 0 00:05:59.300 EAL: Detected lcore 67 as core 25 on socket 0 00:05:59.300 EAL: Detected lcore 68 as core 26 on socket 0 00:05:59.300 EAL: Detected lcore 69 as core 27 on socket 0 00:05:59.300 EAL: Detected lcore 70 as core 28 on socket 0 00:05:59.300 
EAL: Detected lcore 71 as core 29 on socket 0 00:05:59.300 EAL: Detected lcore 72 as core 0 on socket 1 00:05:59.300 EAL: Detected lcore 73 as core 1 on socket 1 00:05:59.300 EAL: Detected lcore 74 as core 2 on socket 1 00:05:59.300 EAL: Detected lcore 75 as core 3 on socket 1 00:05:59.300 EAL: Detected lcore 76 as core 4 on socket 1 00:05:59.300 EAL: Detected lcore 77 as core 5 on socket 1 00:05:59.300 EAL: Detected lcore 78 as core 6 on socket 1 00:05:59.300 EAL: Detected lcore 79 as core 8 on socket 1 00:05:59.300 EAL: Detected lcore 80 as core 9 on socket 1 00:05:59.300 EAL: Detected lcore 81 as core 10 on socket 1 00:05:59.300 EAL: Detected lcore 82 as core 11 on socket 1 00:05:59.300 EAL: Detected lcore 83 as core 12 on socket 1 00:05:59.300 EAL: Detected lcore 84 as core 13 on socket 1 00:05:59.300 EAL: Detected lcore 85 as core 16 on socket 1 00:05:59.300 EAL: Detected lcore 86 as core 17 on socket 1 00:05:59.300 EAL: Detected lcore 87 as core 18 on socket 1 00:05:59.300 EAL: Detected lcore 88 as core 19 on socket 1 00:05:59.300 EAL: Detected lcore 89 as core 20 on socket 1 00:05:59.300 EAL: Detected lcore 90 as core 21 on socket 1 00:05:59.300 EAL: Detected lcore 91 as core 25 on socket 1 00:05:59.300 EAL: Detected lcore 92 as core 26 on socket 1 00:05:59.300 EAL: Detected lcore 93 as core 27 on socket 1 00:05:59.300 EAL: Detected lcore 94 as core 28 on socket 1 00:05:59.300 EAL: Detected lcore 95 as core 29 on socket 1 00:05:59.300 EAL: Maximum logical cores by configuration: 128 00:05:59.300 EAL: Detected CPU lcores: 96 00:05:59.300 EAL: Detected NUMA nodes: 2 00:05:59.300 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:59.300 EAL: Detected shared linkage of DPDK 00:05:59.300 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:59.300 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:59.300 EAL: Registered [vdev] bus. 00:05:59.300 EAL: bus.vdev log level changed from disabled to notice 00:05:59.300 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:59.300 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:59.300 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:59.300 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:59.300 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:59.300 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:59.300 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:59.300 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:59.300 EAL: No shared files mode enabled, IPC will be disabled 00:05:59.560 EAL: No shared files mode enabled, IPC is disabled 00:05:59.560 EAL: Bus pci wants IOVA as 'DC' 00:05:59.560 EAL: Bus vdev wants IOVA as 'DC' 00:05:59.560 EAL: Buses did not request a specific IOVA mode. 00:05:59.560 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:59.560 EAL: Selected IOVA mode 'VA' 00:05:59.560 EAL: Probing VFIO support... 
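The VFIO probe and the IOVA-mode decision just below depend on kernel state that can be inspected directly. Illustrative shell checks corresponding to what EAL probes here (assuming the stock vfio modules):

    # IOMMU groups present? A non-empty listing is what lets EAL pick IOVA as VA.
    ls /sys/kernel/iommu_groups
    # vfio-pci available for device binding?
    lsmod | grep -w vfio_pci
    # no-IOMMU fallback knob (exists only once the vfio module is loaded)
    cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode 2>/dev/null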
00:05:59.560 EAL: IOMMU type 1 (Type 1) is supported 00:05:59.560 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:59.560 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:59.560 EAL: VFIO support initialized 00:05:59.560 EAL: Ask a virtual area of 0x2e000 bytes 00:05:59.560 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:59.560 EAL: Setting up physically contiguous memory... 00:05:59.560 EAL: Setting maximum number of open files to 524288 00:05:59.560 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:59.560 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:59.560 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:59.560 EAL: Ask a virtual area of 0x61000 bytes 00:05:59.560 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:59.560 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:59.560 EAL: Ask a virtual area of 0x400000000 bytes 00:05:59.560 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:59.560 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:59.560 EAL: Ask a virtual area of 0x61000 bytes 00:05:59.560 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:59.560 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:59.560 EAL: Ask a virtual area of 0x400000000 bytes 00:05:59.560 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:59.560 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:59.560 EAL: Ask a virtual area of 0x61000 bytes 00:05:59.560 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:59.560 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:59.560 EAL: Ask a virtual area of 0x400000000 bytes 00:05:59.560 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:59.560 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:59.560 EAL: Ask a virtual area of 0x61000 bytes 00:05:59.560 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:59.560 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:59.560 EAL: Ask a virtual area of 0x400000000 bytes 00:05:59.560 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:59.560 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:59.560 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:59.560 EAL: Ask a virtual area of 0x61000 bytes 00:05:59.561 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:59.561 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:59.561 EAL: Ask a virtual area of 0x400000000 bytes 00:05:59.561 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:59.561 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:59.561 EAL: Ask a virtual area of 0x61000 bytes 00:05:59.561 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:59.561 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:59.561 EAL: Ask a virtual area of 0x400000000 bytes 00:05:59.561 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:59.561 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:59.561 EAL: Ask a virtual area of 0x61000 bytes 00:05:59.561 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:59.561 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:59.561 EAL: Ask a virtual area of 0x400000000 bytes 00:05:59.561 EAL: Virtual area found at 
0x201800e00000 (size = 0x400000000) 00:05:59.561 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:59.561 EAL: Ask a virtual area of 0x61000 bytes 00:05:59.561 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:59.561 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:59.561 EAL: Ask a virtual area of 0x400000000 bytes 00:05:59.561 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:59.561 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:59.561 EAL: Hugepages will be freed exactly as allocated. 00:05:59.561 EAL: No shared files mode enabled, IPC is disabled 00:05:59.561 EAL: No shared files mode enabled, IPC is disabled 00:05:59.561 EAL: TSC frequency is ~2100000 KHz 00:05:59.561 EAL: Main lcore 0 is ready (tid=7fd19f623a00;cpuset=[0]) 00:05:59.561 EAL: Trying to obtain current memory policy. 00:05:59.561 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.561 EAL: Restoring previous memory policy: 0 00:05:59.561 EAL: request: mp_malloc_sync 00:05:59.561 EAL: No shared files mode enabled, IPC is disabled 00:05:59.561 EAL: Heap on socket 0 was expanded by 2MB 00:05:59.561 EAL: PCI device 0000:3d:00.0 on NUMA socket 0 00:05:59.561 EAL: probe driver: 8086:37d2 net_i40e 00:05:59.561 EAL: Not managed by a supported kernel driver, skipped 00:05:59.561 EAL: PCI device 0000:3d:00.1 on NUMA socket 0 00:05:59.561 EAL: probe driver: 8086:37d2 net_i40e 00:05:59.561 EAL: Not managed by a supported kernel driver, skipped 00:05:59.561 EAL: No shared files mode enabled, IPC is disabled 00:05:59.561 EAL: No shared files mode enabled, IPC is disabled 00:05:59.561 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:59.561 EAL: Mem event callback 'spdk:(nil)' registered 00:05:59.561 00:05:59.561 00:05:59.561 CUnit - A unit testing framework for C - Version 2.1-3 00:05:59.561 http://cunit.sourceforge.net/ 00:05:59.561 00:05:59.561 00:05:59.561 Suite: components_suite 00:05:59.561 Test: vtophys_malloc_test ...passed 00:05:59.561 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:59.561 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.561 EAL: Restoring previous memory policy: 4 00:05:59.561 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.561 EAL: request: mp_malloc_sync 00:05:59.561 EAL: No shared files mode enabled, IPC is disabled 00:05:59.561 EAL: Heap on socket 0 was expanded by 4MB 00:05:59.561 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.561 EAL: request: mp_malloc_sync 00:05:59.561 EAL: No shared files mode enabled, IPC is disabled 00:05:59.561 EAL: Heap on socket 0 was shrunk by 4MB 00:05:59.561 EAL: Trying to obtain current memory policy. 00:05:59.561 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.561 EAL: Restoring previous memory policy: 4 00:05:59.561 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.561 EAL: request: mp_malloc_sync 00:05:59.561 EAL: No shared files mode enabled, IPC is disabled 00:05:59.561 EAL: Heap on socket 0 was expanded by 6MB 00:05:59.561 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.561 EAL: request: mp_malloc_sync 00:05:59.561 EAL: No shared files mode enabled, IPC is disabled 00:05:59.561 EAL: Heap on socket 0 was shrunk by 6MB 00:05:59.561 EAL: Trying to obtain current memory policy. 
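Each expanded/shrunk pair in this suite is DPDK growing or releasing heap in hugepage-backed steps, with the registered 'spdk:(nil)' mem event callback fired on every change. The 2MB hugepage accounting can be watched from a second shell while the test runs (illustrative):

    # Watch hugepage usage move as the heap is expanded and shrunk.
    grep -E 'HugePages_(Total|Free)' /proc/meminfo
    cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages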
00:05:59.561 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.561 EAL: Restoring previous memory policy: 4 00:05:59.561 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.561 EAL: request: mp_malloc_sync 00:05:59.561 EAL: No shared files mode enabled, IPC is disabled 00:05:59.561 EAL: Heap on socket 0 was expanded by 10MB 00:05:59.561 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.561 EAL: request: mp_malloc_sync 00:05:59.561 EAL: No shared files mode enabled, IPC is disabled 00:05:59.561 EAL: Heap on socket 0 was shrunk by 10MB 00:05:59.561 EAL: Trying to obtain current memory policy. 00:05:59.561 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.561 EAL: Restoring previous memory policy: 4 00:05:59.561 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.561 EAL: request: mp_malloc_sync 00:05:59.561 EAL: No shared files mode enabled, IPC is disabled 00:05:59.561 EAL: Heap on socket 0 was expanded by 18MB 00:05:59.561 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.561 EAL: request: mp_malloc_sync 00:05:59.561 EAL: No shared files mode enabled, IPC is disabled 00:05:59.561 EAL: Heap on socket 0 was shrunk by 18MB 00:05:59.561 EAL: Trying to obtain current memory policy. 00:05:59.561 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.561 EAL: Restoring previous memory policy: 4 00:05:59.561 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.561 EAL: request: mp_malloc_sync 00:05:59.561 EAL: No shared files mode enabled, IPC is disabled 00:05:59.561 EAL: Heap on socket 0 was expanded by 34MB 00:05:59.561 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.561 EAL: request: mp_malloc_sync 00:05:59.561 EAL: No shared files mode enabled, IPC is disabled 00:05:59.561 EAL: Heap on socket 0 was shrunk by 34MB 00:05:59.561 EAL: Trying to obtain current memory policy. 00:05:59.561 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.561 EAL: Restoring previous memory policy: 4 00:05:59.561 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.561 EAL: request: mp_malloc_sync 00:05:59.561 EAL: No shared files mode enabled, IPC is disabled 00:05:59.561 EAL: Heap on socket 0 was expanded by 66MB 00:05:59.561 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.561 EAL: request: mp_malloc_sync 00:05:59.561 EAL: No shared files mode enabled, IPC is disabled 00:05:59.561 EAL: Heap on socket 0 was shrunk by 66MB 00:05:59.561 EAL: Trying to obtain current memory policy. 00:05:59.561 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.561 EAL: Restoring previous memory policy: 4 00:05:59.561 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.561 EAL: request: mp_malloc_sync 00:05:59.561 EAL: No shared files mode enabled, IPC is disabled 00:05:59.561 EAL: Heap on socket 0 was expanded by 130MB 00:05:59.561 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.561 EAL: request: mp_malloc_sync 00:05:59.561 EAL: No shared files mode enabled, IPC is disabled 00:05:59.561 EAL: Heap on socket 0 was shrunk by 130MB 00:05:59.561 EAL: Trying to obtain current memory policy. 
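The MPOL_PREFERRED / restore pairs bracket each allocation so the heap grows on socket 0 specifically. When running the binary by hand, a comparable preference can be expressed with numactl (an illustrative analogue, not what the test itself does internally):

    # Prefer socket-0 memory for a manual run of the vtophys unit test.
    sudo numactl --preferred=0 ./test/env/vtophys/vtophys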
00:05:59.561 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.561 EAL: Restoring previous memory policy: 4 00:05:59.561 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.561 EAL: request: mp_malloc_sync 00:05:59.561 EAL: No shared files mode enabled, IPC is disabled 00:05:59.561 EAL: Heap on socket 0 was expanded by 258MB 00:05:59.561 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.820 EAL: request: mp_malloc_sync 00:05:59.820 EAL: No shared files mode enabled, IPC is disabled 00:05:59.820 EAL: Heap on socket 0 was shrunk by 258MB 00:05:59.820 EAL: Trying to obtain current memory policy. 00:05:59.820 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.820 EAL: Restoring previous memory policy: 4 00:05:59.821 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.821 EAL: request: mp_malloc_sync 00:05:59.821 EAL: No shared files mode enabled, IPC is disabled 00:05:59.821 EAL: Heap on socket 0 was expanded by 514MB 00:05:59.821 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.080 EAL: request: mp_malloc_sync 00:06:00.080 EAL: No shared files mode enabled, IPC is disabled 00:06:00.080 EAL: Heap on socket 0 was shrunk by 514MB 00:06:00.080 EAL: Trying to obtain current memory policy. 00:06:00.080 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:00.080 EAL: Restoring previous memory policy: 4 00:06:00.080 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.080 EAL: request: mp_malloc_sync 00:06:00.080 EAL: No shared files mode enabled, IPC is disabled 00:06:00.080 EAL: Heap on socket 0 was expanded by 1026MB 00:06:00.339 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.608 EAL: request: mp_malloc_sync 00:06:00.608 EAL: No shared files mode enabled, IPC is disabled 00:06:00.608 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:00.608 passed 00:06:00.608 00:06:00.608 Run Summary: Type Total Ran Passed Failed Inactive 00:06:00.608 suites 1 1 n/a 0 0 00:06:00.608 tests 2 2 2 0 0 00:06:00.608 asserts 497 497 497 0 n/a 00:06:00.608 00:06:00.608 Elapsed time = 0.970 seconds 00:06:00.608 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.608 EAL: request: mp_malloc_sync 00:06:00.608 EAL: No shared files mode enabled, IPC is disabled 00:06:00.608 EAL: Heap on socket 0 was shrunk by 2MB 00:06:00.608 EAL: No shared files mode enabled, IPC is disabled 00:06:00.608 EAL: No shared files mode enabled, IPC is disabled 00:06:00.608 EAL: No shared files mode enabled, IPC is disabled 00:06:00.608 00:06:00.608 real 0m1.098s 00:06:00.608 user 0m0.635s 00:06:00.608 sys 0m0.432s 00:06:00.608 12:27:26 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:00.608 12:27:26 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:00.608 ************************************ 00:06:00.608 END TEST env_vtophys 00:06:00.608 ************************************ 00:06:00.608 12:27:26 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:00.608 12:27:26 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:00.608 12:27:26 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.608 12:27:26 env -- common/autotest_common.sh@10 -- # set +x 00:06:00.608 ************************************ 00:06:00.608 START TEST env_pci 00:06:00.608 ************************************ 00:06:00.608 12:27:26 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:00.608 00:06:00.608 00:06:00.608 CUnit - A unit testing 
framework for C - Version 2.1-3 00:06:00.608 http://cunit.sourceforge.net/ 00:06:00.608 00:06:00.608 00:06:00.608 Suite: pci 00:06:00.608 Test: pci_hook ...[2024-12-16 12:27:26.514438] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 151288 has claimed it 00:06:00.608 EAL: Cannot find device (10000:00:01.0) 00:06:00.608 EAL: Failed to attach device on primary process 00:06:00.608 passed 00:06:00.608 00:06:00.608 Run Summary: Type Total Ran Passed Failed Inactive 00:06:00.608 suites 1 1 n/a 0 0 00:06:00.608 tests 1 1 1 0 0 00:06:00.608 asserts 25 25 25 0 n/a 00:06:00.608 00:06:00.608 Elapsed time = 0.026 seconds 00:06:00.608 00:06:00.608 real 0m0.045s 00:06:00.608 user 0m0.014s 00:06:00.608 sys 0m0.031s 00:06:00.608 12:27:26 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:00.608 12:27:26 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:00.608 ************************************ 00:06:00.608 END TEST env_pci 00:06:00.608 ************************************ 00:06:00.608 12:27:26 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:00.608 12:27:26 env -- env/env.sh@15 -- # uname 00:06:00.608 12:27:26 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:00.608 12:27:26 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:00.608 12:27:26 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:00.608 12:27:26 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:00.608 12:27:26 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.608 12:27:26 env -- common/autotest_common.sh@10 -- # set +x 00:06:00.608 ************************************ 00:06:00.608 START TEST env_dpdk_post_init 00:06:00.608 ************************************ 00:06:00.608 12:27:26 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:00.608 EAL: Detected CPU lcores: 96 00:06:00.608 EAL: Detected NUMA nodes: 2 00:06:00.608 EAL: Detected shared linkage of DPDK 00:06:00.608 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:00.608 EAL: Selected IOVA mode 'VA' 00:06:00.608 EAL: VFIO support initialized 00:06:00.608 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:00.868 EAL: Using IOMMU type 1 (Type 1) 00:06:00.868 EAL: Ignore mapping IO port bar(1) 00:06:00.868 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:06:00.868 EAL: Ignore mapping IO port bar(1) 00:06:00.868 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:06:00.868 EAL: Ignore mapping IO port bar(1) 00:06:00.868 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:06:00.868 EAL: Ignore mapping IO port bar(1) 00:06:00.868 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:06:00.868 EAL: Ignore mapping IO port bar(1) 00:06:00.868 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:06:00.868 EAL: Ignore mapping IO port bar(1) 00:06:00.868 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:06:00.868 EAL: Ignore mapping IO port bar(1) 00:06:00.868 EAL: Probe PCI driver: spdk_ioat 
(8086:2021) device: 0000:00:04.6 (socket 0) 00:06:00.868 EAL: Ignore mapping IO port bar(1) 00:06:00.868 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:06:01.806 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:06:01.806 EAL: Ignore mapping IO port bar(1) 00:06:01.806 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:06:01.806 EAL: Ignore mapping IO port bar(1) 00:06:01.806 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:06:01.806 EAL: Ignore mapping IO port bar(1) 00:06:01.806 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:06:01.806 EAL: Ignore mapping IO port bar(1) 00:06:01.806 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:06:01.806 EAL: Ignore mapping IO port bar(1) 00:06:01.806 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:06:01.806 EAL: Ignore mapping IO port bar(1) 00:06:01.806 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:06:01.806 EAL: Ignore mapping IO port bar(1) 00:06:01.806 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:06:01.806 EAL: Ignore mapping IO port bar(1) 00:06:01.806 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:06:05.098 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:06:05.098 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:06:05.098 Starting DPDK initialization... 00:06:05.098 Starting SPDK post initialization... 00:06:05.098 SPDK NVMe probe 00:06:05.098 Attaching to 0000:5e:00.0 00:06:05.098 Attached to 0000:5e:00.0 00:06:05.098 Cleaning up... 00:06:05.098 00:06:05.098 real 0m4.355s 00:06:05.098 user 0m3.258s 00:06:05.098 sys 0m0.174s 00:06:05.098 12:27:30 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.098 12:27:30 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:05.098 ************************************ 00:06:05.098 END TEST env_dpdk_post_init 00:06:05.098 ************************************ 00:06:05.098 12:27:31 env -- env/env.sh@26 -- # uname 00:06:05.098 12:27:31 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:05.098 12:27:31 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:05.098 12:27:31 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.098 12:27:31 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.098 12:27:31 env -- common/autotest_common.sh@10 -- # set +x 00:06:05.098 ************************************ 00:06:05.098 START TEST env_mem_callbacks 00:06:05.098 ************************************ 00:06:05.098 12:27:31 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:05.098 EAL: Detected CPU lcores: 96 00:06:05.098 EAL: Detected NUMA nodes: 2 00:06:05.098 EAL: Detected shared linkage of DPDK 00:06:05.098 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:05.098 EAL: Selected IOVA mode 'VA' 00:06:05.098 EAL: VFIO support initialized 00:06:05.098 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:05.098 00:06:05.098 00:06:05.098 CUnit - A unit testing framework for C - Version 2.1-3 00:06:05.098 http://cunit.sourceforge.net/ 00:06:05.098 00:06:05.098 00:06:05.098 Suite: memory 
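The register/unregister lines that follow are the suite's notify hook firing as buffers are registered with and released from SPDK's memory map. Like the other env suites, mem_callbacks is a standalone CUnit binary, so a failing case can be re-run in isolation from an SPDK build tree (illustrative invocation; root is needed for hugepage access):

    sudo ./test/env/mem_callbacks/mem_callbacks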
00:06:05.098 Test: test ... 00:06:05.098 register 0x200000200000 2097152 00:06:05.098 malloc 3145728 00:06:05.098 register 0x200000400000 4194304 00:06:05.098 buf 0x200000500000 len 3145728 PASSED 00:06:05.098 malloc 64 00:06:05.098 buf 0x2000004fff40 len 64 PASSED 00:06:05.098 malloc 4194304 00:06:05.098 register 0x200000800000 6291456 00:06:05.098 buf 0x200000a00000 len 4194304 PASSED 00:06:05.098 free 0x200000500000 3145728 00:06:05.098 free 0x2000004fff40 64 00:06:05.098 unregister 0x200000400000 4194304 PASSED 00:06:05.098 free 0x200000a00000 4194304 00:06:05.098 unregister 0x200000800000 6291456 PASSED 00:06:05.098 malloc 8388608 00:06:05.098 register 0x200000400000 10485760 00:06:05.098 buf 0x200000600000 len 8388608 PASSED 00:06:05.098 free 0x200000600000 8388608 00:06:05.098 unregister 0x200000400000 10485760 PASSED 00:06:05.098 passed 00:06:05.098 00:06:05.098 Run Summary: Type Total Ran Passed Failed Inactive 00:06:05.098 suites 1 1 n/a 0 0 00:06:05.098 tests 1 1 1 0 0 00:06:05.098 asserts 15 15 15 0 n/a 00:06:05.098 00:06:05.098 Elapsed time = 0.007 seconds 00:06:05.098 00:06:05.098 real 0m0.056s 00:06:05.098 user 0m0.014s 00:06:05.098 sys 0m0.042s 00:06:05.098 12:27:31 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.098 12:27:31 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:05.098 ************************************ 00:06:05.098 END TEST env_mem_callbacks 00:06:05.098 ************************************ 00:06:05.098 00:06:05.098 real 0m6.232s 00:06:05.098 user 0m4.303s 00:06:05.098 sys 0m1.010s 00:06:05.098 12:27:31 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.098 12:27:31 env -- common/autotest_common.sh@10 -- # set +x 00:06:05.098 ************************************ 00:06:05.098 END TEST env 00:06:05.098 ************************************ 00:06:05.358 12:27:31 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:05.358 12:27:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.358 12:27:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.358 12:27:31 -- common/autotest_common.sh@10 -- # set +x 00:06:05.358 ************************************ 00:06:05.358 START TEST rpc 00:06:05.358 ************************************ 00:06:05.358 12:27:31 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:05.358 * Looking for test storage... 
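The "Looking for test storage" banner and the scripts/common.sh trace that follows are per-suite harness boilerplate: cmp_versions splits the installed lcov version on dots/dashes/colons and field-compares it against 2, exporting the --rc lcov_branch_coverage/lcov_function_coverage flags only for pre-2.0 lcov. A simplified bash sketch of that dotted-version compare (helper name mirrors the harness's lt; separators reduced to dots for brevity):

  # lt A B: succeed when dotted version A sorts before B, numeric field by field
  lt() {
      local IFS=.
      local -a a=($1) b=($2)
      local i x y
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          x=${a[i]:-0} y=${b[i]:-0}
          ((x < y)) && return 0
          ((x > y)) && return 1
      done
      return 1    # equal is not less-than
  }
  lt 1.15 2 && echo 'lcov < 2: enable branch/function coverage flags'
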
00:06:05.358 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:05.358 12:27:31 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:05.358 12:27:31 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:05.358 12:27:31 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:05.358 12:27:31 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:05.358 12:27:31 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.358 12:27:31 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.358 12:27:31 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.358 12:27:31 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.358 12:27:31 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.358 12:27:31 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.358 12:27:31 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.358 12:27:31 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.358 12:27:31 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.358 12:27:31 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.358 12:27:31 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.358 12:27:31 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:05.358 12:27:31 rpc -- scripts/common.sh@345 -- # : 1 00:06:05.358 12:27:31 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.358 12:27:31 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:05.358 12:27:31 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:05.358 12:27:31 rpc -- scripts/common.sh@353 -- # local d=1 00:06:05.358 12:27:31 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.358 12:27:31 rpc -- scripts/common.sh@355 -- # echo 1 00:06:05.358 12:27:31 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.358 12:27:31 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:05.358 12:27:31 rpc -- scripts/common.sh@353 -- # local d=2 00:06:05.358 12:27:31 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.358 12:27:31 rpc -- scripts/common.sh@355 -- # echo 2 00:06:05.358 12:27:31 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.358 12:27:31 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.358 12:27:31 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.358 12:27:31 rpc -- scripts/common.sh@368 -- # return 0 00:06:05.358 12:27:31 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.358 12:27:31 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:05.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.358 --rc genhtml_branch_coverage=1 00:06:05.358 --rc genhtml_function_coverage=1 00:06:05.358 --rc genhtml_legend=1 00:06:05.358 --rc geninfo_all_blocks=1 00:06:05.358 --rc geninfo_unexecuted_blocks=1 00:06:05.358 00:06:05.358 ' 00:06:05.358 12:27:31 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:05.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.358 --rc genhtml_branch_coverage=1 00:06:05.358 --rc genhtml_function_coverage=1 00:06:05.358 --rc genhtml_legend=1 00:06:05.358 --rc geninfo_all_blocks=1 00:06:05.358 --rc geninfo_unexecuted_blocks=1 00:06:05.358 00:06:05.358 ' 00:06:05.358 12:27:31 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:05.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.358 --rc genhtml_branch_coverage=1 00:06:05.358 --rc genhtml_function_coverage=1 
00:06:05.358 --rc genhtml_legend=1 00:06:05.358 --rc geninfo_all_blocks=1 00:06:05.358 --rc geninfo_unexecuted_blocks=1 00:06:05.358 00:06:05.358 ' 00:06:05.358 12:27:31 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:05.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.358 --rc genhtml_branch_coverage=1 00:06:05.358 --rc genhtml_function_coverage=1 00:06:05.358 --rc genhtml_legend=1 00:06:05.358 --rc geninfo_all_blocks=1 00:06:05.358 --rc geninfo_unexecuted_blocks=1 00:06:05.358 00:06:05.358 ' 00:06:05.358 12:27:31 rpc -- rpc/rpc.sh@65 -- # spdk_pid=152310 00:06:05.358 12:27:31 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:05.358 12:27:31 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:05.358 12:27:31 rpc -- rpc/rpc.sh@67 -- # waitforlisten 152310 00:06:05.358 12:27:31 rpc -- common/autotest_common.sh@831 -- # '[' -z 152310 ']' 00:06:05.358 12:27:31 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.358 12:27:31 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:05.358 12:27:31 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.358 12:27:31 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:05.358 12:27:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.619 [2024-12-16 12:27:31.437323] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:05.619 [2024-12-16 12:27:31.437366] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152310 ] 00:06:05.619 [2024-12-16 12:27:31.507274] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.619 [2024-12-16 12:27:31.547054] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:05.619 [2024-12-16 12:27:31.547093] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 152310' to capture a snapshot of events at runtime. 00:06:05.619 [2024-12-16 12:27:31.547100] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:05.619 [2024-12-16 12:27:31.547106] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:05.619 [2024-12-16 12:27:31.547111] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid152310 for offline analysis/debug. 
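At this point spdk_tgt (pid 152310) has been launched with -e bdev, enabling the bdev tracepoint group, and the harness blocks in waitforlisten until the UNIX-domain RPC socket answers. A sketch of that polling pattern, assuming rpc.py from the same tree; the retry count and interval are illustrative, not the harness's exact values:

  # poll the RPC socket until the target answers spdk_get_version,
  # bailing out early if the target process dies
  waitforlisten_sketch() {
      local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
      for ((i = 0; i < 100; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1
          scripts/rpc.py -s "$sock" spdk_get_version &>/dev/null && return 0
          sleep 0.1
      done
      return 1
  }
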
00:06:05.619 [2024-12-16 12:27:31.547194] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.880 12:27:31 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:05.880 12:27:31 rpc -- common/autotest_common.sh@864 -- # return 0 00:06:05.880 12:27:31 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:05.880 12:27:31 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:05.880 12:27:31 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:05.880 12:27:31 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:05.880 12:27:31 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.880 12:27:31 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.880 12:27:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.880 ************************************ 00:06:05.880 START TEST rpc_integrity 00:06:05.880 ************************************ 00:06:05.880 12:27:31 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:05.880 12:27:31 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:05.880 12:27:31 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.880 12:27:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.880 12:27:31 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.880 12:27:31 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:05.880 12:27:31 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:05.880 12:27:31 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:05.880 12:27:31 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:05.880 12:27:31 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.880 12:27:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.880 12:27:31 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.880 12:27:31 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:05.880 12:27:31 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:05.880 12:27:31 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.880 12:27:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.880 12:27:31 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.880 12:27:31 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:05.880 { 00:06:05.880 "name": "Malloc0", 00:06:05.880 "aliases": [ 00:06:05.880 "d67e6725-c2d5-47b5-a747-9ec75a3f2ca1" 00:06:05.880 ], 00:06:05.880 "product_name": "Malloc disk", 00:06:05.880 "block_size": 512, 00:06:05.880 "num_blocks": 16384, 00:06:05.880 "uuid": "d67e6725-c2d5-47b5-a747-9ec75a3f2ca1", 00:06:05.880 "assigned_rate_limits": { 00:06:05.880 "rw_ios_per_sec": 0, 00:06:05.880 "rw_mbytes_per_sec": 0, 00:06:05.880 "r_mbytes_per_sec": 0, 00:06:05.880 "w_mbytes_per_sec": 0 00:06:05.880 }, 
00:06:05.880 "claimed": false, 00:06:05.880 "zoned": false, 00:06:05.880 "supported_io_types": { 00:06:05.880 "read": true, 00:06:05.880 "write": true, 00:06:05.880 "unmap": true, 00:06:05.880 "flush": true, 00:06:05.880 "reset": true, 00:06:05.880 "nvme_admin": false, 00:06:05.880 "nvme_io": false, 00:06:05.880 "nvme_io_md": false, 00:06:05.880 "write_zeroes": true, 00:06:05.880 "zcopy": true, 00:06:05.880 "get_zone_info": false, 00:06:05.880 "zone_management": false, 00:06:05.880 "zone_append": false, 00:06:05.880 "compare": false, 00:06:05.880 "compare_and_write": false, 00:06:05.880 "abort": true, 00:06:05.880 "seek_hole": false, 00:06:05.880 "seek_data": false, 00:06:05.880 "copy": true, 00:06:05.880 "nvme_iov_md": false 00:06:05.880 }, 00:06:05.880 "memory_domains": [ 00:06:05.880 { 00:06:05.880 "dma_device_id": "system", 00:06:05.880 "dma_device_type": 1 00:06:05.880 }, 00:06:05.880 { 00:06:05.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:05.880 "dma_device_type": 2 00:06:05.880 } 00:06:05.880 ], 00:06:05.880 "driver_specific": {} 00:06:05.880 } 00:06:05.880 ]' 00:06:05.880 12:27:31 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:05.880 12:27:31 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:05.880 12:27:31 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:05.880 12:27:31 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.880 12:27:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.880 [2024-12-16 12:27:31.907315] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:05.880 [2024-12-16 12:27:31.907342] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:05.880 [2024-12-16 12:27:31.907354] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xfe9070 00:06:05.880 [2024-12-16 12:27:31.907361] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:05.880 [2024-12-16 12:27:31.908396] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:05.880 [2024-12-16 12:27:31.908416] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:05.880 Passthru0 00:06:05.880 12:27:31 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.880 12:27:31 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:05.880 12:27:31 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.880 12:27:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.880 12:27:31 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.880 12:27:31 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:05.880 { 00:06:05.880 "name": "Malloc0", 00:06:05.880 "aliases": [ 00:06:05.880 "d67e6725-c2d5-47b5-a747-9ec75a3f2ca1" 00:06:05.880 ], 00:06:05.880 "product_name": "Malloc disk", 00:06:05.880 "block_size": 512, 00:06:05.880 "num_blocks": 16384, 00:06:05.880 "uuid": "d67e6725-c2d5-47b5-a747-9ec75a3f2ca1", 00:06:05.880 "assigned_rate_limits": { 00:06:05.880 "rw_ios_per_sec": 0, 00:06:05.880 "rw_mbytes_per_sec": 0, 00:06:05.880 "r_mbytes_per_sec": 0, 00:06:05.880 "w_mbytes_per_sec": 0 00:06:05.880 }, 00:06:05.880 "claimed": true, 00:06:05.880 "claim_type": "exclusive_write", 00:06:05.880 "zoned": false, 00:06:05.880 "supported_io_types": { 00:06:05.880 "read": true, 00:06:05.880 "write": true, 00:06:05.880 "unmap": true, 00:06:05.880 "flush": 
true, 00:06:05.880 "reset": true, 00:06:05.880 "nvme_admin": false, 00:06:05.880 "nvme_io": false, 00:06:05.880 "nvme_io_md": false, 00:06:05.880 "write_zeroes": true, 00:06:05.880 "zcopy": true, 00:06:05.880 "get_zone_info": false, 00:06:05.880 "zone_management": false, 00:06:05.880 "zone_append": false, 00:06:05.880 "compare": false, 00:06:05.880 "compare_and_write": false, 00:06:05.880 "abort": true, 00:06:05.880 "seek_hole": false, 00:06:05.880 "seek_data": false, 00:06:05.880 "copy": true, 00:06:05.880 "nvme_iov_md": false 00:06:05.880 }, 00:06:05.880 "memory_domains": [ 00:06:05.880 { 00:06:05.880 "dma_device_id": "system", 00:06:05.880 "dma_device_type": 1 00:06:05.880 }, 00:06:05.880 { 00:06:05.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:05.880 "dma_device_type": 2 00:06:05.880 } 00:06:05.880 ], 00:06:05.880 "driver_specific": {} 00:06:05.880 }, 00:06:05.880 { 00:06:05.880 "name": "Passthru0", 00:06:05.880 "aliases": [ 00:06:05.880 "7038b806-b64b-5ad2-bb2d-3997a1589c98" 00:06:05.880 ], 00:06:05.880 "product_name": "passthru", 00:06:05.880 "block_size": 512, 00:06:05.880 "num_blocks": 16384, 00:06:05.880 "uuid": "7038b806-b64b-5ad2-bb2d-3997a1589c98", 00:06:05.880 "assigned_rate_limits": { 00:06:05.880 "rw_ios_per_sec": 0, 00:06:05.880 "rw_mbytes_per_sec": 0, 00:06:05.880 "r_mbytes_per_sec": 0, 00:06:05.880 "w_mbytes_per_sec": 0 00:06:05.880 }, 00:06:05.880 "claimed": false, 00:06:05.880 "zoned": false, 00:06:05.880 "supported_io_types": { 00:06:05.880 "read": true, 00:06:05.880 "write": true, 00:06:05.880 "unmap": true, 00:06:05.880 "flush": true, 00:06:05.880 "reset": true, 00:06:05.880 "nvme_admin": false, 00:06:05.880 "nvme_io": false, 00:06:05.880 "nvme_io_md": false, 00:06:05.880 "write_zeroes": true, 00:06:05.880 "zcopy": true, 00:06:05.880 "get_zone_info": false, 00:06:05.880 "zone_management": false, 00:06:05.880 "zone_append": false, 00:06:05.880 "compare": false, 00:06:05.880 "compare_and_write": false, 00:06:05.880 "abort": true, 00:06:05.880 "seek_hole": false, 00:06:05.880 "seek_data": false, 00:06:05.880 "copy": true, 00:06:05.880 "nvme_iov_md": false 00:06:05.880 }, 00:06:05.880 "memory_domains": [ 00:06:05.880 { 00:06:05.880 "dma_device_id": "system", 00:06:05.880 "dma_device_type": 1 00:06:05.880 }, 00:06:05.880 { 00:06:05.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:05.881 "dma_device_type": 2 00:06:05.881 } 00:06:05.881 ], 00:06:05.881 "driver_specific": { 00:06:05.881 "passthru": { 00:06:05.881 "name": "Passthru0", 00:06:05.881 "base_bdev_name": "Malloc0" 00:06:05.881 } 00:06:05.881 } 00:06:05.881 } 00:06:05.881 ]' 00:06:05.881 12:27:31 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:06.140 12:27:31 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:06.140 12:27:31 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:06.140 12:27:31 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.140 12:27:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.140 12:27:31 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.140 12:27:31 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:06.140 12:27:31 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.140 12:27:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.140 12:27:31 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.140 12:27:31 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:06:06.140 12:27:31 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.140 12:27:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.140 12:27:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.140 12:27:32 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:06.140 12:27:32 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:06.140 12:27:32 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:06.140 00:06:06.140 real 0m0.274s 00:06:06.140 user 0m0.172s 00:06:06.140 sys 0m0.037s 00:06:06.140 12:27:32 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.140 12:27:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.140 ************************************ 00:06:06.140 END TEST rpc_integrity 00:06:06.140 ************************************ 00:06:06.140 12:27:32 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:06.140 12:27:32 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:06.140 12:27:32 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.140 12:27:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.140 ************************************ 00:06:06.140 START TEST rpc_plugins 00:06:06.140 ************************************ 00:06:06.140 12:27:32 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:06:06.140 12:27:32 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:06.140 12:27:32 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.140 12:27:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:06.140 12:27:32 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.140 12:27:32 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:06.140 12:27:32 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:06.140 12:27:32 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.140 12:27:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:06.141 12:27:32 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.141 12:27:32 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:06.141 { 00:06:06.141 "name": "Malloc1", 00:06:06.141 "aliases": [ 00:06:06.141 "1129fc13-0e06-466e-ba5d-4207f3c94c70" 00:06:06.141 ], 00:06:06.141 "product_name": "Malloc disk", 00:06:06.141 "block_size": 4096, 00:06:06.141 "num_blocks": 256, 00:06:06.141 "uuid": "1129fc13-0e06-466e-ba5d-4207f3c94c70", 00:06:06.141 "assigned_rate_limits": { 00:06:06.141 "rw_ios_per_sec": 0, 00:06:06.141 "rw_mbytes_per_sec": 0, 00:06:06.141 "r_mbytes_per_sec": 0, 00:06:06.141 "w_mbytes_per_sec": 0 00:06:06.141 }, 00:06:06.141 "claimed": false, 00:06:06.141 "zoned": false, 00:06:06.141 "supported_io_types": { 00:06:06.141 "read": true, 00:06:06.141 "write": true, 00:06:06.141 "unmap": true, 00:06:06.141 "flush": true, 00:06:06.141 "reset": true, 00:06:06.141 "nvme_admin": false, 00:06:06.141 "nvme_io": false, 00:06:06.141 "nvme_io_md": false, 00:06:06.141 "write_zeroes": true, 00:06:06.141 "zcopy": true, 00:06:06.141 "get_zone_info": false, 00:06:06.141 "zone_management": false, 00:06:06.141 "zone_append": false, 00:06:06.141 "compare": false, 00:06:06.141 "compare_and_write": false, 00:06:06.141 "abort": true, 00:06:06.141 "seek_hole": false, 00:06:06.141 "seek_data": false, 00:06:06.141 "copy": true, 00:06:06.141 "nvme_iov_md": false 
00:06:06.141 }, 00:06:06.141 "memory_domains": [ 00:06:06.141 { 00:06:06.141 "dma_device_id": "system", 00:06:06.141 "dma_device_type": 1 00:06:06.141 }, 00:06:06.141 { 00:06:06.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:06.141 "dma_device_type": 2 00:06:06.141 } 00:06:06.141 ], 00:06:06.141 "driver_specific": {} 00:06:06.141 } 00:06:06.141 ]' 00:06:06.141 12:27:32 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:06.141 12:27:32 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:06.141 12:27:32 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:06.141 12:27:32 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.141 12:27:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:06.141 12:27:32 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.400 12:27:32 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:06.400 12:27:32 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.400 12:27:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:06.400 12:27:32 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.400 12:27:32 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:06.400 12:27:32 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:06.400 12:27:32 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:06.400 00:06:06.400 real 0m0.146s 00:06:06.400 user 0m0.087s 00:06:06.400 sys 0m0.024s 00:06:06.400 12:27:32 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.400 12:27:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:06.400 ************************************ 00:06:06.400 END TEST rpc_plugins 00:06:06.400 ************************************ 00:06:06.400 12:27:32 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:06.400 12:27:32 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:06.400 12:27:32 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.400 12:27:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.400 ************************************ 00:06:06.400 START TEST rpc_trace_cmd_test 00:06:06.400 ************************************ 00:06:06.400 12:27:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:06:06.400 12:27:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:06.400 12:27:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:06.400 12:27:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.400 12:27:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:06.400 12:27:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.400 12:27:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:06.400 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid152310", 00:06:06.400 "tpoint_group_mask": "0x8", 00:06:06.400 "iscsi_conn": { 00:06:06.400 "mask": "0x2", 00:06:06.400 "tpoint_mask": "0x0" 00:06:06.400 }, 00:06:06.400 "scsi": { 00:06:06.400 "mask": "0x4", 00:06:06.400 "tpoint_mask": "0x0" 00:06:06.400 }, 00:06:06.400 "bdev": { 00:06:06.400 "mask": "0x8", 00:06:06.400 "tpoint_mask": "0xffffffffffffffff" 00:06:06.400 }, 00:06:06.400 "nvmf_rdma": { 00:06:06.400 "mask": "0x10", 00:06:06.400 "tpoint_mask": "0x0" 00:06:06.400 }, 00:06:06.400 "nvmf_tcp": { 00:06:06.400 "mask": "0x20", 00:06:06.400 
"tpoint_mask": "0x0" 00:06:06.400 }, 00:06:06.400 "ftl": { 00:06:06.400 "mask": "0x40", 00:06:06.400 "tpoint_mask": "0x0" 00:06:06.400 }, 00:06:06.400 "blobfs": { 00:06:06.400 "mask": "0x80", 00:06:06.400 "tpoint_mask": "0x0" 00:06:06.400 }, 00:06:06.400 "dsa": { 00:06:06.400 "mask": "0x200", 00:06:06.401 "tpoint_mask": "0x0" 00:06:06.401 }, 00:06:06.401 "thread": { 00:06:06.401 "mask": "0x400", 00:06:06.401 "tpoint_mask": "0x0" 00:06:06.401 }, 00:06:06.401 "nvme_pcie": { 00:06:06.401 "mask": "0x800", 00:06:06.401 "tpoint_mask": "0x0" 00:06:06.401 }, 00:06:06.401 "iaa": { 00:06:06.401 "mask": "0x1000", 00:06:06.401 "tpoint_mask": "0x0" 00:06:06.401 }, 00:06:06.401 "nvme_tcp": { 00:06:06.401 "mask": "0x2000", 00:06:06.401 "tpoint_mask": "0x0" 00:06:06.401 }, 00:06:06.401 "bdev_nvme": { 00:06:06.401 "mask": "0x4000", 00:06:06.401 "tpoint_mask": "0x0" 00:06:06.401 }, 00:06:06.401 "sock": { 00:06:06.401 "mask": "0x8000", 00:06:06.401 "tpoint_mask": "0x0" 00:06:06.401 }, 00:06:06.401 "blob": { 00:06:06.401 "mask": "0x10000", 00:06:06.401 "tpoint_mask": "0x0" 00:06:06.401 }, 00:06:06.401 "bdev_raid": { 00:06:06.401 "mask": "0x20000", 00:06:06.401 "tpoint_mask": "0x0" 00:06:06.401 } 00:06:06.401 }' 00:06:06.401 12:27:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:06.401 12:27:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:06:06.401 12:27:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:06.401 12:27:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:06.401 12:27:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:06.660 12:27:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:06.660 12:27:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:06.660 12:27:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:06.660 12:27:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:06.660 12:27:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:06.660 00:06:06.660 real 0m0.218s 00:06:06.660 user 0m0.185s 00:06:06.660 sys 0m0.024s 00:06:06.660 12:27:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.660 12:27:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:06.660 ************************************ 00:06:06.660 END TEST rpc_trace_cmd_test 00:06:06.660 ************************************ 00:06:06.660 12:27:32 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:06.660 12:27:32 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:06.660 12:27:32 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:06.660 12:27:32 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:06.660 12:27:32 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.660 12:27:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.660 ************************************ 00:06:06.660 START TEST rpc_daemon_integrity 00:06:06.660 ************************************ 00:06:06.660 12:27:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:06.660 12:27:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:06.660 12:27:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.660 12:27:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.660 12:27:32 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.660 12:27:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:06.660 12:27:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:06.660 12:27:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:06.660 12:27:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:06.660 12:27:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.660 12:27:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.660 12:27:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.660 12:27:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:06.660 12:27:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:06.660 12:27:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.660 12:27:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.660 12:27:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.660 12:27:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:06.660 { 00:06:06.660 "name": "Malloc2", 00:06:06.660 "aliases": [ 00:06:06.660 "33c452b1-d3cf-4e67-81a2-b3227456e99d" 00:06:06.660 ], 00:06:06.660 "product_name": "Malloc disk", 00:06:06.660 "block_size": 512, 00:06:06.660 "num_blocks": 16384, 00:06:06.660 "uuid": "33c452b1-d3cf-4e67-81a2-b3227456e99d", 00:06:06.660 "assigned_rate_limits": { 00:06:06.660 "rw_ios_per_sec": 0, 00:06:06.660 "rw_mbytes_per_sec": 0, 00:06:06.660 "r_mbytes_per_sec": 0, 00:06:06.660 "w_mbytes_per_sec": 0 00:06:06.660 }, 00:06:06.660 "claimed": false, 00:06:06.660 "zoned": false, 00:06:06.660 "supported_io_types": { 00:06:06.660 "read": true, 00:06:06.660 "write": true, 00:06:06.660 "unmap": true, 00:06:06.660 "flush": true, 00:06:06.660 "reset": true, 00:06:06.661 "nvme_admin": false, 00:06:06.661 "nvme_io": false, 00:06:06.661 "nvme_io_md": false, 00:06:06.661 "write_zeroes": true, 00:06:06.661 "zcopy": true, 00:06:06.661 "get_zone_info": false, 00:06:06.661 "zone_management": false, 00:06:06.661 "zone_append": false, 00:06:06.661 "compare": false, 00:06:06.661 "compare_and_write": false, 00:06:06.661 "abort": true, 00:06:06.661 "seek_hole": false, 00:06:06.661 "seek_data": false, 00:06:06.661 "copy": true, 00:06:06.661 "nvme_iov_md": false 00:06:06.661 }, 00:06:06.661 "memory_domains": [ 00:06:06.661 { 00:06:06.661 "dma_device_id": "system", 00:06:06.661 "dma_device_type": 1 00:06:06.661 }, 00:06:06.661 { 00:06:06.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:06.661 "dma_device_type": 2 00:06:06.661 } 00:06:06.661 ], 00:06:06.661 "driver_specific": {} 00:06:06.661 } 00:06:06.661 ]' 00:06:06.661 12:27:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:06.921 12:27:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:06.921 12:27:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:06.921 12:27:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.921 12:27:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.921 [2024-12-16 12:27:32.749617] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:06.921 [2024-12-16 12:27:32.749642] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:06.921 
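The vbdev_passthru notices above show the module matching its configured base bdev (Malloc2), opening and claiming it, then registering Passthru0 on top; that claim is what flips Malloc2's "claimed" field to true with claim_type "exclusive_write" in the JSON dump that follows. Driven as plain RPCs against the same socket, the sequence rpc_daemon_integrity performs is (commands as they appear in this log; the explicit -b name is added for illustration):

  scripts/rpc.py bdev_malloc_create -b Malloc2 8 512        # 8 MB / 512 B blocks = 16384 blocks
  scripts/rpc.py bdev_passthru_create -b Malloc2 -p Passthru0
  scripts/rpc.py bdev_get_bdevs | jq length                 # 2: claimed base + passthru
  scripts/rpc.py bdev_passthru_delete Passthru0
  scripts/rpc.py bdev_malloc_delete Malloc2
  scripts/rpc.py bdev_get_bdevs | jq length                 # back to 0
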
[2024-12-16 12:27:32.749657] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xfe8a50 00:06:06.921 [2024-12-16 12:27:32.749664] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:06.921 [2024-12-16 12:27:32.750595] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:06.921 [2024-12-16 12:27:32.750614] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:06.921 Passthru0 00:06:06.921 12:27:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.921 12:27:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:06.921 12:27:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.921 12:27:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.921 12:27:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.921 12:27:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:06.921 { 00:06:06.921 "name": "Malloc2", 00:06:06.921 "aliases": [ 00:06:06.921 "33c452b1-d3cf-4e67-81a2-b3227456e99d" 00:06:06.921 ], 00:06:06.921 "product_name": "Malloc disk", 00:06:06.921 "block_size": 512, 00:06:06.921 "num_blocks": 16384, 00:06:06.921 "uuid": "33c452b1-d3cf-4e67-81a2-b3227456e99d", 00:06:06.921 "assigned_rate_limits": { 00:06:06.921 "rw_ios_per_sec": 0, 00:06:06.921 "rw_mbytes_per_sec": 0, 00:06:06.921 "r_mbytes_per_sec": 0, 00:06:06.921 "w_mbytes_per_sec": 0 00:06:06.921 }, 00:06:06.921 "claimed": true, 00:06:06.921 "claim_type": "exclusive_write", 00:06:06.921 "zoned": false, 00:06:06.921 "supported_io_types": { 00:06:06.921 "read": true, 00:06:06.921 "write": true, 00:06:06.921 "unmap": true, 00:06:06.921 "flush": true, 00:06:06.921 "reset": true, 00:06:06.921 "nvme_admin": false, 00:06:06.921 "nvme_io": false, 00:06:06.921 "nvme_io_md": false, 00:06:06.921 "write_zeroes": true, 00:06:06.921 "zcopy": true, 00:06:06.921 "get_zone_info": false, 00:06:06.921 "zone_management": false, 00:06:06.921 "zone_append": false, 00:06:06.921 "compare": false, 00:06:06.921 "compare_and_write": false, 00:06:06.921 "abort": true, 00:06:06.921 "seek_hole": false, 00:06:06.921 "seek_data": false, 00:06:06.921 "copy": true, 00:06:06.921 "nvme_iov_md": false 00:06:06.921 }, 00:06:06.921 "memory_domains": [ 00:06:06.921 { 00:06:06.921 "dma_device_id": "system", 00:06:06.921 "dma_device_type": 1 00:06:06.921 }, 00:06:06.921 { 00:06:06.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:06.921 "dma_device_type": 2 00:06:06.921 } 00:06:06.921 ], 00:06:06.921 "driver_specific": {} 00:06:06.921 }, 00:06:06.921 { 00:06:06.921 "name": "Passthru0", 00:06:06.921 "aliases": [ 00:06:06.921 "e5408e92-25a9-5cc8-a287-7c336a51461c" 00:06:06.921 ], 00:06:06.921 "product_name": "passthru", 00:06:06.921 "block_size": 512, 00:06:06.921 "num_blocks": 16384, 00:06:06.921 "uuid": "e5408e92-25a9-5cc8-a287-7c336a51461c", 00:06:06.921 "assigned_rate_limits": { 00:06:06.921 "rw_ios_per_sec": 0, 00:06:06.921 "rw_mbytes_per_sec": 0, 00:06:06.921 "r_mbytes_per_sec": 0, 00:06:06.921 "w_mbytes_per_sec": 0 00:06:06.921 }, 00:06:06.921 "claimed": false, 00:06:06.921 "zoned": false, 00:06:06.921 "supported_io_types": { 00:06:06.921 "read": true, 00:06:06.921 "write": true, 00:06:06.921 "unmap": true, 00:06:06.921 "flush": true, 00:06:06.921 "reset": true, 00:06:06.921 "nvme_admin": false, 00:06:06.921 "nvme_io": false, 00:06:06.921 "nvme_io_md": false, 00:06:06.921 
"write_zeroes": true, 00:06:06.921 "zcopy": true, 00:06:06.921 "get_zone_info": false, 00:06:06.921 "zone_management": false, 00:06:06.921 "zone_append": false, 00:06:06.921 "compare": false, 00:06:06.921 "compare_and_write": false, 00:06:06.921 "abort": true, 00:06:06.921 "seek_hole": false, 00:06:06.921 "seek_data": false, 00:06:06.921 "copy": true, 00:06:06.921 "nvme_iov_md": false 00:06:06.921 }, 00:06:06.921 "memory_domains": [ 00:06:06.921 { 00:06:06.921 "dma_device_id": "system", 00:06:06.921 "dma_device_type": 1 00:06:06.921 }, 00:06:06.921 { 00:06:06.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:06.921 "dma_device_type": 2 00:06:06.921 } 00:06:06.921 ], 00:06:06.921 "driver_specific": { 00:06:06.921 "passthru": { 00:06:06.921 "name": "Passthru0", 00:06:06.921 "base_bdev_name": "Malloc2" 00:06:06.921 } 00:06:06.921 } 00:06:06.921 } 00:06:06.921 ]' 00:06:06.921 12:27:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:06.921 12:27:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:06.921 12:27:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:06.921 12:27:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.921 12:27:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.921 12:27:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.921 12:27:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:06.921 12:27:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.921 12:27:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.921 12:27:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.921 12:27:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:06.921 12:27:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.921 12:27:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.921 12:27:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.921 12:27:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:06.921 12:27:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:06.921 12:27:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:06.921 00:06:06.921 real 0m0.276s 00:06:06.921 user 0m0.170s 00:06:06.921 sys 0m0.038s 00:06:06.921 12:27:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.921 12:27:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.921 ************************************ 00:06:06.921 END TEST rpc_daemon_integrity 00:06:06.921 ************************************ 00:06:06.921 12:27:32 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:06.921 12:27:32 rpc -- rpc/rpc.sh@84 -- # killprocess 152310 00:06:06.921 12:27:32 rpc -- common/autotest_common.sh@950 -- # '[' -z 152310 ']' 00:06:06.921 12:27:32 rpc -- common/autotest_common.sh@954 -- # kill -0 152310 00:06:06.921 12:27:32 rpc -- common/autotest_common.sh@955 -- # uname 00:06:06.921 12:27:32 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:06.921 12:27:32 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 152310 00:06:06.921 12:27:32 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:06.921 12:27:32 rpc -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:06.921 12:27:32 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 152310' 00:06:06.921 killing process with pid 152310 00:06:06.921 12:27:32 rpc -- common/autotest_common.sh@969 -- # kill 152310 00:06:06.921 12:27:32 rpc -- common/autotest_common.sh@974 -- # wait 152310 00:06:07.491 00:06:07.491 real 0m2.084s 00:06:07.491 user 0m2.642s 00:06:07.491 sys 0m0.700s 00:06:07.491 12:27:33 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:07.491 12:27:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.491 ************************************ 00:06:07.491 END TEST rpc 00:06:07.491 ************************************ 00:06:07.491 12:27:33 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:07.491 12:27:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:07.491 12:27:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.491 12:27:33 -- common/autotest_common.sh@10 -- # set +x 00:06:07.491 ************************************ 00:06:07.491 START TEST skip_rpc 00:06:07.491 ************************************ 00:06:07.491 12:27:33 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:07.491 * Looking for test storage... 00:06:07.491 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:07.491 12:27:33 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:07.491 12:27:33 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:07.491 12:27:33 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:07.491 12:27:33 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:07.491 12:27:33 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:07.491 12:27:33 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:07.491 12:27:33 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:07.491 12:27:33 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.491 12:27:33 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:07.491 12:27:33 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:07.491 12:27:33 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:07.491 12:27:33 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:07.491 12:27:33 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:07.491 12:27:33 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:07.491 12:27:33 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:07.491 12:27:33 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:07.491 12:27:33 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:07.491 12:27:33 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:07.491 12:27:33 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:07.491 12:27:33 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:07.491 12:27:33 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:07.491 12:27:33 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.491 12:27:33 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:07.491 12:27:33 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:07.491 12:27:33 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:07.491 12:27:33 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:07.491 12:27:33 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.491 12:27:33 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:07.491 12:27:33 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:07.491 12:27:33 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:07.491 12:27:33 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:07.491 12:27:33 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:07.491 12:27:33 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.491 12:27:33 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:07.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.491 --rc genhtml_branch_coverage=1 00:06:07.491 --rc genhtml_function_coverage=1 00:06:07.491 --rc genhtml_legend=1 00:06:07.491 --rc geninfo_all_blocks=1 00:06:07.491 --rc geninfo_unexecuted_blocks=1 00:06:07.491 00:06:07.491 ' 00:06:07.491 12:27:33 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:07.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.491 --rc genhtml_branch_coverage=1 00:06:07.491 --rc genhtml_function_coverage=1 00:06:07.491 --rc genhtml_legend=1 00:06:07.491 --rc geninfo_all_blocks=1 00:06:07.491 --rc geninfo_unexecuted_blocks=1 00:06:07.491 00:06:07.491 ' 00:06:07.491 12:27:33 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:07.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.491 --rc genhtml_branch_coverage=1 00:06:07.491 --rc genhtml_function_coverage=1 00:06:07.491 --rc genhtml_legend=1 00:06:07.491 --rc geninfo_all_blocks=1 00:06:07.491 --rc geninfo_unexecuted_blocks=1 00:06:07.491 00:06:07.491 ' 00:06:07.491 12:27:33 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:07.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.491 --rc genhtml_branch_coverage=1 00:06:07.491 --rc genhtml_function_coverage=1 00:06:07.491 --rc genhtml_legend=1 00:06:07.491 --rc geninfo_all_blocks=1 00:06:07.491 --rc geninfo_unexecuted_blocks=1 00:06:07.491 00:06:07.491 ' 00:06:07.491 12:27:33 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:07.491 12:27:33 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:07.491 12:27:33 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:07.491 12:27:33 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:07.491 12:27:33 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.491 12:27:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.778 ************************************ 00:06:07.778 START TEST skip_rpc 00:06:07.778 ************************************ 00:06:07.778 12:27:33 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:06:07.778 
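The skip_rpc case that starts here is a negative test: spdk_tgt is launched with --no-rpc-server, so the RPC socket never appears and every call must fail; the NOT rpc_cmd wrapper below asserts exactly that by requiring a nonzero exit. A minimal standalone sketch of the same check (binary and script paths assume an SPDK source tree):

  # with --no-rpc-server, spdk_get_version must fail; success is the bug
  build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  tgt=$!
  sleep 5
  if scripts/rpc.py spdk_get_version &>/dev/null; then
      echo 'unexpected: RPC server is answering' >&2
      kill "$tgt"
      exit 1
  fi
  kill "$tgt" && wait "$tgt" 2>/dev/null
  echo 'ok: RPC correctly unavailable'
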
12:27:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=152890 00:06:07.778 12:27:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:07.778 12:27:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:07.778 12:27:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:07.778 [2024-12-16 12:27:33.619592] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:07.778 [2024-12-16 12:27:33.619627] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152890 ] 00:06:07.778 [2024-12-16 12:27:33.687265] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.778 [2024-12-16 12:27:33.725869] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.051 12:27:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:13.051 12:27:38 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:13.051 12:27:38 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:13.051 12:27:38 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:13.051 12:27:38 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:13.051 12:27:38 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:13.051 12:27:38 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:13.051 12:27:38 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:13.051 12:27:38 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:13.051 12:27:38 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.051 12:27:38 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:13.051 12:27:38 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:13.051 12:27:38 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:13.051 12:27:38 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:13.051 12:27:38 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:13.051 12:27:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:13.051 12:27:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 152890 00:06:13.051 12:27:38 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 152890 ']' 00:06:13.051 12:27:38 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 152890 00:06:13.051 12:27:38 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:06:13.051 12:27:38 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:13.051 12:27:38 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 152890 00:06:13.051 12:27:38 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:13.051 12:27:38 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:13.051 12:27:38 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 152890' 00:06:13.051 killing process with pid 152890 00:06:13.051 12:27:38 
skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 152890 00:06:13.051 12:27:38 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 152890 00:06:13.051 00:06:13.051 real 0m5.377s 00:06:13.051 user 0m5.124s 00:06:13.051 sys 0m0.286s 00:06:13.051 12:27:38 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:13.051 12:27:38 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.051 ************************************ 00:06:13.051 END TEST skip_rpc 00:06:13.051 ************************************ 00:06:13.051 12:27:38 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:13.051 12:27:38 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:13.051 12:27:38 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:13.051 12:27:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.051 ************************************ 00:06:13.051 START TEST skip_rpc_with_json 00:06:13.051 ************************************ 00:06:13.051 12:27:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:06:13.051 12:27:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:13.051 12:27:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=153816 00:06:13.051 12:27:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:13.051 12:27:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:13.051 12:27:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 153816 00:06:13.051 12:27:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 153816 ']' 00:06:13.051 12:27:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.051 12:27:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:13.051 12:27:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.051 12:27:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:13.051 12:27:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:13.051 [2024-12-16 12:27:39.064220] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
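skip_rpc_with_json starts a fresh target (pid 153816) and exercises the JSON config round-trip: the first nvmf_get_transports call fails with "No such device" because no TCP transport exists yet, the test then creates one and snapshots the whole runtime state with save_config, producing the subsystem dump that follows. The same round-trip against a running target, sketched with an illustrative temp path:

  scripts/rpc.py nvmf_get_transports --trtype tcp || true   # expected to fail before creation
  scripts/rpc.py nvmf_create_transport -t tcp
  scripts/rpc.py save_config > /tmp/spdk_config.json
  jq '.subsystems[] | select(.subsystem == "nvmf")' /tmp/spdk_config.json
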
00:06:13.051 [2024-12-16 12:27:39.064259] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid153816 ] 00:06:13.312 [2024-12-16 12:27:39.130834] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.312 [2024-12-16 12:27:39.170312] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.312 12:27:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:13.312 12:27:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:06:13.312 12:27:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:13.312 12:27:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:13.312 12:27:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:13.312 [2024-12-16 12:27:39.365021] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:13.312 request: 00:06:13.312 { 00:06:13.312 "trtype": "tcp", 00:06:13.312 "method": "nvmf_get_transports", 00:06:13.312 "req_id": 1 00:06:13.312 } 00:06:13.312 Got JSON-RPC error response 00:06:13.312 response: 00:06:13.312 { 00:06:13.312 "code": -19, 00:06:13.312 "message": "No such device" 00:06:13.312 } 00:06:13.312 12:27:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:13.312 12:27:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:13.312 12:27:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:13.312 12:27:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:13.312 [2024-12-16 12:27:39.377139] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:13.571 12:27:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:13.571 12:27:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:13.571 12:27:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:13.571 12:27:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:13.571 12:27:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:13.571 12:27:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:13.571 { 00:06:13.571 "subsystems": [ 00:06:13.571 { 00:06:13.571 "subsystem": "fsdev", 00:06:13.571 "config": [ 00:06:13.571 { 00:06:13.571 "method": "fsdev_set_opts", 00:06:13.571 "params": { 00:06:13.571 "fsdev_io_pool_size": 65535, 00:06:13.571 "fsdev_io_cache_size": 256 00:06:13.571 } 00:06:13.571 } 00:06:13.571 ] 00:06:13.571 }, 00:06:13.571 { 00:06:13.571 "subsystem": "vfio_user_target", 00:06:13.571 "config": null 00:06:13.571 }, 00:06:13.571 { 00:06:13.571 "subsystem": "keyring", 00:06:13.571 "config": [] 00:06:13.571 }, 00:06:13.571 { 00:06:13.571 "subsystem": "iobuf", 00:06:13.571 "config": [ 00:06:13.571 { 00:06:13.571 "method": "iobuf_set_options", 00:06:13.571 "params": { 00:06:13.571 "small_pool_count": 8192, 00:06:13.571 "large_pool_count": 1024, 00:06:13.571 "small_bufsize": 8192, 00:06:13.571 "large_bufsize": 135168 00:06:13.571 } 00:06:13.571 } 00:06:13.571 ] 00:06:13.571 }, 00:06:13.571 { 
00:06:13.571 "subsystem": "sock", 00:06:13.571 "config": [ 00:06:13.571 { 00:06:13.571 "method": "sock_set_default_impl", 00:06:13.571 "params": { 00:06:13.571 "impl_name": "posix" 00:06:13.571 } 00:06:13.571 }, 00:06:13.571 { 00:06:13.571 "method": "sock_impl_set_options", 00:06:13.571 "params": { 00:06:13.571 "impl_name": "ssl", 00:06:13.571 "recv_buf_size": 4096, 00:06:13.571 "send_buf_size": 4096, 00:06:13.571 "enable_recv_pipe": true, 00:06:13.571 "enable_quickack": false, 00:06:13.571 "enable_placement_id": 0, 00:06:13.571 "enable_zerocopy_send_server": true, 00:06:13.571 "enable_zerocopy_send_client": false, 00:06:13.571 "zerocopy_threshold": 0, 00:06:13.571 "tls_version": 0, 00:06:13.571 "enable_ktls": false 00:06:13.571 } 00:06:13.571 }, 00:06:13.571 { 00:06:13.571 "method": "sock_impl_set_options", 00:06:13.571 "params": { 00:06:13.571 "impl_name": "posix", 00:06:13.571 "recv_buf_size": 2097152, 00:06:13.571 "send_buf_size": 2097152, 00:06:13.571 "enable_recv_pipe": true, 00:06:13.571 "enable_quickack": false, 00:06:13.571 "enable_placement_id": 0, 00:06:13.571 "enable_zerocopy_send_server": true, 00:06:13.571 "enable_zerocopy_send_client": false, 00:06:13.571 "zerocopy_threshold": 0, 00:06:13.571 "tls_version": 0, 00:06:13.571 "enable_ktls": false 00:06:13.571 } 00:06:13.571 } 00:06:13.571 ] 00:06:13.571 }, 00:06:13.571 { 00:06:13.571 "subsystem": "vmd", 00:06:13.571 "config": [] 00:06:13.571 }, 00:06:13.572 { 00:06:13.572 "subsystem": "accel", 00:06:13.572 "config": [ 00:06:13.572 { 00:06:13.572 "method": "accel_set_options", 00:06:13.572 "params": { 00:06:13.572 "small_cache_size": 128, 00:06:13.572 "large_cache_size": 16, 00:06:13.572 "task_count": 2048, 00:06:13.572 "sequence_count": 2048, 00:06:13.572 "buf_count": 2048 00:06:13.572 } 00:06:13.572 } 00:06:13.572 ] 00:06:13.572 }, 00:06:13.572 { 00:06:13.572 "subsystem": "bdev", 00:06:13.572 "config": [ 00:06:13.572 { 00:06:13.572 "method": "bdev_set_options", 00:06:13.572 "params": { 00:06:13.572 "bdev_io_pool_size": 65535, 00:06:13.572 "bdev_io_cache_size": 256, 00:06:13.572 "bdev_auto_examine": true, 00:06:13.572 "iobuf_small_cache_size": 128, 00:06:13.572 "iobuf_large_cache_size": 16 00:06:13.572 } 00:06:13.572 }, 00:06:13.572 { 00:06:13.572 "method": "bdev_raid_set_options", 00:06:13.572 "params": { 00:06:13.572 "process_window_size_kb": 1024, 00:06:13.572 "process_max_bandwidth_mb_sec": 0 00:06:13.572 } 00:06:13.572 }, 00:06:13.572 { 00:06:13.572 "method": "bdev_iscsi_set_options", 00:06:13.572 "params": { 00:06:13.572 "timeout_sec": 30 00:06:13.572 } 00:06:13.572 }, 00:06:13.572 { 00:06:13.572 "method": "bdev_nvme_set_options", 00:06:13.572 "params": { 00:06:13.572 "action_on_timeout": "none", 00:06:13.572 "timeout_us": 0, 00:06:13.572 "timeout_admin_us": 0, 00:06:13.572 "keep_alive_timeout_ms": 10000, 00:06:13.572 "arbitration_burst": 0, 00:06:13.572 "low_priority_weight": 0, 00:06:13.572 "medium_priority_weight": 0, 00:06:13.572 "high_priority_weight": 0, 00:06:13.572 "nvme_adminq_poll_period_us": 10000, 00:06:13.572 "nvme_ioq_poll_period_us": 0, 00:06:13.572 "io_queue_requests": 0, 00:06:13.572 "delay_cmd_submit": true, 00:06:13.572 "transport_retry_count": 4, 00:06:13.572 "bdev_retry_count": 3, 00:06:13.572 "transport_ack_timeout": 0, 00:06:13.572 "ctrlr_loss_timeout_sec": 0, 00:06:13.572 "reconnect_delay_sec": 0, 00:06:13.572 "fast_io_fail_timeout_sec": 0, 00:06:13.572 "disable_auto_failback": false, 00:06:13.572 "generate_uuids": false, 00:06:13.572 "transport_tos": 0, 00:06:13.572 "nvme_error_stat": false, 
00:06:13.572 "rdma_srq_size": 0, 00:06:13.572 "io_path_stat": false, 00:06:13.572 "allow_accel_sequence": false, 00:06:13.572 "rdma_max_cq_size": 0, 00:06:13.572 "rdma_cm_event_timeout_ms": 0, 00:06:13.572 "dhchap_digests": [ 00:06:13.572 "sha256", 00:06:13.572 "sha384", 00:06:13.572 "sha512" 00:06:13.572 ], 00:06:13.572 "dhchap_dhgroups": [ 00:06:13.572 "null", 00:06:13.572 "ffdhe2048", 00:06:13.572 "ffdhe3072", 00:06:13.572 "ffdhe4096", 00:06:13.572 "ffdhe6144", 00:06:13.572 "ffdhe8192" 00:06:13.572 ] 00:06:13.572 } 00:06:13.572 }, 00:06:13.572 { 00:06:13.572 "method": "bdev_nvme_set_hotplug", 00:06:13.572 "params": { 00:06:13.572 "period_us": 100000, 00:06:13.572 "enable": false 00:06:13.572 } 00:06:13.572 }, 00:06:13.572 { 00:06:13.572 "method": "bdev_wait_for_examine" 00:06:13.572 } 00:06:13.572 ] 00:06:13.572 }, 00:06:13.572 { 00:06:13.572 "subsystem": "scsi", 00:06:13.572 "config": null 00:06:13.572 }, 00:06:13.572 { 00:06:13.572 "subsystem": "scheduler", 00:06:13.572 "config": [ 00:06:13.572 { 00:06:13.572 "method": "framework_set_scheduler", 00:06:13.572 "params": { 00:06:13.572 "name": "static" 00:06:13.572 } 00:06:13.572 } 00:06:13.572 ] 00:06:13.572 }, 00:06:13.572 { 00:06:13.572 "subsystem": "vhost_scsi", 00:06:13.572 "config": [] 00:06:13.572 }, 00:06:13.572 { 00:06:13.572 "subsystem": "vhost_blk", 00:06:13.572 "config": [] 00:06:13.572 }, 00:06:13.572 { 00:06:13.572 "subsystem": "ublk", 00:06:13.572 "config": [] 00:06:13.572 }, 00:06:13.572 { 00:06:13.572 "subsystem": "nbd", 00:06:13.572 "config": [] 00:06:13.572 }, 00:06:13.572 { 00:06:13.572 "subsystem": "nvmf", 00:06:13.572 "config": [ 00:06:13.572 { 00:06:13.572 "method": "nvmf_set_config", 00:06:13.572 "params": { 00:06:13.572 "discovery_filter": "match_any", 00:06:13.572 "admin_cmd_passthru": { 00:06:13.572 "identify_ctrlr": false 00:06:13.572 }, 00:06:13.572 "dhchap_digests": [ 00:06:13.572 "sha256", 00:06:13.572 "sha384", 00:06:13.572 "sha512" 00:06:13.572 ], 00:06:13.572 "dhchap_dhgroups": [ 00:06:13.572 "null", 00:06:13.572 "ffdhe2048", 00:06:13.572 "ffdhe3072", 00:06:13.572 "ffdhe4096", 00:06:13.572 "ffdhe6144", 00:06:13.572 "ffdhe8192" 00:06:13.572 ] 00:06:13.572 } 00:06:13.572 }, 00:06:13.572 { 00:06:13.572 "method": "nvmf_set_max_subsystems", 00:06:13.572 "params": { 00:06:13.572 "max_subsystems": 1024 00:06:13.572 } 00:06:13.572 }, 00:06:13.572 { 00:06:13.572 "method": "nvmf_set_crdt", 00:06:13.572 "params": { 00:06:13.572 "crdt1": 0, 00:06:13.572 "crdt2": 0, 00:06:13.572 "crdt3": 0 00:06:13.572 } 00:06:13.572 }, 00:06:13.572 { 00:06:13.572 "method": "nvmf_create_transport", 00:06:13.572 "params": { 00:06:13.572 "trtype": "TCP", 00:06:13.572 "max_queue_depth": 128, 00:06:13.572 "max_io_qpairs_per_ctrlr": 127, 00:06:13.572 "in_capsule_data_size": 4096, 00:06:13.572 "max_io_size": 131072, 00:06:13.572 "io_unit_size": 131072, 00:06:13.572 "max_aq_depth": 128, 00:06:13.572 "num_shared_buffers": 511, 00:06:13.572 "buf_cache_size": 4294967295, 00:06:13.572 "dif_insert_or_strip": false, 00:06:13.572 "zcopy": false, 00:06:13.572 "c2h_success": true, 00:06:13.572 "sock_priority": 0, 00:06:13.572 "abort_timeout_sec": 1, 00:06:13.572 "ack_timeout": 0, 00:06:13.572 "data_wr_pool_size": 0 00:06:13.572 } 00:06:13.572 } 00:06:13.572 ] 00:06:13.572 }, 00:06:13.572 { 00:06:13.572 "subsystem": "iscsi", 00:06:13.572 "config": [ 00:06:13.572 { 00:06:13.572 "method": "iscsi_set_options", 00:06:13.572 "params": { 00:06:13.572 "node_base": "iqn.2016-06.io.spdk", 00:06:13.572 "max_sessions": 128, 00:06:13.572 
"max_connections_per_session": 2, 00:06:13.572 "max_queue_depth": 64, 00:06:13.572 "default_time2wait": 2, 00:06:13.572 "default_time2retain": 20, 00:06:13.572 "first_burst_length": 8192, 00:06:13.572 "immediate_data": true, 00:06:13.572 "allow_duplicated_isid": false, 00:06:13.572 "error_recovery_level": 0, 00:06:13.572 "nop_timeout": 60, 00:06:13.572 "nop_in_interval": 30, 00:06:13.572 "disable_chap": false, 00:06:13.572 "require_chap": false, 00:06:13.572 "mutual_chap": false, 00:06:13.572 "chap_group": 0, 00:06:13.572 "max_large_datain_per_connection": 64, 00:06:13.572 "max_r2t_per_connection": 4, 00:06:13.572 "pdu_pool_size": 36864, 00:06:13.572 "immediate_data_pool_size": 16384, 00:06:13.572 "data_out_pool_size": 2048 00:06:13.572 } 00:06:13.572 } 00:06:13.572 ] 00:06:13.572 } 00:06:13.572 ] 00:06:13.572 } 00:06:13.572 12:27:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:13.572 12:27:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 153816 00:06:13.572 12:27:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 153816 ']' 00:06:13.572 12:27:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 153816 00:06:13.572 12:27:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:13.572 12:27:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:13.572 12:27:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 153816 00:06:13.572 12:27:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:13.572 12:27:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:13.572 12:27:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 153816' 00:06:13.572 killing process with pid 153816 00:06:13.572 12:27:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 153816 00:06:13.572 12:27:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 153816 00:06:14.140 12:27:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:14.140 12:27:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=153895 00:06:14.140 12:27:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:19.412 12:27:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 153895 00:06:19.412 12:27:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 153895 ']' 00:06:19.412 12:27:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 153895 00:06:19.412 12:27:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:19.412 12:27:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:19.412 12:27:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 153895 00:06:19.412 12:27:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:19.412 12:27:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:19.412 12:27:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with 
pid 153895' 00:06:19.412 killing process with pid 153895 00:06:19.412 12:27:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 153895 00:06:19.412 12:27:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 153895 00:06:19.412 12:27:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:19.412 12:27:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:19.413 00:06:19.413 real 0m6.279s 00:06:19.413 user 0m5.976s 00:06:19.413 sys 0m0.602s 00:06:19.413 12:27:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.413 12:27:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:19.413 ************************************ 00:06:19.413 END TEST skip_rpc_with_json 00:06:19.413 ************************************ 00:06:19.413 12:27:45 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:19.413 12:27:45 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:19.413 12:27:45 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.413 12:27:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.413 ************************************ 00:06:19.413 START TEST skip_rpc_with_delay 00:06:19.413 ************************************ 00:06:19.413 12:27:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:19.413 12:27:45 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:19.413 12:27:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:19.413 12:27:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:19.413 12:27:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.413 12:27:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.413 12:27:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.413 12:27:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.413 12:27:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.413 12:27:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.413 12:27:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.413 12:27:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:19.413 12:27:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:19.413 [2024-12-16 12:27:45.412268] app.c: 
840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:06:19.413 [2024-12-16 12:27:45.412326] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:19.413 12:27:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:19.413 12:27:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:19.413 12:27:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:19.413 12:27:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:19.413 00:06:19.413 real 0m0.069s 00:06:19.413 user 0m0.044s 00:06:19.413 sys 0m0.024s 00:06:19.413 12:27:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.413 12:27:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:19.413 ************************************ 00:06:19.413 END TEST skip_rpc_with_delay 00:06:19.413 ************************************ 00:06:19.413 12:27:45 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:19.413 12:27:45 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:19.413 12:27:45 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:19.413 12:27:45 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:19.413 12:27:45 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.413 12:27:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.673 ************************************ 00:06:19.673 START TEST exit_on_failed_rpc_init 00:06:19.673 ************************************ 00:06:19.673 12:27:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:19.673 12:27:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=154859 00:06:19.673 12:27:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 154859 00:06:19.673 12:27:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:19.673 12:27:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 154859 ']' 00:06:19.673 12:27:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.673 12:27:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:19.673 12:27:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.673 12:27:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:19.673 12:27:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:19.673 [2024-12-16 12:27:45.546561] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:19.673 [2024-12-16 12:27:45.546601] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid154859 ] 00:06:19.673 [2024-12-16 12:27:45.598919] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.673 [2024-12-16 12:27:45.640178] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.933 12:27:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:19.933 12:27:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:19.933 12:27:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:19.933 12:27:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:19.933 12:27:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:19.933 12:27:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:19.933 12:27:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.933 12:27:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.933 12:27:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.933 12:27:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.933 12:27:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.933 12:27:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.933 12:27:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.933 12:27:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:19.933 12:27:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:19.933 [2024-12-16 12:27:45.900876] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:19.933 [2024-12-16 12:27:45.900924] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid155023 ] 00:06:19.933 [2024-12-16 12:27:45.970283] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.192 [2024-12-16 12:27:46.009325] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.192 [2024-12-16 12:27:46.009382] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
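The failure exercised here comes from both targets defaulting to the same RPC socket: the first spdk_tgt owns /var/tmp/spdk.sock, so the second cannot listen and exits non-zero, which is exactly what the NOT wrapper asserts. A standalone sketch of the collision, with the suite's trap/cleanup and waitforlisten steps omitted and a crude sleep in their place:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/bin/spdk_tgt" -m 0x1 &    # first target claims /var/tmp/spdk.sock
  sleep 1                                # crude stand-in for waitforlisten
  # same default socket, different core mask: rpc.c refuses to listen and
  # spdk_app_start fails, so a non-zero exit here is the expected outcome
  if ! "$SPDK/build/bin/spdk_tgt" -m 0x2; then
      echo 'second target failed as expected'
  fi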
00:06:20.192 [2024-12-16 12:27:46.009391] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:20.192 [2024-12-16 12:27:46.009397] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:20.193 12:27:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:20.193 12:27:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:20.193 12:27:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:20.193 12:27:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:20.193 12:27:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:20.193 12:27:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:20.193 12:27:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:20.193 12:27:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 154859 00:06:20.193 12:27:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 154859 ']' 00:06:20.193 12:27:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 154859 00:06:20.193 12:27:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:20.193 12:27:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:20.193 12:27:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 154859 00:06:20.193 12:27:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:20.193 12:27:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:20.193 12:27:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 154859' 00:06:20.193 killing process with pid 154859 00:06:20.193 12:27:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 154859 00:06:20.193 12:27:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 154859 00:06:20.452 00:06:20.452 real 0m0.942s 00:06:20.452 user 0m1.027s 00:06:20.452 sys 0m0.399s 00:06:20.452 12:27:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.452 12:27:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:20.452 ************************************ 00:06:20.452 END TEST exit_on_failed_rpc_init 00:06:20.452 ************************************ 00:06:20.452 12:27:46 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:20.452 00:06:20.452 real 0m13.115s 00:06:20.452 user 0m12.377s 00:06:20.452 sys 0m1.585s 00:06:20.452 12:27:46 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.452 12:27:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.452 ************************************ 00:06:20.452 END TEST skip_rpc 00:06:20.452 ************************************ 00:06:20.452 12:27:46 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:20.452 12:27:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:20.452 12:27:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.452 12:27:46 -- 
common/autotest_common.sh@10 -- # set +x 00:06:20.712 ************************************ 00:06:20.712 START TEST rpc_client 00:06:20.712 ************************************ 00:06:20.712 12:27:46 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:20.712 * Looking for test storage... 00:06:20.712 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:20.712 12:27:46 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:20.712 12:27:46 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:06:20.712 12:27:46 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:20.712 12:27:46 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:20.712 12:27:46 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.712 12:27:46 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.712 12:27:46 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.712 12:27:46 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.712 12:27:46 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.712 12:27:46 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.712 12:27:46 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.712 12:27:46 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.712 12:27:46 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.712 12:27:46 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.712 12:27:46 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.712 12:27:46 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:20.712 12:27:46 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:20.712 12:27:46 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.712 12:27:46 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:20.712 12:27:46 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:20.712 12:27:46 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:20.712 12:27:46 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.712 12:27:46 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:20.712 12:27:46 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.712 12:27:46 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:20.712 12:27:46 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:20.712 12:27:46 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.712 12:27:46 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:20.712 12:27:46 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.712 12:27:46 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.712 12:27:46 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.712 12:27:46 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:20.712 12:27:46 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.712 12:27:46 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:20.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.712 --rc genhtml_branch_coverage=1 00:06:20.712 --rc genhtml_function_coverage=1 00:06:20.712 --rc genhtml_legend=1 00:06:20.712 --rc geninfo_all_blocks=1 00:06:20.712 --rc geninfo_unexecuted_blocks=1 00:06:20.712 00:06:20.712 ' 00:06:20.712 12:27:46 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:20.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.712 --rc genhtml_branch_coverage=1 00:06:20.712 --rc genhtml_function_coverage=1 00:06:20.712 --rc genhtml_legend=1 00:06:20.712 --rc geninfo_all_blocks=1 00:06:20.712 --rc geninfo_unexecuted_blocks=1 00:06:20.712 00:06:20.712 ' 00:06:20.712 12:27:46 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:20.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.712 --rc genhtml_branch_coverage=1 00:06:20.712 --rc genhtml_function_coverage=1 00:06:20.712 --rc genhtml_legend=1 00:06:20.712 --rc geninfo_all_blocks=1 00:06:20.712 --rc geninfo_unexecuted_blocks=1 00:06:20.712 00:06:20.712 ' 00:06:20.712 12:27:46 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:20.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.712 --rc genhtml_branch_coverage=1 00:06:20.712 --rc genhtml_function_coverage=1 00:06:20.712 --rc genhtml_legend=1 00:06:20.712 --rc geninfo_all_blocks=1 00:06:20.712 --rc geninfo_unexecuted_blocks=1 00:06:20.712 00:06:20.713 ' 00:06:20.713 12:27:46 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:20.713 OK 00:06:20.713 12:27:46 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:20.713 00:06:20.713 real 0m0.199s 00:06:20.713 user 0m0.123s 00:06:20.713 sys 0m0.089s 00:06:20.713 12:27:46 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.713 12:27:46 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:20.713 ************************************ 00:06:20.713 END TEST rpc_client 00:06:20.713 ************************************ 00:06:20.973 12:27:46 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
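The json_config suite that starts here drives a dedicated target over its own RPC socket, /var/tmp/spdk_tgt.sock, rather than the default /var/tmp/spdk.sock that the previous test deliberately collided on. A minimal sketch of that pattern, using flags and RPC methods visible in the trace; the rpc_get_methods polling loop is a stand-in for the suite's waitforlisten helper, and config.json is a placeholder name:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # start the target idle on a private RPC socket; --wait-for-rpc defers
  # subsystem initialization until configuration arrives over RPC
  "$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  # poll until the socket answers (readiness probe; an assumption, the suite
  # relies on its own waitforlisten helper for this)
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done
  # replay a saved configuration into the waiting target
  "$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock load_config < config.json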
00:06:20.973 12:27:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:20.973 12:27:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.973 12:27:46 -- common/autotest_common.sh@10 -- # set +x 00:06:20.973 ************************************ 00:06:20.973 START TEST json_config 00:06:20.973 ************************************ 00:06:20.973 12:27:46 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:20.973 12:27:46 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:20.973 12:27:46 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:06:20.973 12:27:46 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:20.973 12:27:46 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:20.973 12:27:46 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.973 12:27:46 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.973 12:27:46 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.973 12:27:46 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.973 12:27:46 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.973 12:27:46 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.973 12:27:46 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.973 12:27:46 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.973 12:27:46 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.973 12:27:46 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.973 12:27:46 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.973 12:27:46 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:20.973 12:27:46 json_config -- scripts/common.sh@345 -- # : 1 00:06:20.973 12:27:46 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.973 12:27:46 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:20.973 12:27:46 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:20.973 12:27:46 json_config -- scripts/common.sh@353 -- # local d=1 00:06:20.973 12:27:46 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.973 12:27:46 json_config -- scripts/common.sh@355 -- # echo 1 00:06:20.973 12:27:46 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.973 12:27:46 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:20.973 12:27:46 json_config -- scripts/common.sh@353 -- # local d=2 00:06:20.973 12:27:46 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.973 12:27:46 json_config -- scripts/common.sh@355 -- # echo 2 00:06:20.973 12:27:46 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.973 12:27:46 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.973 12:27:46 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.973 12:27:46 json_config -- scripts/common.sh@368 -- # return 0 00:06:20.973 12:27:46 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.973 12:27:46 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:20.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.973 --rc genhtml_branch_coverage=1 00:06:20.973 --rc genhtml_function_coverage=1 00:06:20.973 --rc genhtml_legend=1 00:06:20.973 --rc geninfo_all_blocks=1 00:06:20.973 --rc geninfo_unexecuted_blocks=1 00:06:20.973 00:06:20.973 ' 00:06:20.973 12:27:46 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:20.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.973 --rc genhtml_branch_coverage=1 00:06:20.973 --rc genhtml_function_coverage=1 00:06:20.973 --rc genhtml_legend=1 00:06:20.973 --rc geninfo_all_blocks=1 00:06:20.973 --rc geninfo_unexecuted_blocks=1 00:06:20.973 00:06:20.973 ' 00:06:20.973 12:27:46 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:20.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.973 --rc genhtml_branch_coverage=1 00:06:20.973 --rc genhtml_function_coverage=1 00:06:20.973 --rc genhtml_legend=1 00:06:20.973 --rc geninfo_all_blocks=1 00:06:20.973 --rc geninfo_unexecuted_blocks=1 00:06:20.973 00:06:20.973 ' 00:06:20.973 12:27:46 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:20.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.973 --rc genhtml_branch_coverage=1 00:06:20.973 --rc genhtml_function_coverage=1 00:06:20.973 --rc genhtml_legend=1 00:06:20.973 --rc geninfo_all_blocks=1 00:06:20.973 --rc geninfo_unexecuted_blocks=1 00:06:20.973 00:06:20.973 ' 00:06:20.973 12:27:46 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:20.973 12:27:46 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:20.973 12:27:46 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:20.973 12:27:46 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:20.973 12:27:46 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:20.973 12:27:46 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:20.973 12:27:46 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:20.973 12:27:46 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:20.973 12:27:46 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:06:20.973 12:27:46 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:20.973 12:27:46 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:20.973 12:27:46 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:20.973 12:27:46 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:06:20.973 12:27:46 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:06:20.973 12:27:46 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:20.973 12:27:46 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:20.973 12:27:46 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:20.973 12:27:46 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:20.973 12:27:46 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:20.973 12:27:46 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:20.973 12:27:46 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:20.973 12:27:46 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:20.973 12:27:46 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:20.973 12:27:46 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.973 12:27:46 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.974 12:27:46 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.974 12:27:46 json_config -- paths/export.sh@5 -- # export PATH 00:06:20.974 12:27:46 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.974 12:27:46 json_config -- nvmf/common.sh@51 -- # : 0 00:06:20.974 12:27:46 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:20.974 12:27:46 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:06:20.974 12:27:46 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:20.974 12:27:46 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:20.974 12:27:46 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:20.974 12:27:46 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:20.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:20.974 12:27:46 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:20.974 12:27:46 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:20.974 12:27:46 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:20.974 12:27:46 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:20.974 12:27:46 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:20.974 12:27:46 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:20.974 12:27:46 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:20.974 12:27:46 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:20.974 12:27:46 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:20.974 12:27:46 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:20.974 12:27:46 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:20.974 12:27:46 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:20.974 12:27:46 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:20.974 12:27:46 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:20.974 12:27:46 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:20.974 12:27:46 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:20.974 12:27:46 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:20.974 12:27:46 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:20.974 12:27:47 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:20.974 INFO: JSON configuration test init 00:06:20.974 12:27:47 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:20.974 12:27:47 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:20.974 12:27:47 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:20.974 12:27:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:20.974 12:27:47 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:20.974 12:27:47 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:20.974 12:27:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:20.974 12:27:47 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:20.974 12:27:47 json_config -- 
json_config/common.sh@9 -- # local app=target 00:06:20.974 12:27:47 json_config -- json_config/common.sh@10 -- # shift 00:06:20.974 12:27:47 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:20.974 12:27:47 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:20.974 12:27:47 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:20.974 12:27:47 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:20.974 12:27:47 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:20.974 12:27:47 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=155243 00:06:20.974 12:27:47 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:20.974 Waiting for target to run... 00:06:20.974 12:27:47 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:20.974 12:27:47 json_config -- json_config/common.sh@25 -- # waitforlisten 155243 /var/tmp/spdk_tgt.sock 00:06:20.974 12:27:47 json_config -- common/autotest_common.sh@831 -- # '[' -z 155243 ']' 00:06:20.974 12:27:47 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:20.974 12:27:47 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:20.974 12:27:47 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:20.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:20.974 12:27:47 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:20.974 12:27:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:21.234 [2024-12-16 12:27:47.060947] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:21.234 [2024-12-16 12:27:47.060995] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid155243 ] 00:06:21.493 [2024-12-16 12:27:47.519086] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.493 [2024-12-16 12:27:47.551194] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.061 12:27:47 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:22.061 12:27:47 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:22.061 12:27:47 json_config -- json_config/common.sh@26 -- # echo '' 00:06:22.061 00:06:22.061 12:27:47 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:22.061 12:27:47 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:22.061 12:27:47 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:22.061 12:27:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:22.061 12:27:47 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:22.061 12:27:47 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:22.061 12:27:47 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:22.061 12:27:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:22.061 12:27:47 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:22.061 12:27:47 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:22.061 12:27:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:25.353 12:27:51 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:06:25.353 12:27:51 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:25.353 12:27:51 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:25.353 12:27:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:25.353 12:27:51 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:25.353 12:27:51 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:25.353 12:27:51 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:25.353 12:27:51 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:25.353 12:27:51 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:25.354 12:27:51 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:25.354 12:27:51 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:25.354 12:27:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:25.354 12:27:51 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:25.354 12:27:51 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:25.354 12:27:51 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:06:25.354 12:27:51 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:25.354 12:27:51 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:25.354 12:27:51 json_config -- json_config/json_config.sh@54 -- # sort 00:06:25.354 12:27:51 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:25.354 12:27:51 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:06:25.354 12:27:51 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:25.354 12:27:51 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:25.354 12:27:51 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:25.354 12:27:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:25.354 12:27:51 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:25.354 12:27:51 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:25.354 12:27:51 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:25.354 12:27:51 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:25.354 12:27:51 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:25.354 12:27:51 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:25.354 12:27:51 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:06:25.354 12:27:51 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:25.354 12:27:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:25.354 12:27:51 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:25.354 12:27:51 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:25.354 12:27:51 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:25.354 12:27:51 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:25.354 12:27:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:25.613 MallocForNvmf0 00:06:25.613 12:27:51 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:25.613 12:27:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:25.613 MallocForNvmf1 00:06:25.872 12:27:51 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:25.872 12:27:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:25.872 [2024-12-16 12:27:51.844208] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:25.872 12:27:51 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:25.872 12:27:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:26.132 12:27:52 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:26.132 12:27:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:26.391 12:27:52 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:26.391 12:27:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:26.391 12:27:52 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:26.391 12:27:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:26.650 [2024-12-16 12:27:52.626569] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:26.650 12:27:52 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:26.650 12:27:52 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:26.650 12:27:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:26.650 12:27:52 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:26.650 12:27:52 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:26.650 12:27:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:26.909 12:27:52 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:06:26.909 12:27:52 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:26.909 12:27:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:26.909 MallocBdevForConfigChangeCheck 00:06:26.909 12:27:52 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:26.909 12:27:52 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:26.909 12:27:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:26.909 12:27:52 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:26.909 12:27:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:27.477 12:27:53 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:06:27.477 INFO: shutting down applications... 
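Everything that clear_config.py is about to tear down was assembled over that same socket. Condensed from the trace above into a standalone sketch; $RPC is shorthand introduced here, while the sizes, names, and transport options are the ones the test actually sent:

  RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  $RPC bdev_malloc_create 8 512 --name MallocForNvmf0     # 8 MB bdev, 512 B blocks
  $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1    # 4 MB bdev, 1024 B blocks
  $RPC nvmf_create_transport -t tcp -u 8192 -c 0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
  # snapshot the result; the suite stores it as spdk_tgt_config.json
  $RPC save_config > spdk_tgt_config.json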
00:06:27.477 12:27:53 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:27.477 12:27:53 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:27.477 12:27:53 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:27.477 12:27:53 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:28.855 Calling clear_iscsi_subsystem 00:06:28.855 Calling clear_nvmf_subsystem 00:06:28.855 Calling clear_nbd_subsystem 00:06:28.855 Calling clear_ublk_subsystem 00:06:28.855 Calling clear_vhost_blk_subsystem 00:06:28.855 Calling clear_vhost_scsi_subsystem 00:06:28.855 Calling clear_bdev_subsystem 00:06:28.855 12:27:54 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:28.855 12:27:54 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:28.855 12:27:54 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:28.855 12:27:54 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:28.855 12:27:54 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:28.855 12:27:54 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:29.425 12:27:55 json_config -- json_config/json_config.sh@352 -- # break 00:06:29.425 12:27:55 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:29.425 12:27:55 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:06:29.425 12:27:55 json_config -- json_config/common.sh@31 -- # local app=target 00:06:29.425 12:27:55 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:29.425 12:27:55 json_config -- json_config/common.sh@35 -- # [[ -n 155243 ]] 00:06:29.425 12:27:55 json_config -- json_config/common.sh@38 -- # kill -SIGINT 155243 00:06:29.425 12:27:55 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:29.425 12:27:55 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:29.425 12:27:55 json_config -- json_config/common.sh@41 -- # kill -0 155243 00:06:29.425 12:27:55 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:29.685 12:27:55 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:29.685 12:27:55 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:29.685 12:27:55 json_config -- json_config/common.sh@41 -- # kill -0 155243 00:06:29.685 12:27:55 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:29.685 12:27:55 json_config -- json_config/common.sh@43 -- # break 00:06:29.685 12:27:55 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:29.685 12:27:55 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:29.685 SPDK target shutdown done 00:06:29.685 12:27:55 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:06:29.685 INFO: relaunching applications... 
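The relaunch below takes the opposite path from the first boot: no --wait-for-rpc, the target reads the JSON snapshot directly at startup via --json. That is what lets the suite compare a configuration loaded at boot against the one that was built over RPC. The standalone equivalent of the command line in the trace:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json "$SPDK/spdk_tgt_config.json"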
00:06:29.685 12:27:55 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:29.685 12:27:55 json_config -- json_config/common.sh@9 -- # local app=target 00:06:29.685 12:27:55 json_config -- json_config/common.sh@10 -- # shift 00:06:29.685 12:27:55 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:29.685 12:27:55 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:29.685 12:27:55 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:29.685 12:27:55 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:29.685 12:27:55 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:29.685 12:27:55 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=156923 00:06:29.685 12:27:55 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:29.685 Waiting for target to run... 00:06:29.685 12:27:55 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:29.685 12:27:55 json_config -- json_config/common.sh@25 -- # waitforlisten 156923 /var/tmp/spdk_tgt.sock 00:06:29.685 12:27:55 json_config -- common/autotest_common.sh@831 -- # '[' -z 156923 ']' 00:06:29.685 12:27:55 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:29.685 12:27:55 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:29.685 12:27:55 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:29.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:29.685 12:27:55 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:29.685 12:27:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:29.945 [2024-12-16 12:27:55.779987] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:29.945 [2024-12-16 12:27:55.780039] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid156923 ] 00:06:30.205 [2024-12-16 12:27:56.068046] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.205 [2024-12-16 12:27:56.091063] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.494 [2024-12-16 12:27:59.098093] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:33.494 [2024-12-16 12:27:59.130313] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:33.494 12:27:59 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:33.494 12:27:59 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:33.494 12:27:59 json_config -- json_config/common.sh@26 -- # echo '' 00:06:33.494 00:06:33.494 12:27:59 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:33.494 12:27:59 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:33.494 INFO: Checking if target configuration is the same... 
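The comparison traced next cannot diff the two JSON files byte-for-byte, because save_config makes no ordering guarantees; json_diff.sh therefore canonicalizes both sides with config_filter.py before running diff. A reduced sketch of the idea (the /tmp output names are illustrative; the helper reads stdin, as its invocation below suggests):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
    cfg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json

    # Sort both configs into a canonical order, then compare.
    $rpc -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort > /tmp/live.json
    $filter -method sort < "$cfg" > /tmp/saved.json
    diff -u /tmp/saved.json /tmp/live.json && echo 'INFO: JSON config files are the same'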
00:06:33.494 12:27:59 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:33.494 12:27:59 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:33.494 12:27:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:33.494 + '[' 2 -ne 2 ']' 00:06:33.494 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:33.494 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:33.494 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:33.494 +++ basename /dev/fd/62 00:06:33.494 ++ mktemp /tmp/62.XXX 00:06:33.494 + tmp_file_1=/tmp/62.o8v 00:06:33.494 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:33.494 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:33.494 + tmp_file_2=/tmp/spdk_tgt_config.json.fbs 00:06:33.494 + ret=0 00:06:33.494 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:33.494 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:33.753 + diff -u /tmp/62.o8v /tmp/spdk_tgt_config.json.fbs 00:06:33.753 + echo 'INFO: JSON config files are the same' 00:06:33.753 INFO: JSON config files are the same 00:06:33.753 + rm /tmp/62.o8v /tmp/spdk_tgt_config.json.fbs 00:06:33.753 + exit 0 00:06:33.753 12:27:59 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:33.753 12:27:59 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:33.753 INFO: changing configuration and checking if this can be detected... 00:06:33.753 12:27:59 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:33.753 12:27:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:33.753 12:27:59 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:33.753 12:27:59 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:33.753 12:27:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:33.753 + '[' 2 -ne 2 ']' 00:06:33.753 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:33.753 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:06:33.753 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:33.753 +++ basename /dev/fd/62 00:06:33.753 ++ mktemp /tmp/62.XXX 00:06:33.753 + tmp_file_1=/tmp/62.jEN 00:06:33.753 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:33.753 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:33.753 + tmp_file_2=/tmp/spdk_tgt_config.json.kDS 00:06:33.753 + ret=0 00:06:33.753 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:34.321 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:34.321 + diff -u /tmp/62.jEN /tmp/spdk_tgt_config.json.kDS 00:06:34.321 + ret=1 00:06:34.321 + echo '=== Start of file: /tmp/62.jEN ===' 00:06:34.321 + cat /tmp/62.jEN 00:06:34.321 + echo '=== End of file: /tmp/62.jEN ===' 00:06:34.321 + echo '' 00:06:34.321 + echo '=== Start of file: /tmp/spdk_tgt_config.json.kDS ===' 00:06:34.321 + cat /tmp/spdk_tgt_config.json.kDS 00:06:34.321 + echo '=== End of file: /tmp/spdk_tgt_config.json.kDS ===' 00:06:34.321 + echo '' 00:06:34.321 + rm /tmp/62.jEN /tmp/spdk_tgt_config.json.kDS 00:06:34.321 + exit 1 00:06:34.321 12:28:00 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:06:34.321 INFO: configuration change detected. 00:06:34.321 12:28:00 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:34.321 12:28:00 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:34.321 12:28:00 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:34.321 12:28:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:34.321 12:28:00 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:34.321 12:28:00 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:34.321 12:28:00 json_config -- json_config/json_config.sh@324 -- # [[ -n 156923 ]] 00:06:34.321 12:28:00 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:34.321 12:28:00 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:34.321 12:28:00 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:34.321 12:28:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:34.321 12:28:00 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:34.321 12:28:00 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:34.321 12:28:00 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:34.321 12:28:00 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:34.321 12:28:00 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:34.321 12:28:00 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:34.321 12:28:00 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:34.321 12:28:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:34.321 12:28:00 json_config -- json_config/json_config.sh@330 -- # killprocess 156923 00:06:34.321 12:28:00 json_config -- common/autotest_common.sh@950 -- # '[' -z 156923 ']' 00:06:34.321 12:28:00 json_config -- common/autotest_common.sh@954 -- # kill -0 156923 00:06:34.321 12:28:00 json_config -- common/autotest_common.sh@955 -- # uname 00:06:34.321 12:28:00 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:34.321 12:28:00 
json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 156923 00:06:34.321 12:28:00 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:34.321 12:28:00 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:34.321 12:28:00 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 156923' 00:06:34.321 killing process with pid 156923 00:06:34.321 12:28:00 json_config -- common/autotest_common.sh@969 -- # kill 156923 00:06:34.321 12:28:00 json_config -- common/autotest_common.sh@974 -- # wait 156923 00:06:36.225 12:28:01 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:36.225 12:28:01 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:36.225 12:28:01 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:36.225 12:28:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:36.225 12:28:01 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:36.225 12:28:01 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:36.225 INFO: Success 00:06:36.225 00:06:36.225 real 0m14.990s 00:06:36.225 user 0m16.117s 00:06:36.225 sys 0m1.899s 00:06:36.225 12:28:01 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:36.225 12:28:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:36.225 ************************************ 00:06:36.225 END TEST json_config 00:06:36.225 ************************************ 00:06:36.225 12:28:01 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:36.225 12:28:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:36.225 12:28:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:36.225 12:28:01 -- common/autotest_common.sh@10 -- # set +x 00:06:36.225 ************************************ 00:06:36.225 START TEST json_config_extra_key 00:06:36.225 ************************************ 00:06:36.225 12:28:01 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:36.225 12:28:01 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:36.225 12:28:01 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:06:36.225 12:28:01 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:36.225 12:28:02 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:36.225 12:28:02 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:36.225 12:28:02 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:36.225 12:28:02 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:36.225 12:28:02 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:36.225 12:28:02 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:36.225 12:28:02 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:36.225 12:28:02 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:36.225 12:28:02 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:36.225 12:28:02 json_config_extra_key -- 
scripts/common.sh@340 -- # ver1_l=2 00:06:36.225 12:28:02 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:36.225 12:28:02 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:36.225 12:28:02 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:36.225 12:28:02 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:36.225 12:28:02 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:36.225 12:28:02 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:36.225 12:28:02 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:36.225 12:28:02 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:36.225 12:28:02 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:36.225 12:28:02 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:36.225 12:28:02 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:36.225 12:28:02 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:36.225 12:28:02 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:36.225 12:28:02 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:36.225 12:28:02 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:36.225 12:28:02 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:36.225 12:28:02 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:36.225 12:28:02 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:36.225 12:28:02 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:36.225 12:28:02 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:36.225 12:28:02 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:36.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.225 --rc genhtml_branch_coverage=1 00:06:36.225 --rc genhtml_function_coverage=1 00:06:36.225 --rc genhtml_legend=1 00:06:36.225 --rc geninfo_all_blocks=1 00:06:36.225 --rc geninfo_unexecuted_blocks=1 00:06:36.225 00:06:36.225 ' 00:06:36.225 12:28:02 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:36.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.225 --rc genhtml_branch_coverage=1 00:06:36.225 --rc genhtml_function_coverage=1 00:06:36.225 --rc genhtml_legend=1 00:06:36.225 --rc geninfo_all_blocks=1 00:06:36.225 --rc geninfo_unexecuted_blocks=1 00:06:36.225 00:06:36.225 ' 00:06:36.225 12:28:02 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:36.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.225 --rc genhtml_branch_coverage=1 00:06:36.225 --rc genhtml_function_coverage=1 00:06:36.225 --rc genhtml_legend=1 00:06:36.225 --rc geninfo_all_blocks=1 00:06:36.225 --rc geninfo_unexecuted_blocks=1 00:06:36.225 00:06:36.225 ' 00:06:36.225 12:28:02 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:36.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.225 --rc genhtml_branch_coverage=1 00:06:36.225 --rc genhtml_function_coverage=1 00:06:36.225 --rc genhtml_legend=1 00:06:36.225 --rc geninfo_all_blocks=1 00:06:36.225 --rc geninfo_unexecuted_blocks=1 00:06:36.225 00:06:36.225 ' 00:06:36.225 12:28:02 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:36.225 12:28:02 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:36.225 12:28:02 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:36.225 12:28:02 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:36.225 12:28:02 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:36.225 12:28:02 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:36.225 12:28:02 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:36.225 12:28:02 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:36.225 12:28:02 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:36.225 12:28:02 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:36.225 12:28:02 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:36.225 12:28:02 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:36.225 12:28:02 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:06:36.225 12:28:02 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:06:36.225 12:28:02 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:36.225 12:28:02 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:36.225 12:28:02 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:36.225 12:28:02 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:36.225 12:28:02 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:36.225 12:28:02 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:36.225 12:28:02 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:36.225 12:28:02 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:36.225 12:28:02 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:36.225 12:28:02 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.225 12:28:02 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.225 12:28:02 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.225 12:28:02 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:36.225 12:28:02 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.225 12:28:02 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:36.225 12:28:02 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:36.225 12:28:02 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:36.225 12:28:02 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:36.226 12:28:02 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:36.226 12:28:02 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:36.226 12:28:02 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:36.226 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:36.226 12:28:02 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:36.226 12:28:02 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:36.226 12:28:02 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:36.226 12:28:02 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:36.226 12:28:02 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:36.226 12:28:02 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:36.226 12:28:02 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:36.226 12:28:02 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:36.226 12:28:02 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:36.226 12:28:02 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:36.226 12:28:02 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:36.226 12:28:02 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:36.226 12:28:02 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:36.226 12:28:02 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:36.226 INFO: launching applications... 
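json_config_test_start_app, traced below, reduces to launching spdk_tgt against a fixed JSON config and waiting for its RPC socket to appear. A condensed sketch (the polling bound is illustrative; the real waitforlisten helper also verifies the RPC actually responds):

    spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
    cfg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json

    $spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json "$cfg" &
    app_pid=$!

    # Poll for the UNIX-domain socket rather than sleeping blind.
    for _ in $(seq 1 100); do
        [ -S /var/tmp/spdk_tgt.sock ] && break
        sleep 0.1
    done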
00:06:36.226 12:28:02 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:36.226 12:28:02 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:36.226 12:28:02 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:36.226 12:28:02 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:36.226 12:28:02 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:36.226 12:28:02 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:36.226 12:28:02 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:36.226 12:28:02 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:36.226 12:28:02 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=158014 00:06:36.226 12:28:02 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:36.226 Waiting for target to run... 00:06:36.226 12:28:02 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 158014 /var/tmp/spdk_tgt.sock 00:06:36.226 12:28:02 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 158014 ']' 00:06:36.226 12:28:02 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:36.226 12:28:02 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:36.226 12:28:02 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:36.226 12:28:02 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:36.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:36.226 12:28:02 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:36.226 12:28:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:36.226 [2024-12-16 12:28:02.110924] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:36.226 [2024-12-16 12:28:02.110976] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid158014 ] 00:06:36.485 [2024-12-16 12:28:02.387047] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.485 [2024-12-16 12:28:02.410060] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.053 12:28:02 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:37.053 12:28:02 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:37.053 12:28:02 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:37.053 00:06:37.053 12:28:02 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:37.053 INFO: shutting down applications... 
00:06:37.053 12:28:02 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:37.053 12:28:02 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:37.053 12:28:02 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:37.053 12:28:02 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 158014 ]] 00:06:37.053 12:28:02 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 158014 00:06:37.053 12:28:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:37.053 12:28:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:37.053 12:28:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 158014 00:06:37.053 12:28:02 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:37.622 12:28:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:37.622 12:28:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:37.622 12:28:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 158014 00:06:37.622 12:28:03 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:37.622 12:28:03 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:37.622 12:28:03 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:37.622 12:28:03 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:37.622 SPDK target shutdown done 00:06:37.622 12:28:03 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:37.622 Success 00:06:37.622 00:06:37.622 real 0m1.570s 00:06:37.622 user 0m1.369s 00:06:37.622 sys 0m0.392s 00:06:37.622 12:28:03 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.622 12:28:03 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:37.622 ************************************ 00:06:37.622 END TEST json_config_extra_key 00:06:37.622 ************************************ 00:06:37.622 12:28:03 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:37.622 12:28:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:37.622 12:28:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.622 12:28:03 -- common/autotest_common.sh@10 -- # set +x 00:06:37.622 ************************************ 00:06:37.622 START TEST alias_rpc 00:06:37.622 ************************************ 00:06:37.622 12:28:03 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:37.622 * Looking for test storage... 
00:06:37.622 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:37.622 12:28:03 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:37.622 12:28:03 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:37.622 12:28:03 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:37.622 12:28:03 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:37.622 12:28:03 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:37.622 12:28:03 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:37.622 12:28:03 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:37.622 12:28:03 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:37.622 12:28:03 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:37.622 12:28:03 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:37.622 12:28:03 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:37.622 12:28:03 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:37.622 12:28:03 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:37.622 12:28:03 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:37.622 12:28:03 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:37.622 12:28:03 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:37.622 12:28:03 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:37.622 12:28:03 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:37.622 12:28:03 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:37.622 12:28:03 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:37.622 12:28:03 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:37.622 12:28:03 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:37.622 12:28:03 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:37.622 12:28:03 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:37.622 12:28:03 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:37.622 12:28:03 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:37.882 12:28:03 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:37.882 12:28:03 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:37.882 12:28:03 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:37.882 12:28:03 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:37.882 12:28:03 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:37.882 12:28:03 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:37.882 12:28:03 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:37.882 12:28:03 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:37.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.882 --rc genhtml_branch_coverage=1 00:06:37.882 --rc genhtml_function_coverage=1 00:06:37.882 --rc genhtml_legend=1 00:06:37.882 --rc geninfo_all_blocks=1 00:06:37.882 --rc geninfo_unexecuted_blocks=1 00:06:37.882 00:06:37.882 ' 00:06:37.882 12:28:03 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:37.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.882 --rc genhtml_branch_coverage=1 00:06:37.882 --rc genhtml_function_coverage=1 00:06:37.882 --rc genhtml_legend=1 00:06:37.882 --rc geninfo_all_blocks=1 00:06:37.882 --rc geninfo_unexecuted_blocks=1 00:06:37.882 00:06:37.882 ' 00:06:37.882 12:28:03 
alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:37.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.882 --rc genhtml_branch_coverage=1 00:06:37.882 --rc genhtml_function_coverage=1 00:06:37.882 --rc genhtml_legend=1 00:06:37.882 --rc geninfo_all_blocks=1 00:06:37.882 --rc geninfo_unexecuted_blocks=1 00:06:37.882 00:06:37.882 ' 00:06:37.882 12:28:03 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:37.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.882 --rc genhtml_branch_coverage=1 00:06:37.882 --rc genhtml_function_coverage=1 00:06:37.882 --rc genhtml_legend=1 00:06:37.882 --rc geninfo_all_blocks=1 00:06:37.882 --rc geninfo_unexecuted_blocks=1 00:06:37.882 00:06:37.882 ' 00:06:37.882 12:28:03 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:37.882 12:28:03 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=158456 00:06:37.882 12:28:03 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 158456 00:06:37.882 12:28:03 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:37.882 12:28:03 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 158456 ']' 00:06:37.882 12:28:03 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.882 12:28:03 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:37.882 12:28:03 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.882 12:28:03 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:37.882 12:28:03 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.882 [2024-12-16 12:28:03.745300] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
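Every TEST block in this run repeats the same lcov version gate from scripts/common.sh: split both version strings on '.', '-' and ':', then compare numerically, component by component, padding the shorter side with zeros. A reduced sketch of that comparison (assumes purely numeric components, which is all the full helper's decimal() check admits):

    lt() {    # 0 (true) if $1 < $2, mirroring cmp_versions with op '<'
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < n; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1    # equal is not strictly less-than
    }

    lt 1.15 2 && echo 'lcov predates 2.x; use the 1.x LCOV_OPTS set'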
00:06:37.882 [2024-12-16 12:28:03.745355] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid158456 ] 00:06:37.882 [2024-12-16 12:28:03.811356] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.882 [2024-12-16 12:28:03.850881] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.141 12:28:04 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:38.142 12:28:04 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:38.142 12:28:04 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:38.401 12:28:04 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 158456 00:06:38.401 12:28:04 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 158456 ']' 00:06:38.401 12:28:04 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 158456 00:06:38.401 12:28:04 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:38.401 12:28:04 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:38.401 12:28:04 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 158456 00:06:38.401 12:28:04 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:38.401 12:28:04 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:38.401 12:28:04 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 158456' 00:06:38.401 killing process with pid 158456 00:06:38.401 12:28:04 alias_rpc -- common/autotest_common.sh@969 -- # kill 158456 00:06:38.401 12:28:04 alias_rpc -- common/autotest_common.sh@974 -- # wait 158456 00:06:38.660 00:06:38.660 real 0m1.110s 00:06:38.660 user 0m1.133s 00:06:38.660 sys 0m0.406s 00:06:38.660 12:28:04 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.660 12:28:04 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.660 ************************************ 00:06:38.660 END TEST alias_rpc 00:06:38.660 ************************************ 00:06:38.660 12:28:04 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:38.660 12:28:04 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:38.660 12:28:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:38.660 12:28:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.660 12:28:04 -- common/autotest_common.sh@10 -- # set +x 00:06:38.660 ************************************ 00:06:38.660 START TEST spdkcli_tcp 00:06:38.660 ************************************ 00:06:38.660 12:28:04 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:38.920 * Looking for test storage... 
00:06:38.920 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:38.920 12:28:04 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:38.920 12:28:04 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:06:38.920 12:28:04 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:38.920 12:28:04 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:38.920 12:28:04 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.920 12:28:04 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.920 12:28:04 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.920 12:28:04 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.920 12:28:04 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.920 12:28:04 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.920 12:28:04 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.920 12:28:04 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.920 12:28:04 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:38.920 12:28:04 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.920 12:28:04 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.920 12:28:04 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:38.920 12:28:04 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:38.920 12:28:04 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.920 12:28:04 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:38.920 12:28:04 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:38.920 12:28:04 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:38.920 12:28:04 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.920 12:28:04 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:38.920 12:28:04 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.920 12:28:04 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:38.920 12:28:04 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:38.920 12:28:04 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.920 12:28:04 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:38.920 12:28:04 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.920 12:28:04 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.920 12:28:04 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.920 12:28:04 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:38.920 12:28:04 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.920 12:28:04 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:38.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.920 --rc genhtml_branch_coverage=1 00:06:38.920 --rc genhtml_function_coverage=1 00:06:38.920 --rc genhtml_legend=1 00:06:38.920 --rc geninfo_all_blocks=1 00:06:38.920 --rc geninfo_unexecuted_blocks=1 00:06:38.920 00:06:38.920 ' 00:06:38.920 12:28:04 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:38.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.920 --rc genhtml_branch_coverage=1 00:06:38.920 --rc genhtml_function_coverage=1 00:06:38.920 --rc genhtml_legend=1 00:06:38.920 --rc geninfo_all_blocks=1 00:06:38.920 --rc 
geninfo_unexecuted_blocks=1 00:06:38.920 00:06:38.920 ' 00:06:38.920 12:28:04 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:38.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.920 --rc genhtml_branch_coverage=1 00:06:38.920 --rc genhtml_function_coverage=1 00:06:38.920 --rc genhtml_legend=1 00:06:38.920 --rc geninfo_all_blocks=1 00:06:38.920 --rc geninfo_unexecuted_blocks=1 00:06:38.920 00:06:38.920 ' 00:06:38.920 12:28:04 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:38.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.920 --rc genhtml_branch_coverage=1 00:06:38.920 --rc genhtml_function_coverage=1 00:06:38.920 --rc genhtml_legend=1 00:06:38.920 --rc geninfo_all_blocks=1 00:06:38.920 --rc geninfo_unexecuted_blocks=1 00:06:38.920 00:06:38.920 ' 00:06:38.920 12:28:04 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:38.920 12:28:04 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:38.920 12:28:04 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:38.920 12:28:04 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:38.920 12:28:04 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:38.920 12:28:04 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:38.920 12:28:04 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:38.920 12:28:04 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:38.920 12:28:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:38.920 12:28:04 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=158629 00:06:38.920 12:28:04 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:38.921 12:28:04 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 158629 00:06:38.921 12:28:04 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 158629 ']' 00:06:38.921 12:28:04 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.921 12:28:04 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:38.921 12:28:04 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.921 12:28:04 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:38.921 12:28:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:38.921 [2024-12-16 12:28:04.932508] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
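spdkcli_tcp's setup, traced below, differs from the earlier tests in one respect: rather than talking to the UNIX-domain socket directly, it bridges TCP port 9998 to that socket with socat so rpc.py can be exercised over IP. A minimal sketch with the addresses from this run (-r and -t are rpc.py's retry count and timeout, as invoked in the trace):

    # Bridge 127.0.0.1:9998 to the target's RPC socket.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

    kill "$socat_pid"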
00:06:38.921 [2024-12-16 12:28:04.932559] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid158629 ] 00:06:39.182 [2024-12-16 12:28:05.002587] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:39.182 [2024-12-16 12:28:05.042983] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.182 [2024-12-16 12:28:05.042991] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.182 12:28:05 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:39.182 12:28:05 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:39.182 12:28:05 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=158761 00:06:39.182 12:28:05 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:39.182 12:28:05 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:39.441 [ 00:06:39.441 "bdev_malloc_delete", 00:06:39.441 "bdev_malloc_create", 00:06:39.441 "bdev_null_resize", 00:06:39.441 "bdev_null_delete", 00:06:39.441 "bdev_null_create", 00:06:39.441 "bdev_nvme_cuse_unregister", 00:06:39.441 "bdev_nvme_cuse_register", 00:06:39.441 "bdev_opal_new_user", 00:06:39.441 "bdev_opal_set_lock_state", 00:06:39.441 "bdev_opal_delete", 00:06:39.442 "bdev_opal_get_info", 00:06:39.442 "bdev_opal_create", 00:06:39.442 "bdev_nvme_opal_revert", 00:06:39.442 "bdev_nvme_opal_init", 00:06:39.442 "bdev_nvme_send_cmd", 00:06:39.442 "bdev_nvme_set_keys", 00:06:39.442 "bdev_nvme_get_path_iostat", 00:06:39.442 "bdev_nvme_get_mdns_discovery_info", 00:06:39.442 "bdev_nvme_stop_mdns_discovery", 00:06:39.442 "bdev_nvme_start_mdns_discovery", 00:06:39.442 "bdev_nvme_set_multipath_policy", 00:06:39.442 "bdev_nvme_set_preferred_path", 00:06:39.442 "bdev_nvme_get_io_paths", 00:06:39.442 "bdev_nvme_remove_error_injection", 00:06:39.442 "bdev_nvme_add_error_injection", 00:06:39.442 "bdev_nvme_get_discovery_info", 00:06:39.442 "bdev_nvme_stop_discovery", 00:06:39.442 "bdev_nvme_start_discovery", 00:06:39.442 "bdev_nvme_get_controller_health_info", 00:06:39.442 "bdev_nvme_disable_controller", 00:06:39.442 "bdev_nvme_enable_controller", 00:06:39.442 "bdev_nvme_reset_controller", 00:06:39.442 "bdev_nvme_get_transport_statistics", 00:06:39.442 "bdev_nvme_apply_firmware", 00:06:39.442 "bdev_nvme_detach_controller", 00:06:39.442 "bdev_nvme_get_controllers", 00:06:39.442 "bdev_nvme_attach_controller", 00:06:39.442 "bdev_nvme_set_hotplug", 00:06:39.442 "bdev_nvme_set_options", 00:06:39.442 "bdev_passthru_delete", 00:06:39.442 "bdev_passthru_create", 00:06:39.442 "bdev_lvol_set_parent_bdev", 00:06:39.442 "bdev_lvol_set_parent", 00:06:39.442 "bdev_lvol_check_shallow_copy", 00:06:39.442 "bdev_lvol_start_shallow_copy", 00:06:39.442 "bdev_lvol_grow_lvstore", 00:06:39.442 "bdev_lvol_get_lvols", 00:06:39.442 "bdev_lvol_get_lvstores", 00:06:39.442 "bdev_lvol_delete", 00:06:39.442 "bdev_lvol_set_read_only", 00:06:39.442 "bdev_lvol_resize", 00:06:39.442 "bdev_lvol_decouple_parent", 00:06:39.442 "bdev_lvol_inflate", 00:06:39.442 "bdev_lvol_rename", 00:06:39.442 "bdev_lvol_clone_bdev", 00:06:39.442 "bdev_lvol_clone", 00:06:39.442 "bdev_lvol_snapshot", 00:06:39.442 "bdev_lvol_create", 00:06:39.442 "bdev_lvol_delete_lvstore", 00:06:39.442 "bdev_lvol_rename_lvstore", 
00:06:39.442 "bdev_lvol_create_lvstore", 00:06:39.442 "bdev_raid_set_options", 00:06:39.442 "bdev_raid_remove_base_bdev", 00:06:39.442 "bdev_raid_add_base_bdev", 00:06:39.442 "bdev_raid_delete", 00:06:39.442 "bdev_raid_create", 00:06:39.442 "bdev_raid_get_bdevs", 00:06:39.442 "bdev_error_inject_error", 00:06:39.442 "bdev_error_delete", 00:06:39.442 "bdev_error_create", 00:06:39.442 "bdev_split_delete", 00:06:39.442 "bdev_split_create", 00:06:39.442 "bdev_delay_delete", 00:06:39.442 "bdev_delay_create", 00:06:39.442 "bdev_delay_update_latency", 00:06:39.442 "bdev_zone_block_delete", 00:06:39.442 "bdev_zone_block_create", 00:06:39.442 "blobfs_create", 00:06:39.442 "blobfs_detect", 00:06:39.442 "blobfs_set_cache_size", 00:06:39.442 "bdev_aio_delete", 00:06:39.442 "bdev_aio_rescan", 00:06:39.442 "bdev_aio_create", 00:06:39.442 "bdev_ftl_set_property", 00:06:39.442 "bdev_ftl_get_properties", 00:06:39.442 "bdev_ftl_get_stats", 00:06:39.442 "bdev_ftl_unmap", 00:06:39.442 "bdev_ftl_unload", 00:06:39.442 "bdev_ftl_delete", 00:06:39.442 "bdev_ftl_load", 00:06:39.442 "bdev_ftl_create", 00:06:39.442 "bdev_virtio_attach_controller", 00:06:39.442 "bdev_virtio_scsi_get_devices", 00:06:39.442 "bdev_virtio_detach_controller", 00:06:39.442 "bdev_virtio_blk_set_hotplug", 00:06:39.442 "bdev_iscsi_delete", 00:06:39.442 "bdev_iscsi_create", 00:06:39.442 "bdev_iscsi_set_options", 00:06:39.442 "accel_error_inject_error", 00:06:39.442 "ioat_scan_accel_module", 00:06:39.442 "dsa_scan_accel_module", 00:06:39.442 "iaa_scan_accel_module", 00:06:39.442 "vfu_virtio_create_fs_endpoint", 00:06:39.442 "vfu_virtio_create_scsi_endpoint", 00:06:39.442 "vfu_virtio_scsi_remove_target", 00:06:39.442 "vfu_virtio_scsi_add_target", 00:06:39.442 "vfu_virtio_create_blk_endpoint", 00:06:39.442 "vfu_virtio_delete_endpoint", 00:06:39.442 "keyring_file_remove_key", 00:06:39.442 "keyring_file_add_key", 00:06:39.442 "keyring_linux_set_options", 00:06:39.442 "fsdev_aio_delete", 00:06:39.442 "fsdev_aio_create", 00:06:39.442 "iscsi_get_histogram", 00:06:39.442 "iscsi_enable_histogram", 00:06:39.442 "iscsi_set_options", 00:06:39.442 "iscsi_get_auth_groups", 00:06:39.442 "iscsi_auth_group_remove_secret", 00:06:39.442 "iscsi_auth_group_add_secret", 00:06:39.442 "iscsi_delete_auth_group", 00:06:39.442 "iscsi_create_auth_group", 00:06:39.442 "iscsi_set_discovery_auth", 00:06:39.442 "iscsi_get_options", 00:06:39.442 "iscsi_target_node_request_logout", 00:06:39.442 "iscsi_target_node_set_redirect", 00:06:39.442 "iscsi_target_node_set_auth", 00:06:39.442 "iscsi_target_node_add_lun", 00:06:39.442 "iscsi_get_stats", 00:06:39.442 "iscsi_get_connections", 00:06:39.442 "iscsi_portal_group_set_auth", 00:06:39.442 "iscsi_start_portal_group", 00:06:39.442 "iscsi_delete_portal_group", 00:06:39.442 "iscsi_create_portal_group", 00:06:39.442 "iscsi_get_portal_groups", 00:06:39.442 "iscsi_delete_target_node", 00:06:39.442 "iscsi_target_node_remove_pg_ig_maps", 00:06:39.442 "iscsi_target_node_add_pg_ig_maps", 00:06:39.442 "iscsi_create_target_node", 00:06:39.442 "iscsi_get_target_nodes", 00:06:39.442 "iscsi_delete_initiator_group", 00:06:39.442 "iscsi_initiator_group_remove_initiators", 00:06:39.442 "iscsi_initiator_group_add_initiators", 00:06:39.442 "iscsi_create_initiator_group", 00:06:39.442 "iscsi_get_initiator_groups", 00:06:39.442 "nvmf_set_crdt", 00:06:39.442 "nvmf_set_config", 00:06:39.442 "nvmf_set_max_subsystems", 00:06:39.442 "nvmf_stop_mdns_prr", 00:06:39.442 "nvmf_publish_mdns_prr", 00:06:39.442 "nvmf_subsystem_get_listeners", 00:06:39.442 
"nvmf_subsystem_get_qpairs", 00:06:39.442 "nvmf_subsystem_get_controllers", 00:06:39.442 "nvmf_get_stats", 00:06:39.442 "nvmf_get_transports", 00:06:39.442 "nvmf_create_transport", 00:06:39.442 "nvmf_get_targets", 00:06:39.442 "nvmf_delete_target", 00:06:39.442 "nvmf_create_target", 00:06:39.442 "nvmf_subsystem_allow_any_host", 00:06:39.442 "nvmf_subsystem_set_keys", 00:06:39.442 "nvmf_subsystem_remove_host", 00:06:39.442 "nvmf_subsystem_add_host", 00:06:39.442 "nvmf_ns_remove_host", 00:06:39.442 "nvmf_ns_add_host", 00:06:39.442 "nvmf_subsystem_remove_ns", 00:06:39.442 "nvmf_subsystem_set_ns_ana_group", 00:06:39.442 "nvmf_subsystem_add_ns", 00:06:39.442 "nvmf_subsystem_listener_set_ana_state", 00:06:39.442 "nvmf_discovery_get_referrals", 00:06:39.442 "nvmf_discovery_remove_referral", 00:06:39.442 "nvmf_discovery_add_referral", 00:06:39.442 "nvmf_subsystem_remove_listener", 00:06:39.442 "nvmf_subsystem_add_listener", 00:06:39.442 "nvmf_delete_subsystem", 00:06:39.442 "nvmf_create_subsystem", 00:06:39.442 "nvmf_get_subsystems", 00:06:39.442 "env_dpdk_get_mem_stats", 00:06:39.442 "nbd_get_disks", 00:06:39.442 "nbd_stop_disk", 00:06:39.442 "nbd_start_disk", 00:06:39.442 "ublk_recover_disk", 00:06:39.442 "ublk_get_disks", 00:06:39.442 "ublk_stop_disk", 00:06:39.442 "ublk_start_disk", 00:06:39.442 "ublk_destroy_target", 00:06:39.442 "ublk_create_target", 00:06:39.442 "virtio_blk_create_transport", 00:06:39.442 "virtio_blk_get_transports", 00:06:39.442 "vhost_controller_set_coalescing", 00:06:39.442 "vhost_get_controllers", 00:06:39.442 "vhost_delete_controller", 00:06:39.442 "vhost_create_blk_controller", 00:06:39.442 "vhost_scsi_controller_remove_target", 00:06:39.442 "vhost_scsi_controller_add_target", 00:06:39.442 "vhost_start_scsi_controller", 00:06:39.442 "vhost_create_scsi_controller", 00:06:39.442 "thread_set_cpumask", 00:06:39.442 "scheduler_set_options", 00:06:39.442 "framework_get_governor", 00:06:39.442 "framework_get_scheduler", 00:06:39.442 "framework_set_scheduler", 00:06:39.442 "framework_get_reactors", 00:06:39.442 "thread_get_io_channels", 00:06:39.442 "thread_get_pollers", 00:06:39.442 "thread_get_stats", 00:06:39.442 "framework_monitor_context_switch", 00:06:39.442 "spdk_kill_instance", 00:06:39.442 "log_enable_timestamps", 00:06:39.442 "log_get_flags", 00:06:39.442 "log_clear_flag", 00:06:39.442 "log_set_flag", 00:06:39.442 "log_get_level", 00:06:39.442 "log_set_level", 00:06:39.442 "log_get_print_level", 00:06:39.442 "log_set_print_level", 00:06:39.442 "framework_enable_cpumask_locks", 00:06:39.442 "framework_disable_cpumask_locks", 00:06:39.442 "framework_wait_init", 00:06:39.442 "framework_start_init", 00:06:39.442 "scsi_get_devices", 00:06:39.442 "bdev_get_histogram", 00:06:39.442 "bdev_enable_histogram", 00:06:39.442 "bdev_set_qos_limit", 00:06:39.442 "bdev_set_qd_sampling_period", 00:06:39.442 "bdev_get_bdevs", 00:06:39.442 "bdev_reset_iostat", 00:06:39.442 "bdev_get_iostat", 00:06:39.442 "bdev_examine", 00:06:39.442 "bdev_wait_for_examine", 00:06:39.442 "bdev_set_options", 00:06:39.442 "accel_get_stats", 00:06:39.442 "accel_set_options", 00:06:39.442 "accel_set_driver", 00:06:39.442 "accel_crypto_key_destroy", 00:06:39.442 "accel_crypto_keys_get", 00:06:39.442 "accel_crypto_key_create", 00:06:39.442 "accel_assign_opc", 00:06:39.442 "accel_get_module_info", 00:06:39.442 "accel_get_opc_assignments", 00:06:39.442 "vmd_rescan", 00:06:39.442 "vmd_remove_device", 00:06:39.442 "vmd_enable", 00:06:39.442 "sock_get_default_impl", 00:06:39.442 "sock_set_default_impl", 
00:06:39.442 "sock_impl_set_options", 00:06:39.442 "sock_impl_get_options", 00:06:39.442 "iobuf_get_stats", 00:06:39.442 "iobuf_set_options", 00:06:39.442 "keyring_get_keys", 00:06:39.442 "vfu_tgt_set_base_path", 00:06:39.442 "framework_get_pci_devices", 00:06:39.442 "framework_get_config", 00:06:39.442 "framework_get_subsystems", 00:06:39.442 "fsdev_set_opts", 00:06:39.442 "fsdev_get_opts", 00:06:39.443 "trace_get_info", 00:06:39.443 "trace_get_tpoint_group_mask", 00:06:39.443 "trace_disable_tpoint_group", 00:06:39.443 "trace_enable_tpoint_group", 00:06:39.443 "trace_clear_tpoint_mask", 00:06:39.443 "trace_set_tpoint_mask", 00:06:39.443 "notify_get_notifications", 00:06:39.443 "notify_get_types", 00:06:39.443 "spdk_get_version", 00:06:39.443 "rpc_get_methods" 00:06:39.443 ] 00:06:39.443 12:28:05 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:39.443 12:28:05 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:39.443 12:28:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:39.443 12:28:05 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:39.443 12:28:05 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 158629 00:06:39.443 12:28:05 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 158629 ']' 00:06:39.443 12:28:05 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 158629 00:06:39.443 12:28:05 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:39.443 12:28:05 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:39.443 12:28:05 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 158629 00:06:39.701 12:28:05 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:39.701 12:28:05 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:39.701 12:28:05 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 158629' 00:06:39.701 killing process with pid 158629 00:06:39.701 12:28:05 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 158629 00:06:39.701 12:28:05 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 158629 00:06:39.960 00:06:39.960 real 0m1.129s 00:06:39.960 user 0m1.857s 00:06:39.960 sys 0m0.456s 00:06:39.960 12:28:05 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.960 12:28:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:39.960 ************************************ 00:06:39.960 END TEST spdkcli_tcp 00:06:39.960 ************************************ 00:06:39.960 12:28:05 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:39.960 12:28:05 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:39.960 12:28:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:39.960 12:28:05 -- common/autotest_common.sh@10 -- # set +x 00:06:39.960 ************************************ 00:06:39.960 START TEST dpdk_mem_utility 00:06:39.960 ************************************ 00:06:39.960 12:28:05 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:39.960 * Looking for test storage... 
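killprocess, traced at the end of each TEST block above, guards against recycled PIDs before signalling: it checks the process still exists, inspects its command name (refusing to signal a sudo wrapper), then kills and reaps it. A condensed sketch (the trailing call uses this run's spdkcli_tcp PID, purely as an illustration):

    killprocess() {
        local pid=$1 name
        kill -0 "$pid" 2>/dev/null || return 1   # already gone
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" = sudo ] && return 1           # simplified: never signal sudo
        kill "$pid"
        wait "$pid"    # valid here because the test started the target itself
    }

    killprocess 158629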
00:06:39.960 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:39.960 12:28:05 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:39.960 12:28:05 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:06:39.960 12:28:05 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:40.220 12:28:06 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:40.220 12:28:06 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:40.220 12:28:06 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:40.220 12:28:06 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:40.220 12:28:06 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.220 12:28:06 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:40.220 12:28:06 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:40.220 12:28:06 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:40.220 12:28:06 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:40.220 12:28:06 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:40.220 12:28:06 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:40.220 12:28:06 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:40.220 12:28:06 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:40.220 12:28:06 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:40.220 12:28:06 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:40.220 12:28:06 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:40.220 12:28:06 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:40.220 12:28:06 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:40.220 12:28:06 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.220 12:28:06 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:40.220 12:28:06 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:40.220 12:28:06 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:40.220 12:28:06 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:40.220 12:28:06 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.220 12:28:06 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:40.220 12:28:06 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:40.220 12:28:06 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:40.220 12:28:06 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:40.220 12:28:06 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:40.220 12:28:06 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.220 12:28:06 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:40.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.220 --rc genhtml_branch_coverage=1 00:06:40.220 --rc genhtml_function_coverage=1 00:06:40.220 --rc genhtml_legend=1 00:06:40.220 --rc geninfo_all_blocks=1 00:06:40.220 --rc geninfo_unexecuted_blocks=1 00:06:40.220 00:06:40.220 ' 00:06:40.220 12:28:06 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:40.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.220 --rc 
genhtml_branch_coverage=1 00:06:40.220 --rc genhtml_function_coverage=1 00:06:40.220 --rc genhtml_legend=1 00:06:40.220 --rc geninfo_all_blocks=1 00:06:40.220 --rc geninfo_unexecuted_blocks=1 00:06:40.220 00:06:40.220 ' 00:06:40.220 12:28:06 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:40.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.220 --rc genhtml_branch_coverage=1 00:06:40.220 --rc genhtml_function_coverage=1 00:06:40.220 --rc genhtml_legend=1 00:06:40.220 --rc geninfo_all_blocks=1 00:06:40.220 --rc geninfo_unexecuted_blocks=1 00:06:40.220 00:06:40.220 ' 00:06:40.220 12:28:06 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:40.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.220 --rc genhtml_branch_coverage=1 00:06:40.220 --rc genhtml_function_coverage=1 00:06:40.220 --rc genhtml_legend=1 00:06:40.220 --rc geninfo_all_blocks=1 00:06:40.220 --rc geninfo_unexecuted_blocks=1 00:06:40.220 00:06:40.220 ' 00:06:40.220 12:28:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:40.220 12:28:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=158848 00:06:40.220 12:28:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 158848 00:06:40.220 12:28:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:40.220 12:28:06 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 158848 ']' 00:06:40.220 12:28:06 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.220 12:28:06 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:40.220 12:28:06 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.220 12:28:06 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:40.220 12:28:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:40.220 [2024-12-16 12:28:06.117515] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:40.220 [2024-12-16 12:28:06.117564] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid158848 ] 00:06:40.220 [2024-12-16 12:28:06.187579] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.220 [2024-12-16 12:28:06.226125] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.480 12:28:06 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:40.480 12:28:06 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:40.480 12:28:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:40.480 12:28:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:40.480 12:28:06 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.480 12:28:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:40.480 { 00:06:40.480 "filename": "/tmp/spdk_mem_dump.txt" 00:06:40.480 } 00:06:40.480 12:28:06 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.480 12:28:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:40.480 DPDK memory size 860.000000 MiB in 1 heap(s) 00:06:40.480 1 heaps totaling size 860.000000 MiB 00:06:40.480 size: 860.000000 MiB heap id: 0 00:06:40.480 end heaps---------- 00:06:40.480 9 mempools totaling size 642.649841 MiB 00:06:40.480 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:40.480 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:40.480 size: 92.545471 MiB name: bdev_io_158848 00:06:40.480 size: 51.011292 MiB name: evtpool_158848 00:06:40.480 size: 50.003479 MiB name: msgpool_158848 00:06:40.480 size: 36.509338 MiB name: fsdev_io_158848 00:06:40.480 size: 21.763794 MiB name: PDU_Pool 00:06:40.480 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:40.480 size: 0.026123 MiB name: Session_Pool 00:06:40.480 end mempools------- 00:06:40.480 6 memzones totaling size 4.142822 MiB 00:06:40.480 size: 1.000366 MiB name: RG_ring_0_158848 00:06:40.480 size: 1.000366 MiB name: RG_ring_1_158848 00:06:40.480 size: 1.000366 MiB name: RG_ring_4_158848 00:06:40.480 size: 1.000366 MiB name: RG_ring_5_158848 00:06:40.480 size: 0.125366 MiB name: RG_ring_2_158848 00:06:40.480 size: 0.015991 MiB name: RG_ring_3_158848 00:06:40.480 end memzones------- 00:06:40.480 12:28:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:40.480 heap id: 0 total size: 860.000000 MiB number of busy elements: 44 number of free elements: 16 00:06:40.480 list of free elements. 
size: 13.984680 MiB 00:06:40.480 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:40.480 element at address: 0x200000800000 with size: 1.996948 MiB 00:06:40.480 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:06:40.480 element at address: 0x20001be00000 with size: 0.999878 MiB 00:06:40.480 element at address: 0x200034a00000 with size: 0.994446 MiB 00:06:40.480 element at address: 0x200009600000 with size: 0.959839 MiB 00:06:40.480 element at address: 0x200015e00000 with size: 0.954285 MiB 00:06:40.480 element at address: 0x20001c000000 with size: 0.936584 MiB 00:06:40.480 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:40.480 element at address: 0x20001d800000 with size: 0.582886 MiB 00:06:40.480 element at address: 0x200003e00000 with size: 0.495605 MiB 00:06:40.480 element at address: 0x20000d800000 with size: 0.490723 MiB 00:06:40.480 element at address: 0x20001c200000 with size: 0.485657 MiB 00:06:40.480 element at address: 0x200007000000 with size: 0.481934 MiB 00:06:40.480 element at address: 0x20002ac00000 with size: 0.410034 MiB 00:06:40.480 element at address: 0x200003a00000 with size: 0.354858 MiB 00:06:40.480 list of standard malloc elements. size: 199.218628 MiB 00:06:40.480 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:06:40.480 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:06:40.480 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:06:40.480 element at address: 0x20001befff80 with size: 1.000122 MiB 00:06:40.480 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:06:40.480 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:40.480 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:06:40.480 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:40.480 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:06:40.480 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:40.480 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:40.480 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:40.480 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:40.480 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:40.480 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:40.480 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:40.480 element at address: 0x200003a5ad80 with size: 0.000183 MiB 00:06:40.480 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:06:40.480 element at address: 0x200003a5f240 with size: 0.000183 MiB 00:06:40.480 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:06:40.480 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:06:40.480 element at address: 0x200003aff880 with size: 0.000183 MiB 00:06:40.480 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:40.480 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:40.480 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:06:40.480 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:40.480 element at address: 0x20000707b600 with size: 0.000183 MiB 00:06:40.480 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:06:40.480 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:06:40.480 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:06:40.480 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:06:40.480 element at address: 0x20000d87dac0 with size: 0.000183 MiB 
00:06:40.480 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:06:40.480 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:06:40.480 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:06:40.480 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:06:40.480 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:06:40.480 element at address: 0x20001d895380 with size: 0.000183 MiB 00:06:40.480 element at address: 0x20001d895440 with size: 0.000183 MiB 00:06:40.480 element at address: 0x20002ac68f80 with size: 0.000183 MiB 00:06:40.480 element at address: 0x20002ac69040 with size: 0.000183 MiB 00:06:40.480 element at address: 0x20002ac6fc40 with size: 0.000183 MiB 00:06:40.480 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:06:40.480 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:06:40.480 list of memzone associated elements. size: 646.796692 MiB 00:06:40.480 element at address: 0x20001d895500 with size: 211.416748 MiB 00:06:40.480 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:40.480 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:06:40.480 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:40.480 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:06:40.480 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_158848_0 00:06:40.480 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:40.480 associated memzone info: size: 48.002930 MiB name: MP_evtpool_158848_0 00:06:40.480 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:40.480 associated memzone info: size: 48.002930 MiB name: MP_msgpool_158848_0 00:06:40.480 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:06:40.480 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_158848_0 00:06:40.480 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:06:40.480 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:40.480 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:06:40.480 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:40.481 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:40.481 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_158848 00:06:40.481 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:40.481 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_158848 00:06:40.481 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:40.481 associated memzone info: size: 1.007996 MiB name: MP_evtpool_158848 00:06:40.481 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:06:40.481 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:40.481 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:06:40.481 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:40.481 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:06:40.481 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:40.481 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:06:40.481 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:40.481 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:40.481 associated memzone info: size: 1.000366 MiB name: RG_ring_0_158848 00:06:40.481 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:40.481 associated memzone info: size: 
1.000366 MiB name: RG_ring_1_158848 00:06:40.481 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:06:40.481 associated memzone info: size: 1.000366 MiB name: RG_ring_4_158848 00:06:40.481 element at address: 0x200034afe940 with size: 1.000488 MiB 00:06:40.481 associated memzone info: size: 1.000366 MiB name: RG_ring_5_158848 00:06:40.481 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:06:40.481 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_158848 00:06:40.481 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:06:40.481 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_158848 00:06:40.481 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:06:40.481 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:40.481 element at address: 0x20000707b780 with size: 0.500488 MiB 00:06:40.481 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:40.481 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:06:40.481 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:40.481 element at address: 0x200003a5f300 with size: 0.125488 MiB 00:06:40.481 associated memzone info: size: 0.125366 MiB name: RG_ring_2_158848 00:06:40.481 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:06:40.481 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:40.481 element at address: 0x20002ac69100 with size: 0.023743 MiB 00:06:40.481 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:40.481 element at address: 0x200003a5b040 with size: 0.016113 MiB 00:06:40.481 associated memzone info: size: 0.015991 MiB name: RG_ring_3_158848 00:06:40.481 element at address: 0x20002ac6f240 with size: 0.002441 MiB 00:06:40.481 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:40.481 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:40.481 associated memzone info: size: 0.000183 MiB name: MP_msgpool_158848 00:06:40.481 element at address: 0x200003aff940 with size: 0.000305 MiB 00:06:40.481 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_158848 00:06:40.481 element at address: 0x200003a5ae40 with size: 0.000305 MiB 00:06:40.481 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_158848 00:06:40.481 element at address: 0x20002ac6fd00 with size: 0.000305 MiB 00:06:40.481 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:40.481 12:28:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:40.481 12:28:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 158848 00:06:40.481 12:28:06 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 158848 ']' 00:06:40.481 12:28:06 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 158848 00:06:40.481 12:28:06 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:40.481 12:28:06 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:40.481 12:28:06 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 158848 00:06:40.740 12:28:06 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:40.740 12:28:06 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:40.740 12:28:06 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 158848' 00:06:40.740 killing 
process with pid 158848 00:06:40.740 12:28:06 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 158848 00:06:40.740 12:28:06 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 158848 00:06:40.999 00:06:40.999 real 0m0.992s 00:06:40.999 user 0m0.933s 00:06:40.999 sys 0m0.400s 00:06:40.999 12:28:06 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:40.999 12:28:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:40.999 ************************************ 00:06:40.999 END TEST dpdk_mem_utility 00:06:40.999 ************************************ 00:06:41.000 12:28:06 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:41.000 12:28:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:41.000 12:28:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.000 12:28:06 -- common/autotest_common.sh@10 -- # set +x 00:06:41.000 ************************************ 00:06:41.000 START TEST event 00:06:41.000 ************************************ 00:06:41.000 12:28:06 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:41.000 * Looking for test storage... 00:06:41.000 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:41.000 12:28:07 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:41.000 12:28:07 event -- common/autotest_common.sh@1681 -- # lcov --version 00:06:41.000 12:28:07 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:41.259 12:28:07 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:41.259 12:28:07 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:41.259 12:28:07 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:41.259 12:28:07 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:41.259 12:28:07 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:41.259 12:28:07 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:41.259 12:28:07 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:41.259 12:28:07 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:41.259 12:28:07 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:41.259 12:28:07 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:41.259 12:28:07 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:41.259 12:28:07 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:41.259 12:28:07 event -- scripts/common.sh@344 -- # case "$op" in 00:06:41.259 12:28:07 event -- scripts/common.sh@345 -- # : 1 00:06:41.259 12:28:07 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:41.259 12:28:07 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:41.259 12:28:07 event -- scripts/common.sh@365 -- # decimal 1 00:06:41.259 12:28:07 event -- scripts/common.sh@353 -- # local d=1 00:06:41.259 12:28:07 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:41.259 12:28:07 event -- scripts/common.sh@355 -- # echo 1 00:06:41.259 12:28:07 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:41.259 12:28:07 event -- scripts/common.sh@366 -- # decimal 2 00:06:41.259 12:28:07 event -- scripts/common.sh@353 -- # local d=2 00:06:41.259 12:28:07 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:41.259 12:28:07 event -- scripts/common.sh@355 -- # echo 2 00:06:41.259 12:28:07 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:41.259 12:28:07 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:41.259 12:28:07 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:41.259 12:28:07 event -- scripts/common.sh@368 -- # return 0 00:06:41.259 12:28:07 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:41.259 12:28:07 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:41.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.259 --rc genhtml_branch_coverage=1 00:06:41.259 --rc genhtml_function_coverage=1 00:06:41.259 --rc genhtml_legend=1 00:06:41.259 --rc geninfo_all_blocks=1 00:06:41.259 --rc geninfo_unexecuted_blocks=1 00:06:41.259 00:06:41.259 ' 00:06:41.259 12:28:07 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:41.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.259 --rc genhtml_branch_coverage=1 00:06:41.259 --rc genhtml_function_coverage=1 00:06:41.259 --rc genhtml_legend=1 00:06:41.259 --rc geninfo_all_blocks=1 00:06:41.259 --rc geninfo_unexecuted_blocks=1 00:06:41.259 00:06:41.259 ' 00:06:41.259 12:28:07 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:41.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.259 --rc genhtml_branch_coverage=1 00:06:41.259 --rc genhtml_function_coverage=1 00:06:41.259 --rc genhtml_legend=1 00:06:41.259 --rc geninfo_all_blocks=1 00:06:41.259 --rc geninfo_unexecuted_blocks=1 00:06:41.259 00:06:41.259 ' 00:06:41.259 12:28:07 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:41.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.259 --rc genhtml_branch_coverage=1 00:06:41.259 --rc genhtml_function_coverage=1 00:06:41.259 --rc genhtml_legend=1 00:06:41.259 --rc geninfo_all_blocks=1 00:06:41.259 --rc geninfo_unexecuted_blocks=1 00:06:41.259 00:06:41.259 ' 00:06:41.259 12:28:07 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:41.259 12:28:07 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:41.259 12:28:07 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:41.259 12:28:07 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:41.259 12:28:07 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.259 12:28:07 event -- common/autotest_common.sh@10 -- # set +x 00:06:41.259 ************************************ 00:06:41.259 START TEST event_perf 00:06:41.259 ************************************ 00:06:41.259 12:28:07 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:06:41.259 Running I/O for 1 seconds...[2024-12-16 12:28:07.185396] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:41.259 [2024-12-16 12:28:07.185465] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid159137 ] 00:06:41.259 [2024-12-16 12:28:07.258215] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:41.259 [2024-12-16 12:28:07.299489] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.259 [2024-12-16 12:28:07.299594] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:41.260 [2024-12-16 12:28:07.299675] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.260 [2024-12-16 12:28:07.299676] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:42.637 Running I/O for 1 seconds... 00:06:42.637 lcore 0: 210637 00:06:42.637 lcore 1: 210636 00:06:42.637 lcore 2: 210637 00:06:42.637 lcore 3: 210636 00:06:42.637 done. 00:06:42.637 00:06:42.637 real 0m1.199s 00:06:42.638 user 0m4.099s 00:06:42.638 sys 0m0.096s 00:06:42.638 12:28:08 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:42.638 12:28:08 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:42.638 ************************************ 00:06:42.638 END TEST event_perf 00:06:42.638 ************************************ 00:06:42.638 12:28:08 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:42.638 12:28:08 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:42.638 12:28:08 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.638 12:28:08 event -- common/autotest_common.sh@10 -- # set +x 00:06:42.638 ************************************ 00:06:42.638 START TEST event_reactor 00:06:42.638 ************************************ 00:06:42.638 12:28:08 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:42.638 [2024-12-16 12:28:08.456849] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:42.638 [2024-12-16 12:28:08.456918] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid159384 ] 00:06:42.638 [2024-12-16 12:28:08.528594] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.638 [2024-12-16 12:28:08.569196] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.575 test_start 00:06:43.575 oneshot 00:06:43.575 tick 100 00:06:43.575 tick 100 00:06:43.575 tick 250 00:06:43.575 tick 100 00:06:43.575 tick 100 00:06:43.575 tick 100 00:06:43.575 tick 250 00:06:43.575 tick 500 00:06:43.575 tick 100 00:06:43.575 tick 100 00:06:43.575 tick 250 00:06:43.575 tick 100 00:06:43.575 tick 100 00:06:43.575 test_end 00:06:43.575 00:06:43.575 real 0m1.192s 00:06:43.575 user 0m1.092s 00:06:43.575 sys 0m0.095s 00:06:43.575 12:28:09 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:43.575 12:28:09 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:43.575 ************************************ 00:06:43.575 END TEST event_reactor 00:06:43.575 ************************************ 00:06:43.834 12:28:09 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:43.834 12:28:09 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:43.834 12:28:09 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.834 12:28:09 event -- common/autotest_common.sh@10 -- # set +x 00:06:43.834 ************************************ 00:06:43.834 START TEST event_reactor_perf 00:06:43.834 ************************************ 00:06:43.834 12:28:09 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:43.834 [2024-12-16 12:28:09.720546] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:43.834 [2024-12-16 12:28:09.720616] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid159627 ] 00:06:43.834 [2024-12-16 12:28:09.792969] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.834 [2024-12-16 12:28:09.834202] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.212 test_start 00:06:45.212 test_end 00:06:45.212 Performance: 502044 events per second 00:06:45.212 00:06:45.212 real 0m1.198s 00:06:45.212 user 0m1.103s 00:06:45.212 sys 0m0.090s 00:06:45.212 12:28:10 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.212 12:28:10 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:45.212 ************************************ 00:06:45.212 END TEST event_reactor_perf 00:06:45.212 ************************************ 00:06:45.212 12:28:10 event -- event/event.sh@49 -- # uname -s 00:06:45.212 12:28:10 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:45.212 12:28:10 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:45.212 12:28:10 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:45.212 12:28:10 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.212 12:28:10 event -- common/autotest_common.sh@10 -- # set +x 00:06:45.212 ************************************ 00:06:45.212 START TEST event_scheduler 00:06:45.212 ************************************ 00:06:45.212 12:28:10 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:45.212 * Looking for test storage... 
00:06:45.212 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:45.212 12:28:11 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:45.212 12:28:11 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:06:45.212 12:28:11 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:45.212 12:28:11 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:45.212 12:28:11 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:45.212 12:28:11 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:45.212 12:28:11 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:45.212 12:28:11 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.212 12:28:11 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:45.212 12:28:11 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:45.212 12:28:11 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:45.212 12:28:11 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:45.212 12:28:11 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:45.212 12:28:11 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:45.212 12:28:11 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:45.212 12:28:11 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:45.212 12:28:11 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:45.212 12:28:11 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:45.212 12:28:11 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:45.212 12:28:11 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:45.212 12:28:11 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:45.212 12:28:11 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.212 12:28:11 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:45.212 12:28:11 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:45.212 12:28:11 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:45.212 12:28:11 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:45.212 12:28:11 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.212 12:28:11 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:45.212 12:28:11 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:45.212 12:28:11 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:45.212 12:28:11 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:45.212 12:28:11 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:45.212 12:28:11 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.212 12:28:11 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:45.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.212 --rc genhtml_branch_coverage=1 00:06:45.212 --rc genhtml_function_coverage=1 00:06:45.212 --rc genhtml_legend=1 00:06:45.212 --rc geninfo_all_blocks=1 00:06:45.212 --rc geninfo_unexecuted_blocks=1 00:06:45.212 00:06:45.212 ' 00:06:45.212 12:28:11 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:45.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.212 --rc genhtml_branch_coverage=1 00:06:45.212 --rc genhtml_function_coverage=1 00:06:45.212 --rc genhtml_legend=1 00:06:45.212 --rc geninfo_all_blocks=1 00:06:45.212 --rc geninfo_unexecuted_blocks=1 00:06:45.212 00:06:45.212 ' 00:06:45.212 12:28:11 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:45.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.212 --rc genhtml_branch_coverage=1 00:06:45.212 --rc genhtml_function_coverage=1 00:06:45.212 --rc genhtml_legend=1 00:06:45.212 --rc geninfo_all_blocks=1 00:06:45.212 --rc geninfo_unexecuted_blocks=1 00:06:45.212 00:06:45.212 ' 00:06:45.212 12:28:11 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:45.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.212 --rc genhtml_branch_coverage=1 00:06:45.212 --rc genhtml_function_coverage=1 00:06:45.213 --rc genhtml_legend=1 00:06:45.213 --rc geninfo_all_blocks=1 00:06:45.213 --rc geninfo_unexecuted_blocks=1 00:06:45.213 00:06:45.213 ' 00:06:45.213 12:28:11 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:45.213 12:28:11 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:45.213 12:28:11 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=159915 00:06:45.213 12:28:11 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:45.213 12:28:11 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 159915 
00:06:45.213 12:28:11 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 159915 ']' 00:06:45.213 12:28:11 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.213 12:28:11 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:45.213 12:28:11 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.213 12:28:11 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:45.213 12:28:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:45.213 [2024-12-16 12:28:11.182849] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:45.213 [2024-12-16 12:28:11.182891] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid159915 ] 00:06:45.213 [2024-12-16 12:28:11.248430] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:45.473 [2024-12-16 12:28:11.291309] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.473 [2024-12-16 12:28:11.291416] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.473 [2024-12-16 12:28:11.291522] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.473 [2024-12-16 12:28:11.291523] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:45.473 12:28:11 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:45.473 12:28:11 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:45.473 12:28:11 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:45.473 12:28:11 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.473 12:28:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:45.473 [2024-12-16 12:28:11.352154] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:45.473 [2024-12-16 12:28:11.352169] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:45.473 [2024-12-16 12:28:11.352178] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:45.473 [2024-12-16 12:28:11.352183] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:45.473 [2024-12-16 12:28:11.352188] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:45.473 12:28:11 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.473 12:28:11 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:45.473 12:28:11 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.473 12:28:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:45.473 [2024-12-16 12:28:11.420132] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
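[editor's note] The trace above shows scheduler.sh switching the reactor framework to the "dynamic" scheduler over the RPC socket before framework_start_init. A minimal sketch of replaying that sequence by hand, assuming the target is listening on the default /var/tmp/spdk.sock and commands are run from the SPDK repo root (all three RPC names appear in the rpc_get_methods output earlier in this log):

    # switch the scheduler before the framework finishes init, as the test does
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_set_scheduler dynamic
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init
    # confirm the switch took effect
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_get_scheduler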
00:06:45.473 12:28:11 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.473 12:28:11 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:45.473 12:28:11 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:45.473 12:28:11 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.473 12:28:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:45.473 ************************************ 00:06:45.473 START TEST scheduler_create_thread 00:06:45.473 ************************************ 00:06:45.473 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:45.473 12:28:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:45.473 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.473 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.473 2 00:06:45.473 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.473 12:28:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:45.473 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.473 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.473 3 00:06:45.473 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.473 12:28:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:45.473 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.473 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.473 4 00:06:45.473 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.473 12:28:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:45.473 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.473 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.473 5 00:06:45.473 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.473 12:28:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:45.473 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.473 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.473 6 00:06:45.473 12:28:11 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.473 12:28:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:45.473 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.473 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.473 7 00:06:45.473 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.473 12:28:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:45.473 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.473 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.473 8 00:06:45.473 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.473 12:28:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:45.473 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.473 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.732 9 00:06:45.732 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.732 12:28:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:45.732 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.732 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.732 10 00:06:45.732 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.732 12:28:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:45.732 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.732 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.668 12:28:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.668 12:28:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:46.668 12:28:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:46.668 12:28:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.668 12:28:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.236 12:28:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.236 12:28:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:47.236 12:28:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.236 12:28:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.614 12:28:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.614 12:28:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:48.614 12:28:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:48.614 12:28:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.614 12:28:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:49.180 12:28:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.180 00:06:49.180 real 0m3.560s 00:06:49.180 user 0m0.027s 00:06:49.180 sys 0m0.004s 00:06:49.180 12:28:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:49.180 12:28:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:49.180 ************************************ 00:06:49.180 END TEST scheduler_create_thread 00:06:49.180 ************************************ 00:06:49.180 12:28:15 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:49.180 12:28:15 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 159915 00:06:49.180 12:28:15 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 159915 ']' 00:06:49.180 12:28:15 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 159915 00:06:49.180 12:28:15 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:49.180 12:28:15 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:49.180 12:28:15 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 159915 00:06:49.180 12:28:15 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:49.180 12:28:15 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:49.180 12:28:15 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 159915' 00:06:49.180 killing process with pid 159915 00:06:49.180 12:28:15 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 159915 00:06:49.180 12:28:15 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 159915 00:06:49.438 [2024-12-16 12:28:15.396702] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
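[editor's note] The thread lifecycle exercised above maps onto three plugin RPCs. A hand-run sketch, assuming the scheduler test app is still listening and that test/event/scheduler is on PYTHONPATH so rpc.py can import scheduler_plugin (that is how the harness appears to load it); the numeric thread ids are whatever scheduler_thread_create returned, 11 and 12 in this run:

    # create an active thread pinned to core 0 (mask 0x1) at 100% load
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    # drop thread 11 to 50% active time
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
    # delete thread 12, as the test does before shutting down
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12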
00:06:49.697 00:06:49.697 real 0m4.666s 00:06:49.697 user 0m8.439s 00:06:49.697 sys 0m0.370s 00:06:49.697 12:28:15 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:49.697 12:28:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:49.697 ************************************ 00:06:49.697 END TEST event_scheduler 00:06:49.697 ************************************ 00:06:49.697 12:28:15 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:49.697 12:28:15 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:49.697 12:28:15 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:49.697 12:28:15 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:49.697 12:28:15 event -- common/autotest_common.sh@10 -- # set +x 00:06:49.697 ************************************ 00:06:49.697 START TEST app_repeat 00:06:49.697 ************************************ 00:06:49.697 12:28:15 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:49.697 12:28:15 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.697 12:28:15 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.697 12:28:15 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:49.697 12:28:15 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:49.697 12:28:15 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:49.697 12:28:15 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:49.697 12:28:15 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:49.697 12:28:15 event.app_repeat -- event/event.sh@19 -- # repeat_pid=160661 00:06:49.697 12:28:15 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:49.697 12:28:15 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:49.697 12:28:15 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 160661' 00:06:49.697 Process app_repeat pid: 160661 00:06:49.697 12:28:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:49.697 12:28:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:49.697 spdk_app_start Round 0 00:06:49.697 12:28:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 160661 /var/tmp/spdk-nbd.sock 00:06:49.697 12:28:15 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 160661 ']' 00:06:49.697 12:28:15 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:49.697 12:28:15 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:49.697 12:28:15 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:49.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:49.697 12:28:15 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:49.697 12:28:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:49.697 [2024-12-16 12:28:15.754254] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:49.697 [2024-12-16 12:28:15.754308] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid160661 ] 00:06:49.957 [2024-12-16 12:28:15.825550] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:49.957 [2024-12-16 12:28:15.864879] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.957 [2024-12-16 12:28:15.864881] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.957 12:28:15 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:49.957 12:28:15 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:49.957 12:28:15 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:50.215 Malloc0 00:06:50.215 12:28:16 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:50.474 Malloc1 00:06:50.474 12:28:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:50.474 12:28:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.474 12:28:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:50.474 12:28:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:50.474 12:28:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.474 12:28:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:50.474 12:28:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:50.474 12:28:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.475 12:28:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:50.475 12:28:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:50.475 12:28:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.475 12:28:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:50.475 12:28:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:50.475 12:28:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:50.475 12:28:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:50.475 12:28:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:50.734 /dev/nbd0 00:06:50.734 12:28:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:50.734 12:28:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:50.734 12:28:16 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:50.734 12:28:16 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:50.734 12:28:16 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:50.734 12:28:16 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:50.734 12:28:16 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 
/proc/partitions 00:06:50.734 12:28:16 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:50.734 12:28:16 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:50.734 12:28:16 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:50.734 12:28:16 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:50.734 1+0 records in 00:06:50.734 1+0 records out 00:06:50.734 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000188188 s, 21.8 MB/s 00:06:50.734 12:28:16 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:50.734 12:28:16 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:50.734 12:28:16 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:50.734 12:28:16 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:50.734 12:28:16 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:50.734 12:28:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:50.734 12:28:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:50.734 12:28:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:50.992 /dev/nbd1 00:06:50.992 12:28:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:50.992 12:28:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:50.992 12:28:16 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:50.992 12:28:16 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:50.992 12:28:16 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:50.992 12:28:16 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:50.992 12:28:16 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:50.992 12:28:16 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:50.992 12:28:16 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:50.992 12:28:16 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:50.992 12:28:16 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:50.992 1+0 records in 00:06:50.992 1+0 records out 00:06:50.992 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000200134 s, 20.5 MB/s 00:06:50.992 12:28:16 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:50.992 12:28:16 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:50.992 12:28:16 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:50.992 12:28:16 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:50.992 12:28:16 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:50.992 12:28:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:50.992 12:28:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:50.992 
12:28:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:50.992 12:28:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.992 12:28:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:51.251 12:28:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:51.251 { 00:06:51.251 "nbd_device": "/dev/nbd0", 00:06:51.251 "bdev_name": "Malloc0" 00:06:51.251 }, 00:06:51.251 { 00:06:51.251 "nbd_device": "/dev/nbd1", 00:06:51.251 "bdev_name": "Malloc1" 00:06:51.251 } 00:06:51.251 ]' 00:06:51.251 12:28:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:51.251 { 00:06:51.251 "nbd_device": "/dev/nbd0", 00:06:51.251 "bdev_name": "Malloc0" 00:06:51.251 }, 00:06:51.251 { 00:06:51.251 "nbd_device": "/dev/nbd1", 00:06:51.251 "bdev_name": "Malloc1" 00:06:51.251 } 00:06:51.251 ]' 00:06:51.251 12:28:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:51.251 12:28:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:51.251 /dev/nbd1' 00:06:51.251 12:28:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:51.251 /dev/nbd1' 00:06:51.251 12:28:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:51.251 12:28:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:51.251 12:28:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:51.251 12:28:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:51.251 12:28:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:51.251 12:28:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:51.251 12:28:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.251 12:28:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:51.251 12:28:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:51.251 12:28:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:51.251 12:28:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:51.251 12:28:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:51.251 256+0 records in 00:06:51.251 256+0 records out 00:06:51.251 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108161 s, 96.9 MB/s 00:06:51.251 12:28:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:51.251 12:28:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:51.251 256+0 records in 00:06:51.251 256+0 records out 00:06:51.251 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137059 s, 76.5 MB/s 00:06:51.251 12:28:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:51.251 12:28:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:51.251 256+0 records in 00:06:51.251 256+0 records out 00:06:51.251 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145325 s, 72.2 MB/s 00:06:51.251 12:28:17 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:51.251 12:28:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.251 12:28:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:51.251 12:28:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:51.251 12:28:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:51.251 12:28:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:51.251 12:28:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:51.251 12:28:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:51.251 12:28:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:51.251 12:28:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:51.251 12:28:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:51.251 12:28:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:51.251 12:28:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:51.251 12:28:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.251 12:28:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.251 12:28:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:51.251 12:28:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:51.251 12:28:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:51.251 12:28:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:51.510 12:28:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:51.510 12:28:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:51.510 12:28:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:51.510 12:28:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:51.510 12:28:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:51.510 12:28:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:51.510 12:28:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:51.510 12:28:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:51.510 12:28:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:51.510 12:28:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:51.769 12:28:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:51.769 12:28:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:51.769 12:28:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:51.769 12:28:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:51.769 12:28:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:06:51.769 12:28:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:51.769 12:28:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:51.769 12:28:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:51.769 12:28:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:51.769 12:28:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.769 12:28:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:52.027 12:28:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:52.027 12:28:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:52.027 12:28:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:52.027 12:28:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:52.027 12:28:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:52.027 12:28:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:52.027 12:28:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:52.027 12:28:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:52.027 12:28:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:52.027 12:28:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:52.027 12:28:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:52.027 12:28:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:52.027 12:28:17 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:52.027 12:28:18 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:52.287 [2024-12-16 12:28:18.249694] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:52.287 [2024-12-16 12:28:18.285123] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.287 [2024-12-16 12:28:18.285133] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.287 [2024-12-16 12:28:18.325474] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:52.287 [2024-12-16 12:28:18.325511] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:55.573 12:28:21 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:55.573 12:28:21 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:55.573 spdk_app_start Round 1 00:06:55.573 12:28:21 event.app_repeat -- event/event.sh@25 -- # waitforlisten 160661 /var/tmp/spdk-nbd.sock 00:06:55.573 12:28:21 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 160661 ']' 00:06:55.573 12:28:21 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:55.573 12:28:21 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:55.573 12:28:21 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:55.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
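Round 0 just completed one full data-verify cycle: two 64 MiB malloc bdevs created over RPC, exposed as /dev/nbd0 and /dev/nbd1, 1 MiB of random data written through each and compared back. Condensed to one device, the cycle in the trace looks like this (SPDK_DIR and the temp-file path are illustrative stand-ins for the workspace paths in the log):

    RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"  # SPDK_DIR is assumed
    $RPC bdev_malloc_create 64 4096          # 64 MiB bdev, 4 KiB blocks; prints "Malloc0"
    $RPC nbd_start_disk Malloc0 /dev/nbd0    # expose the bdev as a kernel block device
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0  # byte-for-byte verify of the round trip
    $RPC nbd_stop_disk /dev/nbd0
    rm -f /tmp/nbdrandtest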
00:06:55.573 12:28:21 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:55.573 12:28:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:55.573 12:28:21 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:55.573 12:28:21 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:55.573 12:28:21 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:55.573 Malloc0 00:06:55.573 12:28:21 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:55.832 Malloc1 00:06:55.832 12:28:21 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:55.832 12:28:21 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.832 12:28:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:55.832 12:28:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:55.832 12:28:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:55.832 12:28:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:55.832 12:28:21 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:55.832 12:28:21 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.832 12:28:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:55.832 12:28:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:55.832 12:28:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:55.832 12:28:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:55.832 12:28:21 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:55.832 12:28:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:55.832 12:28:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:55.832 12:28:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:56.091 /dev/nbd0 00:06:56.091 12:28:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:56.091 12:28:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:56.091 12:28:21 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:56.091 12:28:21 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:56.091 12:28:21 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:56.091 12:28:21 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:56.091 12:28:21 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:56.091 12:28:21 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:56.091 12:28:21 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:56.091 12:28:21 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:56.091 12:28:21 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:56.091 1+0 records in 00:06:56.091 1+0 records out 00:06:56.091 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196633 s, 20.8 MB/s 00:06:56.091 12:28:21 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:56.091 12:28:21 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:56.091 12:28:21 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:56.091 12:28:21 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:56.091 12:28:21 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:56.091 12:28:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:56.091 12:28:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:56.091 12:28:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:56.350 /dev/nbd1 00:06:56.350 12:28:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:56.350 12:28:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:56.350 12:28:22 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:56.350 12:28:22 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:56.350 12:28:22 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:56.350 12:28:22 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:56.350 12:28:22 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:56.350 12:28:22 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:56.350 12:28:22 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:56.350 12:28:22 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:56.350 12:28:22 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:56.350 1+0 records in 00:06:56.350 1+0 records out 00:06:56.350 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231482 s, 17.7 MB/s 00:06:56.350 12:28:22 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:56.350 12:28:22 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:56.350 12:28:22 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:56.350 12:28:22 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:56.350 12:28:22 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:56.350 12:28:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:56.350 12:28:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:56.350 12:28:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:56.350 12:28:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.350 12:28:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:56.609 12:28:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:56.609 { 00:06:56.609 "nbd_device": "/dev/nbd0", 00:06:56.609 "bdev_name": "Malloc0" 00:06:56.609 }, 00:06:56.609 { 00:06:56.609 "nbd_device": "/dev/nbd1", 00:06:56.609 "bdev_name": "Malloc1" 00:06:56.609 } 00:06:56.609 ]' 00:06:56.609 12:28:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:56.609 { 00:06:56.609 "nbd_device": "/dev/nbd0", 00:06:56.609 "bdev_name": "Malloc0" 00:06:56.609 }, 00:06:56.609 { 00:06:56.609 "nbd_device": "/dev/nbd1", 00:06:56.609 "bdev_name": "Malloc1" 00:06:56.609 } 00:06:56.609 ]' 00:06:56.609 12:28:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:56.609 12:28:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:56.609 /dev/nbd1' 00:06:56.609 12:28:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:56.609 /dev/nbd1' 00:06:56.609 12:28:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:56.609 12:28:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:56.609 12:28:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:56.609 12:28:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:56.609 12:28:22 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:56.609 12:28:22 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:56.609 12:28:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.609 12:28:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:56.609 12:28:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:56.609 12:28:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:56.609 12:28:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:56.609 12:28:22 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:56.609 256+0 records in 00:06:56.609 256+0 records out 00:06:56.609 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108291 s, 96.8 MB/s 00:06:56.609 12:28:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:56.609 12:28:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:56.609 256+0 records in 00:06:56.609 256+0 records out 00:06:56.609 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137664 s, 76.2 MB/s 00:06:56.609 12:28:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:56.609 12:28:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:56.609 256+0 records in 00:06:56.609 256+0 records out 00:06:56.609 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0142722 s, 73.5 MB/s 00:06:56.609 12:28:22 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:56.609 12:28:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.609 12:28:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:56.609 12:28:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:56.609 12:28:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:56.609 12:28:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:56.609 12:28:22 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:56.609 12:28:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:56.609 12:28:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:56.609 12:28:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:56.609 12:28:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:56.609 12:28:22 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:56.609 12:28:22 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:56.609 12:28:22 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.609 12:28:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.609 12:28:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:56.609 12:28:22 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:56.609 12:28:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:56.609 12:28:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:56.868 12:28:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:56.868 12:28:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:56.868 12:28:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:56.868 12:28:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:56.868 12:28:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:56.868 12:28:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:56.868 12:28:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:56.869 12:28:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:56.869 12:28:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:56.869 12:28:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:57.127 12:28:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:57.127 12:28:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:57.127 12:28:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:57.127 12:28:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.127 12:28:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.127 12:28:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:57.127 12:28:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:57.127 12:28:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.127 12:28:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:57.127 12:28:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.128 12:28:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:57.387 12:28:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:57.387 12:28:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:57.387 12:28:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:57.387 12:28:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:57.387 12:28:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:57.387 12:28:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:57.387 12:28:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:57.387 12:28:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:57.387 12:28:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:57.387 12:28:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:57.387 12:28:23 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:57.387 12:28:23 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:57.387 12:28:23 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:57.646 12:28:23 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:57.646 [2024-12-16 12:28:23.618996] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:57.646 [2024-12-16 12:28:23.654771] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.646 [2024-12-16 12:28:23.654770] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.646 [2024-12-16 12:28:23.695604] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:57.646 [2024-12-16 12:28:23.695642] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:00.932 12:28:26 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:00.932 12:28:26 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:00.932 spdk_app_start Round 2 00:07:00.932 12:28:26 event.app_repeat -- event/event.sh@25 -- # waitforlisten 160661 /var/tmp/spdk-nbd.sock 00:07:00.932 12:28:26 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 160661 ']' 00:07:00.932 12:28:26 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:00.932 12:28:26 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:00.932 12:28:26 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:00.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
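Round 1 ends the same way as Round 0: after both nbd_stop_disk calls, nbd_get_disks must come back as an empty JSON array before the next round starts. A sketch of that leak check, reusing the $RPC alias from the sketch above (the jq filter is the one logged from nbd_common.sh):

    disks_json=$($RPC nbd_get_disks)                   # '[]' once every disk is stopped
    names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$names" | grep -c /dev/nbd || true)  # grep -c exits 1 on zero matches
    [ "$count" -eq 0 ] || echo "leaked $count nbd device(s)" >&2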
00:07:00.932 12:28:26 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:00.932 12:28:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:00.932 12:28:26 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:00.932 12:28:26 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:00.932 12:28:26 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:00.932 Malloc0 00:07:00.932 12:28:26 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:01.191 Malloc1 00:07:01.191 12:28:27 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:01.191 12:28:27 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.191 12:28:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:01.191 12:28:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:01.191 12:28:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:01.191 12:28:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:01.191 12:28:27 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:01.191 12:28:27 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.191 12:28:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:01.191 12:28:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:01.191 12:28:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:01.191 12:28:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:01.191 12:28:27 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:01.191 12:28:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:01.191 12:28:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:01.191 12:28:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:01.450 /dev/nbd0 00:07:01.450 12:28:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:01.450 12:28:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:01.451 12:28:27 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:01.451 12:28:27 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:01.451 12:28:27 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:01.451 12:28:27 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:01.451 12:28:27 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:01.451 12:28:27 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:01.451 12:28:27 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:01.451 12:28:27 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:01.451 12:28:27 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:07:01.451 1+0 records in 00:07:01.451 1+0 records out 00:07:01.451 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000115797 s, 35.4 MB/s 00:07:01.451 12:28:27 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:01.451 12:28:27 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:01.451 12:28:27 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:01.451 12:28:27 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:01.451 12:28:27 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:01.451 12:28:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:01.451 12:28:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:01.451 12:28:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:01.709 /dev/nbd1 00:07:01.709 12:28:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:01.709 12:28:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:01.709 12:28:27 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:01.710 12:28:27 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:01.710 12:28:27 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:01.710 12:28:27 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:01.710 12:28:27 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:01.710 12:28:27 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:01.710 12:28:27 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:01.710 12:28:27 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:01.710 12:28:27 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:01.710 1+0 records in 00:07:01.710 1+0 records out 00:07:01.710 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197311 s, 20.8 MB/s 00:07:01.710 12:28:27 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:01.710 12:28:27 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:01.710 12:28:27 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:01.710 12:28:27 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:01.710 12:28:27 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:01.710 12:28:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:01.710 12:28:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:01.710 12:28:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:01.710 12:28:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.710 12:28:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:01.968 12:28:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:07:01.968 { 00:07:01.968 "nbd_device": "/dev/nbd0", 00:07:01.968 "bdev_name": "Malloc0" 00:07:01.968 }, 00:07:01.968 { 00:07:01.968 "nbd_device": "/dev/nbd1", 00:07:01.968 "bdev_name": "Malloc1" 00:07:01.968 } 00:07:01.968 ]' 00:07:01.968 12:28:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:01.968 { 00:07:01.968 "nbd_device": "/dev/nbd0", 00:07:01.968 "bdev_name": "Malloc0" 00:07:01.968 }, 00:07:01.968 { 00:07:01.969 "nbd_device": "/dev/nbd1", 00:07:01.969 "bdev_name": "Malloc1" 00:07:01.969 } 00:07:01.969 ]' 00:07:01.969 12:28:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:01.969 12:28:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:01.969 /dev/nbd1' 00:07:01.969 12:28:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:01.969 /dev/nbd1' 00:07:01.969 12:28:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:01.969 12:28:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:01.969 12:28:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:01.969 12:28:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:01.969 12:28:27 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:01.969 12:28:27 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:01.969 12:28:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:01.969 12:28:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:01.969 12:28:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:01.969 12:28:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:01.969 12:28:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:01.969 12:28:27 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:01.969 256+0 records in 00:07:01.969 256+0 records out 00:07:01.969 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0100859 s, 104 MB/s 00:07:01.969 12:28:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:01.969 12:28:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:01.969 256+0 records in 00:07:01.969 256+0 records out 00:07:01.969 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.013381 s, 78.4 MB/s 00:07:01.969 12:28:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:01.969 12:28:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:01.969 256+0 records in 00:07:01.969 256+0 records out 00:07:01.969 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014201 s, 73.8 MB/s 00:07:01.969 12:28:27 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:01.969 12:28:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:01.969 12:28:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:01.969 12:28:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:01.969 12:28:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:01.969 12:28:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:01.969 12:28:27 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:01.969 12:28:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:01.969 12:28:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:01.969 12:28:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:01.969 12:28:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:01.969 12:28:27 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:01.969 12:28:27 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:01.969 12:28:27 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.969 12:28:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:01.969 12:28:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:01.969 12:28:27 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:01.969 12:28:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:01.969 12:28:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:02.228 12:28:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:02.228 12:28:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:02.228 12:28:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:02.228 12:28:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.228 12:28:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.228 12:28:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:02.228 12:28:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:02.228 12:28:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.228 12:28:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.228 12:28:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:02.487 12:28:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:02.487 12:28:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:02.487 12:28:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:02.487 12:28:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.487 12:28:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.487 12:28:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:02.487 12:28:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:02.487 12:28:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.487 12:28:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:02.487 12:28:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.487 12:28:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:02.487 12:28:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:02.487 12:28:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:02.487 12:28:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:02.747 12:28:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:02.747 12:28:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:02.747 12:28:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:02.747 12:28:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:02.747 12:28:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:02.747 12:28:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:02.747 12:28:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:02.747 12:28:28 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:02.747 12:28:28 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:02.747 12:28:28 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:02.747 12:28:28 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:03.006 [2024-12-16 12:28:28.950632] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:03.006 [2024-12-16 12:28:28.986395] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.006 [2024-12-16 12:28:28.986396] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.006 [2024-12-16 12:28:29.026232] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:03.006 [2024-12-16 12:28:29.026272] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:06.299 12:28:31 event.app_repeat -- event/event.sh@38 -- # waitforlisten 160661 /var/tmp/spdk-nbd.sock 00:07:06.299 12:28:31 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 160661 ']' 00:07:06.299 12:28:31 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:06.299 12:28:31 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:06.299 12:28:31 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:06.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
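That was the last of the three scripted rounds: event.sh loops i over {0..2}, and each pass drives one verify cycle and then stops the app with spdk_kill_instance SIGTERM so app_repeat can restart spdk_app_start for the next round. A condensed sketch of that outer loop as it appears in this trace (repeat_pid is the launch-time pid, 160661 here; the final teardown is the killprocess logged just below):

    for round in 0 1 2; do
        echo "spdk_app_start Round $round"
        # ...create bdevs, start/verify/stop nbd disks as sketched above...
        $RPC spdk_kill_instance SIGTERM   # graceful per-round shutdown
        sleep 3                           # let app_repeat cycle its reactors
    done
    kill "$repeat_pid" && wait "$repeat_pid"  # condensed; killprocess also checks the pid first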
00:07:06.299 12:28:31 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:06.299 12:28:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:06.299 12:28:31 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:06.299 12:28:31 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:06.299 12:28:31 event.app_repeat -- event/event.sh@39 -- # killprocess 160661 00:07:06.299 12:28:31 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 160661 ']' 00:07:06.299 12:28:31 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 160661 00:07:06.299 12:28:31 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:07:06.299 12:28:31 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:06.299 12:28:31 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 160661 00:07:06.299 12:28:32 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:06.299 12:28:32 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:06.299 12:28:32 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 160661' 00:07:06.299 killing process with pid 160661 00:07:06.299 12:28:32 event.app_repeat -- common/autotest_common.sh@969 -- # kill 160661 00:07:06.299 12:28:32 event.app_repeat -- common/autotest_common.sh@974 -- # wait 160661 00:07:06.299 spdk_app_start is called in Round 0. 00:07:06.299 Shutdown signal received, stop current app iteration 00:07:06.299 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:07:06.299 spdk_app_start is called in Round 1. 00:07:06.299 Shutdown signal received, stop current app iteration 00:07:06.299 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:07:06.299 spdk_app_start is called in Round 2. 00:07:06.299 Shutdown signal received, stop current app iteration 00:07:06.299 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:07:06.299 spdk_app_start is called in Round 3. 
00:07:06.299 Shutdown signal received, stop current app iteration 00:07:06.299 12:28:32 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:06.299 12:28:32 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:06.299 00:07:06.299 real 0m16.489s 00:07:06.299 user 0m36.179s 00:07:06.299 sys 0m2.608s 00:07:06.299 12:28:32 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:06.299 12:28:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:06.299 ************************************ 00:07:06.299 END TEST app_repeat 00:07:06.299 ************************************ 00:07:06.299 12:28:32 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:06.299 12:28:32 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:06.299 12:28:32 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:06.299 12:28:32 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.299 12:28:32 event -- common/autotest_common.sh@10 -- # set +x 00:07:06.299 ************************************ 00:07:06.299 START TEST cpu_locks 00:07:06.299 ************************************ 00:07:06.299 12:28:32 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:06.559 * Looking for test storage... 00:07:06.559 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:06.559 12:28:32 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:06.559 12:28:32 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:07:06.559 12:28:32 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:06.559 12:28:32 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:06.559 12:28:32 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:06.559 12:28:32 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:06.559 12:28:32 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:06.559 12:28:32 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:06.559 12:28:32 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:06.559 12:28:32 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:06.559 12:28:32 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:06.559 12:28:32 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:06.559 12:28:32 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:06.559 12:28:32 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:06.559 12:28:32 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:06.559 12:28:32 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:06.559 12:28:32 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:06.559 12:28:32 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:06.559 12:28:32 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:06.559 12:28:32 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:06.559 12:28:32 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:06.559 12:28:32 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:06.559 12:28:32 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:06.559 12:28:32 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:06.559 12:28:32 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:06.559 12:28:32 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:06.559 12:28:32 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:06.559 12:28:32 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:06.559 12:28:32 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:06.559 12:28:32 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:06.559 12:28:32 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:06.559 12:28:32 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:06.559 12:28:32 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:06.559 12:28:32 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:06.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.559 --rc genhtml_branch_coverage=1 00:07:06.559 --rc genhtml_function_coverage=1 00:07:06.559 --rc genhtml_legend=1 00:07:06.559 --rc geninfo_all_blocks=1 00:07:06.559 --rc geninfo_unexecuted_blocks=1 00:07:06.559 00:07:06.559 ' 00:07:06.559 12:28:32 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:06.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.559 --rc genhtml_branch_coverage=1 00:07:06.559 --rc genhtml_function_coverage=1 00:07:06.559 --rc genhtml_legend=1 00:07:06.559 --rc geninfo_all_blocks=1 00:07:06.559 --rc geninfo_unexecuted_blocks=1 00:07:06.559 00:07:06.559 ' 00:07:06.559 12:28:32 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:06.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.559 --rc genhtml_branch_coverage=1 00:07:06.559 --rc genhtml_function_coverage=1 00:07:06.559 --rc genhtml_legend=1 00:07:06.559 --rc geninfo_all_blocks=1 00:07:06.559 --rc geninfo_unexecuted_blocks=1 00:07:06.559 00:07:06.559 ' 00:07:06.559 12:28:32 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:06.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.559 --rc genhtml_branch_coverage=1 00:07:06.559 --rc genhtml_function_coverage=1 00:07:06.559 --rc genhtml_legend=1 00:07:06.559 --rc geninfo_all_blocks=1 00:07:06.559 --rc geninfo_unexecuted_blocks=1 00:07:06.559 00:07:06.559 ' 00:07:06.559 12:28:32 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:06.559 12:28:32 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:06.559 12:28:32 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:06.559 12:28:32 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:06.559 12:28:32 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:06.559 12:28:32 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.559 12:28:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:06.559 ************************************ 
00:07:06.559 START TEST default_locks 00:07:06.559 ************************************ 00:07:06.559 12:28:32 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:07:06.559 12:28:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=163823 00:07:06.559 12:28:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 163823 00:07:06.559 12:28:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:06.559 12:28:32 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 163823 ']' 00:07:06.559 12:28:32 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.559 12:28:32 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:06.559 12:28:32 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.559 12:28:32 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:06.559 12:28:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:06.559 [2024-12-16 12:28:32.542358] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:06.559 [2024-12-16 12:28:32.542403] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid163823 ] 00:07:06.559 [2024-12-16 12:28:32.607635] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.819 [2024-12-16 12:28:32.647970] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.819 12:28:32 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:06.819 12:28:32 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:07:06.819 12:28:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 163823 00:07:06.819 12:28:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 163823 00:07:06.819 12:28:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:07.078 lslocks: write error 00:07:07.078 12:28:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 163823 00:07:07.078 12:28:33 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 163823 ']' 00:07:07.078 12:28:33 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 163823 00:07:07.078 12:28:33 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:07:07.078 12:28:33 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:07.338 12:28:33 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 163823 00:07:07.338 12:28:33 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:07.338 12:28:33 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:07.338 12:28:33 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
163823' 00:07:07.338 killing process with pid 163823 00:07:07.338 12:28:33 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 163823 00:07:07.338 12:28:33 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 163823 00:07:07.598 12:28:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 163823 00:07:07.598 12:28:33 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:07:07.598 12:28:33 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 163823 00:07:07.598 12:28:33 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:07.598 12:28:33 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.598 12:28:33 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:07.598 12:28:33 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.598 12:28:33 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 163823 00:07:07.598 12:28:33 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 163823 ']' 00:07:07.598 12:28:33 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.598 12:28:33 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:07.598 12:28:33 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
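The `locks_exist 163823` check a few records up is how every test here proves the target really holds its CPU-core lock: `lslocks -p <pid>` lists the file locks the process owns, and `grep -q spdk_cpu_lock` looks for the per-core lock file among them. The stray "lslocks: write error" is almost certainly harmless: grep -q exits on the first match, so lslocks hits a broken pipe while still writing. A standalone sketch, assuming the target pid is in $pid:

    # Sketch: assert that process $pid holds at least one SPDK CPU-core lock.
    locks_exist_sketch() {
        local pid=$1
        # lslocks prints one row per held lock; SPDK's lock files are named
        # /var/tmp/spdk_cpu_lock_NNN, one per claimed core.
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }
    locks_exist_sketch "$pid" && echo "pid $pid holds a core lock"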
00:07:07.598 12:28:33 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:07.598 12:28:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:07.598 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (163823) - No such process 00:07:07.598 ERROR: process (pid: 163823) is no longer running 00:07:07.598 12:28:33 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:07.598 12:28:33 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:07:07.598 12:28:33 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:07:07.598 12:28:33 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:07.598 12:28:33 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:07.598 12:28:33 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:07.598 12:28:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:07.598 12:28:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:07.598 12:28:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:07.598 12:28:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:07.598 00:07:07.598 real 0m1.023s 00:07:07.598 user 0m0.957s 00:07:07.598 sys 0m0.471s 00:07:07.598 12:28:33 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.598 12:28:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:07.598 ************************************ 00:07:07.598 END TEST default_locks 00:07:07.598 ************************************ 00:07:07.598 12:28:33 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:07.598 12:28:33 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:07.598 12:28:33 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.598 12:28:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:07.598 ************************************ 00:07:07.598 START TEST default_locks_via_rpc 00:07:07.598 ************************************ 00:07:07.598 12:28:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:07:07.598 12:28:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=163940 00:07:07.598 12:28:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 163940 00:07:07.598 12:28:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:07.598 12:28:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 163940 ']' 00:07:07.598 12:28:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.598 12:28:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:07.598 12:28:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
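The `NOT waitforlisten 163823` sequence that just resolved is the suite's negative assertion: once the target is killed, waiting for its socket has to fail, and the harness turns that expected failure into a pass (kill reports "No such process", es=1, and `(( !es == 0 ))` succeeds). A minimal sketch of that inversion helper; NOT_sketch is our name for it:

    # Sketch: succeed only if the wrapped command fails, like the NOT helper above.
    NOT_sketch() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))      # our exit status is the inverse of the command's
    }
    NOT_sketch kill -0 99999999 && echo "process is gone, as expected"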
00:07:07.598 12:28:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:07.598 12:28:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.598 [2024-12-16 12:28:33.634530] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:07.598 [2024-12-16 12:28:33.634570] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid163940 ] 00:07:07.858 [2024-12-16 12:28:33.702774] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.858 [2024-12-16 12:28:33.742678] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.117 12:28:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:08.117 12:28:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:08.117 12:28:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:08.117 12:28:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.117 12:28:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.117 12:28:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.117 12:28:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:08.117 12:28:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:08.117 12:28:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:08.117 12:28:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:08.117 12:28:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:08.117 12:28:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.117 12:28:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.117 12:28:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.118 12:28:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 163940 00:07:08.118 12:28:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 163940 00:07:08.118 12:28:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:08.377 12:28:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 163940 00:07:08.377 12:28:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 163940 ']' 00:07:08.377 12:28:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 163940 00:07:08.377 12:28:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:07:08.377 12:28:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:08.377 12:28:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 163940 00:07:08.637 12:28:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:08.637 
12:28:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:08.637 12:28:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 163940' 00:07:08.637 killing process with pid 163940 00:07:08.637 12:28:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 163940 00:07:08.637 12:28:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 163940 00:07:08.896 00:07:08.896 real 0m1.193s 00:07:08.896 user 0m1.158s 00:07:08.896 sys 0m0.545s 00:07:08.897 12:28:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:08.897 12:28:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.897 ************************************ 00:07:08.897 END TEST default_locks_via_rpc 00:07:08.897 ************************************ 00:07:08.897 12:28:34 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:08.897 12:28:34 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:08.897 12:28:34 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:08.897 12:28:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:08.897 ************************************ 00:07:08.897 START TEST non_locking_app_on_locked_coremask 00:07:08.897 ************************************ 00:07:08.897 12:28:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:07:08.897 12:28:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=164137 00:07:08.897 12:28:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 164137 /var/tmp/spdk.sock 00:07:08.897 12:28:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:08.897 12:28:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 164137 ']' 00:07:08.897 12:28:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.897 12:28:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:08.897 12:28:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.897 12:28:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:08.897 12:28:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:08.897 [2024-12-16 12:28:34.890892] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
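default_locks_via_rpc, which finished above, drives the same lock files through the RPC surface instead of command-line flags: the target starts up holding its core-0 lock, `framework_disable_cpumask_locks` releases it (so no_locks passes), and `framework_enable_cpumask_locks` re-claims it (so locks_exist passes). A sketch of the same round trip using SPDK's scripts/rpc.py client, the same methods the rpc_cmd calls above invoke; it assumes rpc.py is on PATH and the target listens on the default /var/tmp/spdk.sock:

    # Sketch: toggle CPU-core lock files on a running spdk_tgt via JSON-RPC.
    rpc.py framework_disable_cpumask_locks   # releases /var/tmp/spdk_cpu_lock_*
    rpc.py framework_enable_cpumask_locks    # re-claims the per-core lock files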
00:07:08.897 [2024-12-16 12:28:34.890933] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164137 ] 00:07:08.897 [2024-12-16 12:28:34.960112] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.163 [2024-12-16 12:28:35.000272] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.163 12:28:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:09.163 12:28:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:09.163 12:28:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=164338 00:07:09.163 12:28:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 164338 /var/tmp/spdk2.sock 00:07:09.163 12:28:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:09.163 12:28:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 164338 ']' 00:07:09.163 12:28:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:09.164 12:28:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:09.164 12:28:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:09.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:09.164 12:28:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:09.164 12:28:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:09.423 [2024-12-16 12:28:35.245555] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:09.423 [2024-12-16 12:28:35.245601] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164338 ] 00:07:09.423 [2024-12-16 12:28:35.317889] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
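non_locking_app_on_locked_coremask, starting above, runs two targets on one core: the first (pid 164137) claims core 0's lock file, and the second (pid 164338) can only share core 0 because it is launched with --disable-cpumask-locks and its own RPC socket, which is exactly the "CPU core locks deactivated" notice just printed. A sketch of that dual-launch pattern (build paths shortened; the harness runs the full path shown above):

    # Sketch: two spdk_tgt instances pinned to core 0; the second opts out of locking.
    ./build/bin/spdk_tgt -m 0x1 &
    pid1=$!                                   # claims /var/tmp/spdk_cpu_lock_000
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!                                   # shares core 0 without taking the lock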
00:07:09.423 [2024-12-16 12:28:35.317910] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.423 [2024-12-16 12:28:35.396179] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.361 12:28:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:10.361 12:28:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:10.361 12:28:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 164137 00:07:10.361 12:28:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 164137 00:07:10.361 12:28:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:10.621 lslocks: write error 00:07:10.621 12:28:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 164137 00:07:10.621 12:28:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 164137 ']' 00:07:10.621 12:28:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 164137 00:07:10.621 12:28:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:10.621 12:28:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:10.621 12:28:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 164137 00:07:10.621 12:28:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:10.621 12:28:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:10.621 12:28:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 164137' 00:07:10.621 killing process with pid 164137 00:07:10.621 12:28:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 164137 00:07:10.621 12:28:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 164137 00:07:11.190 12:28:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 164338 00:07:11.190 12:28:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 164338 ']' 00:07:11.190 12:28:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 164338 00:07:11.190 12:28:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:11.190 12:28:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:11.190 12:28:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 164338 00:07:11.190 12:28:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:11.190 12:28:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:11.190 12:28:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 164338' 00:07:11.190 killing 
process with pid 164338 00:07:11.190 12:28:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 164338 00:07:11.190 12:28:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 164338 00:07:11.759 00:07:11.759 real 0m2.703s 00:07:11.759 user 0m2.823s 00:07:11.759 sys 0m0.927s 00:07:11.759 12:28:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:11.759 12:28:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:11.759 ************************************ 00:07:11.759 END TEST non_locking_app_on_locked_coremask 00:07:11.759 ************************************ 00:07:11.759 12:28:37 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:11.759 12:28:37 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:11.759 12:28:37 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:11.759 12:28:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.759 ************************************ 00:07:11.759 START TEST locking_app_on_unlocked_coremask 00:07:11.759 ************************************ 00:07:11.760 12:28:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:07:11.760 12:28:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=164637 00:07:11.760 12:28:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 164637 /var/tmp/spdk.sock 00:07:11.760 12:28:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:11.760 12:28:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 164637 ']' 00:07:11.760 12:28:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.760 12:28:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:11.760 12:28:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.760 12:28:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:11.760 12:28:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:11.760 [2024-12-16 12:28:37.662922] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:11.760 [2024-12-16 12:28:37.662964] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164637 ] 00:07:11.760 [2024-12-16 12:28:37.732850] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:11.760 [2024-12-16 12:28:37.732873] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.760 [2024-12-16 12:28:37.772735] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.019 12:28:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:12.019 12:28:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:12.019 12:28:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=164832 00:07:12.019 12:28:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 164832 /var/tmp/spdk2.sock 00:07:12.019 12:28:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 164832 ']' 00:07:12.019 12:28:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:12.019 12:28:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:12.019 12:28:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:12.019 12:28:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:12.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:12.019 12:28:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:12.019 12:28:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:12.019 [2024-12-16 12:28:38.034413] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
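Every startup in this log is gated by waitforlisten, which retries (local max_retries=100 in the xtrace above) until the new target's UNIX-domain RPC socket is usable. A deliberately simplified stand-in that only polls for the socket node; the real helper does more, for instance signalling the pid (the earlier "kill: (163823) - No such process" failure comes from that check):

    # Sketch: block until an RPC socket shows up (simplified waitforlisten).
    wait_for_sock() {
        local sock=$1 i
        for (( i = 0; i < 100; i++ )); do   # max_retries=100, as in the trace
            [ -S "$sock" ] && return 0
            sleep 0.1
        done
        return 1
    }
    wait_for_sock /var/tmp/spdk2.sock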
00:07:12.019 [2024-12-16 12:28:38.034463] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164832 ] 00:07:12.279 [2024-12-16 12:28:38.108329] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.279 [2024-12-16 12:28:38.182607] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.847 12:28:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:12.847 12:28:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:12.847 12:28:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 164832 00:07:12.847 12:28:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 164832 00:07:12.847 12:28:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:13.416 lslocks: write error 00:07:13.416 12:28:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 164637 00:07:13.416 12:28:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 164637 ']' 00:07:13.416 12:28:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 164637 00:07:13.416 12:28:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:13.416 12:28:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:13.416 12:28:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 164637 00:07:13.416 12:28:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:13.416 12:28:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:13.416 12:28:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 164637' 00:07:13.416 killing process with pid 164637 00:07:13.416 12:28:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 164637 00:07:13.416 12:28:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 164637 00:07:14.353 12:28:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 164832 00:07:14.353 12:28:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 164832 ']' 00:07:14.353 12:28:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 164832 00:07:14.353 12:28:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:14.353 12:28:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:14.353 12:28:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 164832 00:07:14.353 12:28:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:14.353 12:28:40 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:14.353 12:28:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 164832' 00:07:14.353 killing process with pid 164832 00:07:14.353 12:28:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 164832 00:07:14.353 12:28:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 164832 00:07:14.613 00:07:14.613 real 0m2.861s 00:07:14.613 user 0m2.981s 00:07:14.613 sys 0m0.960s 00:07:14.613 12:28:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:14.613 12:28:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:14.613 ************************************ 00:07:14.613 END TEST locking_app_on_unlocked_coremask 00:07:14.613 ************************************ 00:07:14.613 12:28:40 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:14.613 12:28:40 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:14.613 12:28:40 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:14.613 12:28:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:14.613 ************************************ 00:07:14.613 START TEST locking_app_on_locked_coremask 00:07:14.613 ************************************ 00:07:14.613 12:28:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:07:14.613 12:28:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=165246 00:07:14.613 12:28:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 165246 /var/tmp/spdk.sock 00:07:14.613 12:28:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:14.613 12:28:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 165246 ']' 00:07:14.613 12:28:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.613 12:28:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:14.613 12:28:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.613 12:28:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:14.613 12:28:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:14.613 [2024-12-16 12:28:40.593624] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
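Each test tears down through killprocess, whose shape repeats above: kill -0 confirms the pid still exists, ps --no-headers -o comm= fetches the process name (reactor_0 here) and refuses to proceed if it is a sudo wrapper, then kill followed by wait reaps the target. A condensed sketch of that sequence:

    # Sketch: the killprocess pattern seen throughout this suite.
    killprocess_sketch() {
        local pid=$1 name
        kill -0 "$pid" || return 1                   # is it still running?
        name=$(ps --no-headers -o comm= "$pid")      # e.g. reactor_0
        [ "$name" = sudo ] && return 1               # never signal a sudo wrapper
        kill "$pid" && wait "$pid"                   # wait works: the target is our child
    }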
00:07:14.613 [2024-12-16 12:28:40.593672] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165246 ] 00:07:14.613 [2024-12-16 12:28:40.663731] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.872 [2024-12-16 12:28:40.702733] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.872 12:28:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:14.872 12:28:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:14.872 12:28:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=165328 00:07:14.872 12:28:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 165328 /var/tmp/spdk2.sock 00:07:14.872 12:28:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:14.872 12:28:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:14.872 12:28:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 165328 /var/tmp/spdk2.sock 00:07:14.872 12:28:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:14.872 12:28:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:14.872 12:28:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:14.872 12:28:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:14.872 12:28:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 165328 /var/tmp/spdk2.sock 00:07:14.872 12:28:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 165328 ']' 00:07:14.872 12:28:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:14.872 12:28:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:14.872 12:28:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:14.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:14.872 12:28:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:14.872 12:28:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:15.131 [2024-12-16 12:28:40.959329] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:15.131 [2024-12-16 12:28:40.959369] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165328 ] 00:07:15.131 [2024-12-16 12:28:41.035324] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 165246 has claimed it. 00:07:15.131 [2024-12-16 12:28:41.035362] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:15.700 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (165328) - No such process 00:07:15.700 ERROR: process (pid: 165328) is no longer running 00:07:15.700 12:28:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:15.700 12:28:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:15.700 12:28:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:15.700 12:28:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:15.700 12:28:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:15.700 12:28:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:15.700 12:28:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 165246 00:07:15.700 12:28:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 165246 00:07:15.700 12:28:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:15.959 lslocks: write error 00:07:15.959 12:28:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 165246 00:07:15.959 12:28:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 165246 ']' 00:07:15.959 12:28:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 165246 00:07:15.959 12:28:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:15.959 12:28:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:15.959 12:28:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 165246 00:07:16.217 12:28:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:16.217 12:28:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:16.217 12:28:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 165246' 00:07:16.217 killing process with pid 165246 00:07:16.217 12:28:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 165246 00:07:16.217 12:28:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 165246 00:07:16.476 00:07:16.476 real 0m1.824s 00:07:16.476 user 0m1.936s 00:07:16.476 sys 0m0.628s 00:07:16.476 12:28:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:16.476 
12:28:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:16.476 ************************************ 00:07:16.476 END TEST locking_app_on_locked_coremask 00:07:16.476 ************************************ 00:07:16.476 12:28:42 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:16.476 12:28:42 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:16.476 12:28:42 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:16.476 12:28:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:16.476 ************************************ 00:07:16.476 START TEST locking_overlapped_coremask 00:07:16.476 ************************************ 00:07:16.476 12:28:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:07:16.476 12:28:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=165584 00:07:16.476 12:28:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 165584 /var/tmp/spdk.sock 00:07:16.477 12:28:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:16.477 12:28:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 165584 ']' 00:07:16.477 12:28:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.477 12:28:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:16.477 12:28:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.477 12:28:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:16.477 12:28:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:16.477 [2024-12-16 12:28:42.487839] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
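locking_overlapped_coremask launches its first target with -m 0x7; the EAL banner in the next records reports "Total cores available: 3" with reactors on cores 0, 1 and 2, because the mask is simply a bitmap of core ids. A small loop that expands such a mask:

    # Sketch: list the core ids selected by a hex coremask like -m 0x7.
    mask=0x7
    for (( core = 0; core < 64; core++ )); do
        (( (mask >> core) & 1 )) && echo "core $core"
    done                                    # prints core 0, core 1, core 2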
00:07:16.477 [2024-12-16 12:28:42.487887] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165584 ] 00:07:16.736 [2024-12-16 12:28:42.556701] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:16.736 [2024-12-16 12:28:42.594520] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.736 [2024-12-16 12:28:42.594625] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.736 [2024-12-16 12:28:42.594626] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:16.736 12:28:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:16.736 12:28:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:16.736 12:28:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=165593 00:07:16.736 12:28:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 165593 /var/tmp/spdk2.sock 00:07:16.736 12:28:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:16.736 12:28:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:16.736 12:28:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 165593 /var/tmp/spdk2.sock 00:07:16.736 12:28:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:16.996 12:28:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:16.996 12:28:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:16.996 12:28:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:16.996 12:28:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 165593 /var/tmp/spdk2.sock 00:07:16.996 12:28:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 165593 ']' 00:07:16.996 12:28:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:16.996 12:28:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:16.996 12:28:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:16.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:16.996 12:28:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:16.996 12:28:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:16.996 [2024-12-16 12:28:42.852881] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
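The second target asks for -m 0x1c (cores 2, 3, 4) while the first already holds 0x7 (cores 0, 1, 2); the two masks intersect on exactly core 2, which is why the claim error in the next record names core 2. The overlap is a bitwise AND:

    # Sketch: 0x7 (cores 0-2) vs 0x1c (cores 2-4) collide on core 2.
    printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. core 2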
00:07:16.996 [2024-12-16 12:28:42.852925] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165593 ] 00:07:16.996 [2024-12-16 12:28:42.930143] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 165584 has claimed it. 00:07:16.996 [2024-12-16 12:28:42.930181] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:17.565 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (165593) - No such process 00:07:17.565 ERROR: process (pid: 165593) is no longer running 00:07:17.565 12:28:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:17.565 12:28:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:17.565 12:28:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:17.565 12:28:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:17.565 12:28:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:17.565 12:28:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:17.565 12:28:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:17.565 12:28:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:17.565 12:28:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:17.565 12:28:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:17.565 12:28:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 165584 00:07:17.565 12:28:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 165584 ']' 00:07:17.565 12:28:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 165584 00:07:17.565 12:28:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:07:17.565 12:28:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:17.565 12:28:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 165584 00:07:17.565 12:28:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:17.565 12:28:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:17.565 12:28:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 165584' 00:07:17.565 killing process with pid 165584 00:07:17.566 12:28:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 165584 00:07:17.566 12:28:43 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 165584 00:07:17.825 00:07:17.825 real 0m1.433s 00:07:17.825 user 0m3.908s 00:07:17.825 sys 0m0.409s 00:07:17.825 12:28:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:17.825 12:28:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:17.825 ************************************ 00:07:17.825 END TEST locking_overlapped_coremask 00:07:17.825 ************************************ 00:07:18.084 12:28:43 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:18.084 12:28:43 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:18.084 12:28:43 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:18.084 12:28:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:18.084 ************************************ 00:07:18.084 START TEST locking_overlapped_coremask_via_rpc 00:07:18.084 ************************************ 00:07:18.084 12:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:07:18.084 12:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=165846 00:07:18.084 12:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 165846 /var/tmp/spdk.sock 00:07:18.084 12:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:18.084 12:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 165846 ']' 00:07:18.084 12:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.084 12:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:18.084 12:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.084 12:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:18.084 12:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.084 [2024-12-16 12:28:43.989827] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:18.085 [2024-12-16 12:28:43.989872] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165846 ] 00:07:18.085 [2024-12-16 12:28:44.057484] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
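After the failed claim, check_remaining_locks in the test that just ended asserted that the lock files on disk are still exactly the first target's set: the glob /var/tmp/spdk_cpu_lock_* must equal the expected brace expansion /var/tmp/spdk_cpu_lock_{000..002}. A standalone sketch of that comparison (quoting the right-hand side instead of the backslash-escaped pattern the xtrace shows, which is equivalent):

    # Sketch: verify exactly cores 0-2 are locked - nothing more, nothing less.
    locks=(/var/tmp/spdk_cpu_lock_*)
    expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ ${locks[*]} == "${expected[*]}" ]] && echo "lock files match cores 0-2"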
00:07:18.085 [2024-12-16 12:28:44.057507] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:18.085 [2024-12-16 12:28:44.096109] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.085 [2024-12-16 12:28:44.096220] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.085 [2024-12-16 12:28:44.096221] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:18.344 12:28:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:18.344 12:28:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:18.344 12:28:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=165862 00:07:18.344 12:28:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 165862 /var/tmp/spdk2.sock 00:07:18.344 12:28:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:18.344 12:28:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 165862 ']' 00:07:18.344 12:28:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:18.344 12:28:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:18.344 12:28:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:18.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:18.344 12:28:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:18.344 12:28:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.344 [2024-12-16 12:28:44.351160] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:18.344 [2024-12-16 12:28:44.351204] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165862 ] 00:07:18.603 [2024-12-16 12:28:44.425299] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:18.603 [2024-12-16 12:28:44.425328] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:18.603 [2024-12-16 12:28:44.505706] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:18.603 [2024-12-16 12:28:44.509156] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:18.603 [2024-12-16 12:28:44.509156] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:07:19.178 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:19.178 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:19.178 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:19.178 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.178 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.178 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.178 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:19.178 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:19.178 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:19.178 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:19.178 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:19.179 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:19.179 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:19.179 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:19.179 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.179 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.179 [2024-12-16 12:28:45.201181] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 165846 has claimed it. 
00:07:19.179 request: 00:07:19.179 { 00:07:19.179 "method": "framework_enable_cpumask_locks", 00:07:19.179 "req_id": 1 00:07:19.179 } 00:07:19.179 Got JSON-RPC error response 00:07:19.179 response: 00:07:19.179 { 00:07:19.179 "code": -32603, 00:07:19.179 "message": "Failed to claim CPU core: 2" 00:07:19.179 } 00:07:19.179 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:19.179 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:19.179 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:19.179 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:19.179 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:19.179 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 165846 /var/tmp/spdk.sock 00:07:19.179 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 165846 ']' 00:07:19.179 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.179 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:19.179 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.179 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:19.179 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.440 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:19.440 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:19.440 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 165862 /var/tmp/spdk2.sock 00:07:19.440 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 165862 ']' 00:07:19.440 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:19.440 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:19.440 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:19.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
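Note the NOT wrapper in the xtrace above: the harness expects this RPC to fail, so the nonzero exit status (es=1) is folded back into success. A simplified sketch of the idiom, assuming the real helper in autotest_common.sh adds bookkeeping around it:

    # NOT <cmd> succeeds exactly when <cmd> fails.
    NOT() { ! "$@"; }
    NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks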
00:07:19.440 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:19.440 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.700 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:19.700 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:19.700 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:19.700 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:19.700 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:19.700 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:19.700 00:07:19.700 real 0m1.674s 00:07:19.700 user 0m0.802s 00:07:19.700 sys 0m0.148s 00:07:19.700 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:19.700 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.700 ************************************ 00:07:19.700 END TEST locking_overlapped_coremask_via_rpc 00:07:19.700 ************************************ 00:07:19.700 12:28:45 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:19.700 12:28:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 165846 ]] 00:07:19.700 12:28:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 165846 00:07:19.700 12:28:45 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 165846 ']' 00:07:19.700 12:28:45 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 165846 00:07:19.700 12:28:45 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:19.700 12:28:45 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:19.700 12:28:45 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 165846 00:07:19.700 12:28:45 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:19.700 12:28:45 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:19.700 12:28:45 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 165846' 00:07:19.700 killing process with pid 165846 00:07:19.700 12:28:45 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 165846 00:07:19.700 12:28:45 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 165846 00:07:20.270 12:28:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 165862 ]] 00:07:20.270 12:28:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 165862 00:07:20.270 12:28:46 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 165862 ']' 00:07:20.270 12:28:46 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 165862 00:07:20.270 12:28:46 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:20.270 12:28:46 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
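check_remaining_locks, traced just above, is the pass/fail core of the test: after the failed claim, only the first target's lock files may remain on disk. Reconstructed from the xtrace of cpu_locks.sh lines 36-38:

    locks=(/var/tmp/spdk_cpu_lock_*)                    # lock files actually present
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # cores 0-2, held by pid 165846
    [[ ${locks[*]} == "${locks_expected[*]}" ]]         # any extra or missing file fails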
00:07:20.270 12:28:46 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 165862 00:07:20.270 12:28:46 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:20.270 12:28:46 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:20.270 12:28:46 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 165862' 00:07:20.270 killing process with pid 165862 00:07:20.270 12:28:46 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 165862 00:07:20.270 12:28:46 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 165862 00:07:20.530 12:28:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:20.530 12:28:46 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:20.530 12:28:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 165846 ]] 00:07:20.530 12:28:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 165846 00:07:20.530 12:28:46 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 165846 ']' 00:07:20.530 12:28:46 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 165846 00:07:20.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (165846) - No such process 00:07:20.530 12:28:46 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 165846 is not found' 00:07:20.530 Process with pid 165846 is not found 00:07:20.530 12:28:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 165862 ]] 00:07:20.530 12:28:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 165862 00:07:20.530 12:28:46 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 165862 ']' 00:07:20.530 12:28:46 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 165862 00:07:20.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (165862) - No such process 00:07:20.530 12:28:46 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 165862 is not found' 00:07:20.530 Process with pid 165862 is not found 00:07:20.530 12:28:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:20.530 00:07:20.530 real 0m14.129s 00:07:20.530 user 0m24.293s 00:07:20.530 sys 0m5.065s 00:07:20.530 12:28:46 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.530 12:28:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:20.530 ************************************ 00:07:20.530 END TEST cpu_locks 00:07:20.530 ************************************ 00:07:20.530 00:07:20.530 real 0m39.489s 00:07:20.530 user 1m15.487s 00:07:20.530 sys 0m8.698s 00:07:20.530 12:28:46 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.530 12:28:46 event -- common/autotest_common.sh@10 -- # set +x 00:07:20.530 ************************************ 00:07:20.530 END TEST event 00:07:20.530 ************************************ 00:07:20.530 12:28:46 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:20.530 12:28:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:20.530 12:28:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.530 12:28:46 -- common/autotest_common.sh@10 -- # set +x 00:07:20.530 ************************************ 00:07:20.530 START TEST thread 00:07:20.530 ************************************ 00:07:20.530 12:28:46 thread -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:20.790 * Looking for test storage... 00:07:20.790 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:20.790 12:28:46 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:20.790 12:28:46 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:07:20.790 12:28:46 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:20.790 12:28:46 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:20.790 12:28:46 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:20.790 12:28:46 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:20.790 12:28:46 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:20.790 12:28:46 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:20.790 12:28:46 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:20.790 12:28:46 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:20.790 12:28:46 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:20.790 12:28:46 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:20.790 12:28:46 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:20.790 12:28:46 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:20.790 12:28:46 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:20.790 12:28:46 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:20.790 12:28:46 thread -- scripts/common.sh@345 -- # : 1 00:07:20.790 12:28:46 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:20.790 12:28:46 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:20.790 12:28:46 thread -- scripts/common.sh@365 -- # decimal 1 00:07:20.790 12:28:46 thread -- scripts/common.sh@353 -- # local d=1 00:07:20.790 12:28:46 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:20.790 12:28:46 thread -- scripts/common.sh@355 -- # echo 1 00:07:20.790 12:28:46 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:20.790 12:28:46 thread -- scripts/common.sh@366 -- # decimal 2 00:07:20.790 12:28:46 thread -- scripts/common.sh@353 -- # local d=2 00:07:20.790 12:28:46 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:20.790 12:28:46 thread -- scripts/common.sh@355 -- # echo 2 00:07:20.790 12:28:46 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:20.790 12:28:46 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:20.790 12:28:46 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:20.790 12:28:46 thread -- scripts/common.sh@368 -- # return 0 00:07:20.790 12:28:46 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:20.790 12:28:46 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:20.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.790 --rc genhtml_branch_coverage=1 00:07:20.790 --rc genhtml_function_coverage=1 00:07:20.790 --rc genhtml_legend=1 00:07:20.790 --rc geninfo_all_blocks=1 00:07:20.790 --rc geninfo_unexecuted_blocks=1 00:07:20.790 00:07:20.790 ' 00:07:20.790 12:28:46 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:20.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.790 --rc genhtml_branch_coverage=1 00:07:20.790 --rc genhtml_function_coverage=1 00:07:20.790 --rc genhtml_legend=1 00:07:20.790 --rc geninfo_all_blocks=1 00:07:20.791 --rc geninfo_unexecuted_blocks=1 00:07:20.791 00:07:20.791 ' 00:07:20.791 12:28:46 thread 
-- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:20.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.791 --rc genhtml_branch_coverage=1 00:07:20.791 --rc genhtml_function_coverage=1 00:07:20.791 --rc genhtml_legend=1 00:07:20.791 --rc geninfo_all_blocks=1 00:07:20.791 --rc geninfo_unexecuted_blocks=1 00:07:20.791 00:07:20.791 ' 00:07:20.791 12:28:46 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:20.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.791 --rc genhtml_branch_coverage=1 00:07:20.791 --rc genhtml_function_coverage=1 00:07:20.791 --rc genhtml_legend=1 00:07:20.791 --rc geninfo_all_blocks=1 00:07:20.791 --rc geninfo_unexecuted_blocks=1 00:07:20.791 00:07:20.791 ' 00:07:20.791 12:28:46 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:20.791 12:28:46 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:20.791 12:28:46 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.791 12:28:46 thread -- common/autotest_common.sh@10 -- # set +x 00:07:20.791 ************************************ 00:07:20.791 START TEST thread_poller_perf 00:07:20.791 ************************************ 00:07:20.791 12:28:46 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:20.791 [2024-12-16 12:28:46.742267] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:20.791 [2024-12-16 12:28:46.742335] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166417 ] 00:07:20.791 [2024-12-16 12:28:46.812906] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.791 [2024-12-16 12:28:46.851089] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.791 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:22.169 [2024-12-16T11:28:48.236Z] ====================================== 00:07:22.169 [2024-12-16T11:28:48.236Z] busy:2104759848 (cyc) 00:07:22.169 [2024-12-16T11:28:48.236Z] total_run_count: 417000 00:07:22.169 [2024-12-16T11:28:48.236Z] tsc_hz: 2100000000 (cyc) 00:07:22.169 [2024-12-16T11:28:48.236Z] ====================================== 00:07:22.169 [2024-12-16T11:28:48.236Z] poller_cost: 5047 (cyc), 2403 (nsec) 00:07:22.169 00:07:22.169 real 0m1.193s 00:07:22.169 user 0m1.103s 00:07:22.169 sys 0m0.086s 00:07:22.169 12:28:47 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:22.169 12:28:47 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:22.169 ************************************ 00:07:22.169 END TEST thread_poller_perf 00:07:22.169 ************************************ 00:07:22.169 12:28:47 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:22.169 12:28:47 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:22.169 12:28:47 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:22.169 12:28:47 thread -- common/autotest_common.sh@10 -- # set +x 00:07:22.169 ************************************ 00:07:22.169 START TEST thread_poller_perf 00:07:22.169 ************************************ 00:07:22.170 12:28:47 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:22.170 [2024-12-16 12:28:48.003651] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:22.170 [2024-12-16 12:28:48.003719] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166660 ] 00:07:22.170 [2024-12-16 12:28:48.076826] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.170 [2024-12-16 12:28:48.117042] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.170 Running 1000 pollers for 1 seconds with 0 microseconds period. 
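poller_cost in the table above is busy cycles divided by iterations, then converted to nanoseconds with tsc_hz; assuming integer truncation at each step (which matches the printed values), the arithmetic is:

    # 2104759848 busy cyc / 417000 runs = 5047 cyc per poll;
    # 5047 cyc at 2.1 GHz = 2403 nsec per poll.
    awk 'BEGIN { busy=2104759848; runs=417000; hz=2100000000;
                 cyc=int(busy/runs); printf "%d cyc, %d nsec\n", cyc, int(cyc*1e9/hz) }'

The 0-microsecond run announced above repeats the benchmark with no poller period, so the table that follows shows a far lower per-poll cost.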
00:07:23.549 [2024-12-16T11:28:49.616Z] ====================================== 00:07:23.549 [2024-12-16T11:28:49.616Z] busy:2101490486 (cyc) 00:07:23.549 [2024-12-16T11:28:49.616Z] total_run_count: 5393000 00:07:23.549 [2024-12-16T11:28:49.616Z] tsc_hz: 2100000000 (cyc) 00:07:23.549 [2024-12-16T11:28:49.616Z] ====================================== 00:07:23.549 [2024-12-16T11:28:49.616Z] poller_cost: 389 (cyc), 185 (nsec) 00:07:23.549 00:07:23.549 real 0m1.198s 00:07:23.549 user 0m1.103s 00:07:23.549 sys 0m0.091s 00:07:23.549 12:28:49 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:23.549 12:28:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:23.549 ************************************ 00:07:23.549 END TEST thread_poller_perf 00:07:23.549 ************************************ 00:07:23.549 12:28:49 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:23.549 00:07:23.549 real 0m2.698s 00:07:23.549 user 0m2.363s 00:07:23.549 sys 0m0.347s 00:07:23.549 12:28:49 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:23.549 12:28:49 thread -- common/autotest_common.sh@10 -- # set +x 00:07:23.549 ************************************ 00:07:23.549 END TEST thread 00:07:23.549 ************************************ 00:07:23.549 12:28:49 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:23.549 12:28:49 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:23.549 12:28:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:23.549 12:28:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:23.549 12:28:49 -- common/autotest_common.sh@10 -- # set +x 00:07:23.549 ************************************ 00:07:23.549 START TEST app_cmdline 00:07:23.549 ************************************ 00:07:23.549 12:28:49 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:23.549 * Looking for test storage... 
00:07:23.549 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:23.549 12:28:49 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:23.549 12:28:49 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:07:23.549 12:28:49 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:23.549 12:28:49 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:23.549 12:28:49 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:23.549 12:28:49 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:23.549 12:28:49 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:23.549 12:28:49 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:23.549 12:28:49 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:23.549 12:28:49 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:23.549 12:28:49 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:23.549 12:28:49 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:23.549 12:28:49 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:23.549 12:28:49 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:23.549 12:28:49 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:23.549 12:28:49 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:23.549 12:28:49 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:23.549 12:28:49 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:23.549 12:28:49 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:23.549 12:28:49 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:23.549 12:28:49 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:23.549 12:28:49 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:23.549 12:28:49 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:23.549 12:28:49 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:23.549 12:28:49 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:23.549 12:28:49 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:23.549 12:28:49 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:23.549 12:28:49 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:23.549 12:28:49 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:23.549 12:28:49 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:23.549 12:28:49 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:23.549 12:28:49 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:23.549 12:28:49 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:23.549 12:28:49 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:23.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.549 --rc genhtml_branch_coverage=1 00:07:23.549 --rc genhtml_function_coverage=1 00:07:23.549 --rc genhtml_legend=1 00:07:23.549 --rc geninfo_all_blocks=1 00:07:23.549 --rc geninfo_unexecuted_blocks=1 00:07:23.549 00:07:23.549 ' 00:07:23.549 12:28:49 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:23.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.549 --rc genhtml_branch_coverage=1 00:07:23.549 --rc genhtml_function_coverage=1 00:07:23.549 --rc genhtml_legend=1 00:07:23.549 --rc geninfo_all_blocks=1 00:07:23.549 --rc geninfo_unexecuted_blocks=1 
00:07:23.549 00:07:23.549 ' 00:07:23.549 12:28:49 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:23.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.549 --rc genhtml_branch_coverage=1 00:07:23.549 --rc genhtml_function_coverage=1 00:07:23.549 --rc genhtml_legend=1 00:07:23.549 --rc geninfo_all_blocks=1 00:07:23.549 --rc geninfo_unexecuted_blocks=1 00:07:23.549 00:07:23.549 ' 00:07:23.549 12:28:49 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:23.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.549 --rc genhtml_branch_coverage=1 00:07:23.549 --rc genhtml_function_coverage=1 00:07:23.549 --rc genhtml_legend=1 00:07:23.550 --rc geninfo_all_blocks=1 00:07:23.550 --rc geninfo_unexecuted_blocks=1 00:07:23.550 00:07:23.550 ' 00:07:23.550 12:28:49 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:23.550 12:28:49 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=166958 00:07:23.550 12:28:49 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:23.550 12:28:49 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 166958 00:07:23.550 12:28:49 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 166958 ']' 00:07:23.550 12:28:49 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.550 12:28:49 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:23.550 12:28:49 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.550 12:28:49 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:23.550 12:28:49 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:23.550 [2024-12-16 12:28:49.510897] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
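This spdk_tgt instance is started with --rpcs-allowed spdk_get_version,rpc_get_methods, i.e. a whitelist: only those two methods may be invoked over /var/tmp/spdk.sock. What the test exercises, sketched with a shortened rpc.py path:

    rpc.py spdk_get_version         # allowed: returns the version JSON shown below
    rpc.py rpc_get_methods          # allowed: lists exactly the two whitelisted methods
    rpc.py env_dpdk_get_mem_stats   # blocked: fails with -32601 "Method not found"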
00:07:23.550 [2024-12-16 12:28:49.510943] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166958 ] 00:07:23.550 [2024-12-16 12:28:49.575383] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.550 [2024-12-16 12:28:49.614278] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.809 12:28:49 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:23.809 12:28:49 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:23.809 12:28:49 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:24.068 { 00:07:24.068 "version": "SPDK v24.09.1-pre git sha1 b18e1bd62", 00:07:24.068 "fields": { 00:07:24.068 "major": 24, 00:07:24.068 "minor": 9, 00:07:24.068 "patch": 1, 00:07:24.068 "suffix": "-pre", 00:07:24.068 "commit": "b18e1bd62" 00:07:24.068 } 00:07:24.068 } 00:07:24.068 12:28:50 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:24.068 12:28:50 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:24.068 12:28:50 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:24.068 12:28:50 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:24.068 12:28:50 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:24.068 12:28:50 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:24.068 12:28:50 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.068 12:28:50 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:24.069 12:28:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:24.069 12:28:50 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.069 12:28:50 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:24.069 12:28:50 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:24.069 12:28:50 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:24.069 12:28:50 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:24.069 12:28:50 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:24.069 12:28:50 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:24.069 12:28:50 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.069 12:28:50 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:24.069 12:28:50 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.069 12:28:50 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:24.069 12:28:50 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.069 12:28:50 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:24.069 12:28:50 app_cmdline -- common/autotest_common.sh@644 
-- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:24.069 12:28:50 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:24.328 request: 00:07:24.328 { 00:07:24.328 "method": "env_dpdk_get_mem_stats", 00:07:24.328 "req_id": 1 00:07:24.328 } 00:07:24.328 Got JSON-RPC error response 00:07:24.328 response: 00:07:24.328 { 00:07:24.328 "code": -32601, 00:07:24.328 "message": "Method not found" 00:07:24.328 } 00:07:24.328 12:28:50 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:24.328 12:28:50 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:24.328 12:28:50 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:24.328 12:28:50 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:24.328 12:28:50 app_cmdline -- app/cmdline.sh@1 -- # killprocess 166958 00:07:24.328 12:28:50 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 166958 ']' 00:07:24.328 12:28:50 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 166958 00:07:24.328 12:28:50 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:24.328 12:28:50 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:24.328 12:28:50 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 166958 00:07:24.328 12:28:50 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:24.328 12:28:50 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:24.328 12:28:50 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 166958' 00:07:24.328 killing process with pid 166958 00:07:24.328 12:28:50 app_cmdline -- common/autotest_common.sh@969 -- # kill 166958 00:07:24.328 12:28:50 app_cmdline -- common/autotest_common.sh@974 -- # wait 166958 00:07:24.897 00:07:24.897 real 0m1.377s 00:07:24.897 user 0m1.611s 00:07:24.897 sys 0m0.457s 00:07:24.897 12:28:50 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:24.897 12:28:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:24.897 ************************************ 00:07:24.897 END TEST app_cmdline 00:07:24.897 ************************************ 00:07:24.897 12:28:50 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:24.897 12:28:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:24.897 12:28:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:24.897 12:28:50 -- common/autotest_common.sh@10 -- # set +x 00:07:24.897 ************************************ 00:07:24.897 START TEST version 00:07:24.897 ************************************ 00:07:24.897 12:28:50 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:24.897 * Looking for test storage... 
00:07:24.897 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:24.897 12:28:50 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:24.897 12:28:50 version -- common/autotest_common.sh@1681 -- # lcov --version 00:07:24.897 12:28:50 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:24.897 12:28:50 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:24.897 12:28:50 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:24.897 12:28:50 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:24.897 12:28:50 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:24.897 12:28:50 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:24.897 12:28:50 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:24.897 12:28:50 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:24.897 12:28:50 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:24.897 12:28:50 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:24.897 12:28:50 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:24.897 12:28:50 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:24.897 12:28:50 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:24.897 12:28:50 version -- scripts/common.sh@344 -- # case "$op" in 00:07:24.897 12:28:50 version -- scripts/common.sh@345 -- # : 1 00:07:24.897 12:28:50 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:24.897 12:28:50 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:24.897 12:28:50 version -- scripts/common.sh@365 -- # decimal 1 00:07:24.897 12:28:50 version -- scripts/common.sh@353 -- # local d=1 00:07:24.897 12:28:50 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:24.897 12:28:50 version -- scripts/common.sh@355 -- # echo 1 00:07:24.897 12:28:50 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:24.897 12:28:50 version -- scripts/common.sh@366 -- # decimal 2 00:07:24.897 12:28:50 version -- scripts/common.sh@353 -- # local d=2 00:07:24.897 12:28:50 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:24.897 12:28:50 version -- scripts/common.sh@355 -- # echo 2 00:07:24.897 12:28:50 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:24.897 12:28:50 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:24.897 12:28:50 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:24.897 12:28:50 version -- scripts/common.sh@368 -- # return 0 00:07:24.897 12:28:50 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:24.897 12:28:50 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:24.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.897 --rc genhtml_branch_coverage=1 00:07:24.897 --rc genhtml_function_coverage=1 00:07:24.897 --rc genhtml_legend=1 00:07:24.897 --rc geninfo_all_blocks=1 00:07:24.897 --rc geninfo_unexecuted_blocks=1 00:07:24.897 00:07:24.897 ' 00:07:24.897 12:28:50 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:24.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.897 --rc genhtml_branch_coverage=1 00:07:24.897 --rc genhtml_function_coverage=1 00:07:24.897 --rc genhtml_legend=1 00:07:24.898 --rc geninfo_all_blocks=1 00:07:24.898 --rc geninfo_unexecuted_blocks=1 00:07:24.898 00:07:24.898 ' 00:07:24.898 12:28:50 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:24.898 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.898 --rc genhtml_branch_coverage=1 00:07:24.898 --rc genhtml_function_coverage=1 00:07:24.898 --rc genhtml_legend=1 00:07:24.898 --rc geninfo_all_blocks=1 00:07:24.898 --rc geninfo_unexecuted_blocks=1 00:07:24.898 00:07:24.898 ' 00:07:24.898 12:28:50 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:24.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.898 --rc genhtml_branch_coverage=1 00:07:24.898 --rc genhtml_function_coverage=1 00:07:24.898 --rc genhtml_legend=1 00:07:24.898 --rc geninfo_all_blocks=1 00:07:24.898 --rc geninfo_unexecuted_blocks=1 00:07:24.898 00:07:24.898 ' 00:07:24.898 12:28:50 version -- app/version.sh@17 -- # get_header_version major 00:07:24.898 12:28:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:24.898 12:28:50 version -- app/version.sh@14 -- # cut -f2 00:07:24.898 12:28:50 version -- app/version.sh@14 -- # tr -d '"' 00:07:24.898 12:28:50 version -- app/version.sh@17 -- # major=24 00:07:24.898 12:28:50 version -- app/version.sh@18 -- # get_header_version minor 00:07:24.898 12:28:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:24.898 12:28:50 version -- app/version.sh@14 -- # cut -f2 00:07:24.898 12:28:50 version -- app/version.sh@14 -- # tr -d '"' 00:07:24.898 12:28:50 version -- app/version.sh@18 -- # minor=9 00:07:24.898 12:28:50 version -- app/version.sh@19 -- # get_header_version patch 00:07:24.898 12:28:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:24.898 12:28:50 version -- app/version.sh@14 -- # cut -f2 00:07:24.898 12:28:50 version -- app/version.sh@14 -- # tr -d '"' 00:07:24.898 12:28:50 version -- app/version.sh@19 -- # patch=1 00:07:24.898 12:28:50 version -- app/version.sh@20 -- # get_header_version suffix 00:07:24.898 12:28:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:24.898 12:28:50 version -- app/version.sh@14 -- # cut -f2 00:07:24.898 12:28:50 version -- app/version.sh@14 -- # tr -d '"' 00:07:24.898 12:28:50 version -- app/version.sh@20 -- # suffix=-pre 00:07:24.898 12:28:50 version -- app/version.sh@22 -- # version=24.9 00:07:24.898 12:28:50 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:24.898 12:28:50 version -- app/version.sh@25 -- # version=24.9.1 00:07:24.898 12:28:50 version -- app/version.sh@28 -- # version=24.9.1rc0 00:07:24.898 12:28:50 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:24.898 12:28:50 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:25.158 12:28:50 version -- app/version.sh@30 -- # py_version=24.9.1rc0 00:07:25.158 12:28:50 version -- app/version.sh@31 -- # [[ 24.9.1rc0 == \2\4\.\9\.\1\r\c\0 ]] 00:07:25.158 00:07:25.158 real 0m0.242s 00:07:25.158 user 0m0.146s 00:07:25.158 sys 0m0.139s 00:07:25.158 12:28:50 
version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.158 12:28:50 version -- common/autotest_common.sh@10 -- # set +x 00:07:25.158 ************************************ 00:07:25.158 END TEST version 00:07:25.158 ************************************ 00:07:25.158 12:28:51 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:25.158 12:28:51 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:25.158 12:28:51 -- spdk/autotest.sh@194 -- # uname -s 00:07:25.158 12:28:51 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:25.158 12:28:51 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:25.158 12:28:51 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:25.158 12:28:51 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:25.158 12:28:51 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:07:25.158 12:28:51 -- spdk/autotest.sh@256 -- # timing_exit lib 00:07:25.158 12:28:51 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:25.158 12:28:51 -- common/autotest_common.sh@10 -- # set +x 00:07:25.158 12:28:51 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:07:25.158 12:28:51 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:07:25.158 12:28:51 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:07:25.158 12:28:51 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:07:25.158 12:28:51 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:07:25.158 12:28:51 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:07:25.158 12:28:51 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:25.158 12:28:51 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:25.158 12:28:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.158 12:28:51 -- common/autotest_common.sh@10 -- # set +x 00:07:25.158 ************************************ 00:07:25.158 START TEST nvmf_tcp 00:07:25.158 ************************************ 00:07:25.158 12:28:51 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:25.158 * Looking for test storage... 
00:07:25.158 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:25.158 12:28:51 nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:25.158 12:28:51 nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:07:25.158 12:28:51 nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:25.418 12:28:51 nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:25.418 12:28:51 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:25.418 12:28:51 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:25.418 12:28:51 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:25.418 12:28:51 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:25.418 12:28:51 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:25.418 12:28:51 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:25.418 12:28:51 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:25.418 12:28:51 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:25.418 12:28:51 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:25.418 12:28:51 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:25.418 12:28:51 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:25.418 12:28:51 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:25.418 12:28:51 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:25.418 12:28:51 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:25.418 12:28:51 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:25.418 12:28:51 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:25.418 12:28:51 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:25.418 12:28:51 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:25.418 12:28:51 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:25.418 12:28:51 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:25.418 12:28:51 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:25.418 12:28:51 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:25.418 12:28:51 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:25.418 12:28:51 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:25.418 12:28:51 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:25.418 12:28:51 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:25.418 12:28:51 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:25.418 12:28:51 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:25.418 12:28:51 nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:25.418 12:28:51 nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:25.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.418 --rc genhtml_branch_coverage=1 00:07:25.418 --rc genhtml_function_coverage=1 00:07:25.418 --rc genhtml_legend=1 00:07:25.418 --rc geninfo_all_blocks=1 00:07:25.418 --rc geninfo_unexecuted_blocks=1 00:07:25.418 00:07:25.418 ' 00:07:25.418 12:28:51 nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:25.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.418 --rc genhtml_branch_coverage=1 00:07:25.418 --rc genhtml_function_coverage=1 00:07:25.418 --rc genhtml_legend=1 00:07:25.418 --rc geninfo_all_blocks=1 00:07:25.418 --rc geninfo_unexecuted_blocks=1 00:07:25.418 00:07:25.418 ' 00:07:25.418 12:28:51 nvmf_tcp -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:07:25.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.418 --rc genhtml_branch_coverage=1 00:07:25.418 --rc genhtml_function_coverage=1 00:07:25.418 --rc genhtml_legend=1 00:07:25.418 --rc geninfo_all_blocks=1 00:07:25.418 --rc geninfo_unexecuted_blocks=1 00:07:25.418 00:07:25.418 ' 00:07:25.418 12:28:51 nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:25.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.418 --rc genhtml_branch_coverage=1 00:07:25.418 --rc genhtml_function_coverage=1 00:07:25.418 --rc genhtml_legend=1 00:07:25.418 --rc geninfo_all_blocks=1 00:07:25.418 --rc geninfo_unexecuted_blocks=1 00:07:25.418 00:07:25.418 ' 00:07:25.418 12:28:51 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:25.418 12:28:51 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:25.418 12:28:51 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:25.418 12:28:51 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:25.418 12:28:51 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.418 12:28:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:25.418 ************************************ 00:07:25.418 START TEST nvmf_target_core 00:07:25.418 ************************************ 00:07:25.418 12:28:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:25.418 * Looking for test storage... 00:07:25.418 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:25.418 12:28:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:25.418 12:28:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lcov --version 00:07:25.418 12:28:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:25.418 12:28:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:25.418 12:28:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:25.418 12:28:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:25.418 12:28:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:25.418 12:28:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:25.418 12:28:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:25.418 12:28:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:25.418 12:28:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:25.418 12:28:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:25.418 12:28:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:25.418 12:28:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:25.418 12:28:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:25.418 12:28:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:25.418 12:28:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:25.418 12:28:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:25.418 12:28:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:25.418 12:28:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:25.418 12:28:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:25.418 12:28:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:25.418 12:28:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:25.418 12:28:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:25.418 12:28:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:25.418 12:28:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:25.418 12:28:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:25.418 12:28:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:25.418 12:28:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:25.418 12:28:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:25.418 12:28:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:25.418 12:28:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:25.418 12:28:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:25.418 12:28:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:25.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.418 --rc genhtml_branch_coverage=1 00:07:25.418 --rc genhtml_function_coverage=1 00:07:25.418 --rc genhtml_legend=1 00:07:25.418 --rc geninfo_all_blocks=1 00:07:25.418 --rc geninfo_unexecuted_blocks=1 00:07:25.419 00:07:25.419 ' 00:07:25.419 12:28:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:25.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.419 --rc genhtml_branch_coverage=1 00:07:25.419 --rc genhtml_function_coverage=1 00:07:25.419 --rc genhtml_legend=1 00:07:25.419 --rc geninfo_all_blocks=1 00:07:25.419 --rc geninfo_unexecuted_blocks=1 00:07:25.419 00:07:25.419 ' 00:07:25.419 12:28:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:25.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.419 --rc genhtml_branch_coverage=1 00:07:25.419 --rc genhtml_function_coverage=1 00:07:25.419 --rc genhtml_legend=1 00:07:25.419 --rc geninfo_all_blocks=1 00:07:25.419 --rc geninfo_unexecuted_blocks=1 00:07:25.419 00:07:25.419 ' 00:07:25.419 12:28:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:25.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.419 --rc genhtml_branch_coverage=1 00:07:25.419 --rc genhtml_function_coverage=1 00:07:25.419 --rc genhtml_legend=1 00:07:25.419 --rc geninfo_all_blocks=1 00:07:25.419 --rc geninfo_unexecuted_blocks=1 00:07:25.419 00:07:25.419 ' 00:07:25.419 12:28:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:25.419 12:28:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:25.419 12:28:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:25.419 12:28:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:25.419 12:28:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:25.419 12:28:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:25.419 12:28:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:25.419 12:28:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:25.419 12:28:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:25.419 12:28:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:25.419 12:28:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:25.419 12:28:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:25.419 12:28:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:25.419 12:28:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:25.679 12:28:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:07:25.679 12:28:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:07:25.679 12:28:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:25.679 12:28:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:25.679 12:28:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:25.679 12:28:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:25.679 12:28:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:25.679 12:28:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:25.679 12:28:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:25.679 12:28:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:25.679 12:28:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:25.679 12:28:51 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.679 12:28:51 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.679 12:28:51 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.679 12:28:51 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:25.679 12:28:51 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.679 12:28:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:25.679 12:28:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:25.679 12:28:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:25.679 12:28:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:25.679 12:28:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:25.679 12:28:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:25.679 12:28:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:25.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:25.679 12:28:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:25.679 12:28:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:25.679 12:28:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:25.679 12:28:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:25.679 12:28:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:25.679 12:28:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:25.679 12:28:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:25.679 12:28:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:25.679 12:28:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:25.680 
************************************ 00:07:25.680 START TEST nvmf_abort 00:07:25.680 ************************************ 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:25.680 * Looking for test storage... 00:07:25.680 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:25.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.680 --rc genhtml_branch_coverage=1 00:07:25.680 --rc genhtml_function_coverage=1 00:07:25.680 --rc genhtml_legend=1 00:07:25.680 --rc geninfo_all_blocks=1 00:07:25.680 --rc geninfo_unexecuted_blocks=1 00:07:25.680 00:07:25.680 ' 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:25.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.680 --rc genhtml_branch_coverage=1 00:07:25.680 --rc genhtml_function_coverage=1 00:07:25.680 --rc genhtml_legend=1 00:07:25.680 --rc geninfo_all_blocks=1 00:07:25.680 --rc geninfo_unexecuted_blocks=1 00:07:25.680 00:07:25.680 ' 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:25.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.680 --rc genhtml_branch_coverage=1 00:07:25.680 --rc genhtml_function_coverage=1 00:07:25.680 --rc genhtml_legend=1 00:07:25.680 --rc geninfo_all_blocks=1 00:07:25.680 --rc geninfo_unexecuted_blocks=1 00:07:25.680 00:07:25.680 ' 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:25.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.680 --rc genhtml_branch_coverage=1 00:07:25.680 --rc genhtml_function_coverage=1 00:07:25.680 --rc genhtml_legend=1 00:07:25.680 --rc geninfo_all_blocks=1 00:07:25.680 --rc geninfo_unexecuted_blocks=1 00:07:25.680 00:07:25.680 ' 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:25.680 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
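The "lt 1.15 2" walk traced above is scripts/common.sh deciding that the installed lcov predates 2.x, which selects the old --rc lcov_branch_coverage=1 option spelling captured in LCOV_OPTS. The helper splits both versions on ., - and :, then compares field by field, padding the shorter version with zeros; condensed into one function it looks roughly like this (a sketch of the traced logic, assuming purely numeric fields):

    version_lt() {   # returns 0 when $1 < $2; numeric dot/dash fields assumed
        local -a v1 v2
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for ((i = 0; i < max; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal is not "less than"
    }
    version_lt "$(lcov --version | awk '{print $NF}')" 2 &&
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'  # genhtml --rc flags trimmed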
00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:07:25.680 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:25.681 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # prepare_net_devs 00:07:25.681 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@434 -- # local -g is_hw=no 00:07:25.681 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # remove_spdk_ns 00:07:25.681 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:25.681 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:25.681 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:25.941 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:07:25.941 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:07:25.941 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:07:25.941 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:32.526 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:32.526 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:07:32.526 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:32.526 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:32.526 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:32.526 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:32.526 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:32.526 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:07:32.526 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:32.526 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:07:32.526 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:07:32.526 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:07:32.526 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:07:32.526 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:07:32.526 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:07:32.526 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:32.526 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:32.526 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:32.526 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:32.526 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:32.526 12:28:57 
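Cleanup in these tests hangs off a single trap that is swapped as more state exists to tear down; since the last installed handler wins, each statement below replaces the previous one. All four lines are lifted from this trace:

    trap 'exit 1' SIGINT SIGTERM EXIT       # nvmf_target_core.sh: fail fast before any setup
    trap nvmftestfini SIGINT SIGTERM EXIT   # nvmftestinit: guarantee network teardown
    trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
                                            # widened once nvmf_tgt runs, so a crash still
                                            # captures the app's shared memory for debug
    trap - SIGINT SIGTERM EXIT              # cleared on success; nvmftestfini then runs
                                            # explicitly rather than via the trap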
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:32.526 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:32.526 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:32.526 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:32.526 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:32.526 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:32.526 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:07:32.526 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:07:32.526 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:07:32.526 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:07:32.526 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:07:32.526 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:07:32.526 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:32.526 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:32.526 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:32.526 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:07:32.526 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:07:32.526 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:32.526 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:32.526 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:32.526 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:32.526 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:32.526 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:32.527 12:28:57 
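gather_supported_nvmf_pci_devs, traced above, buckets PCI functions into e810/x722/mlx arrays by Intel (0x8086) and Mellanox (0x15b3) device IDs and keeps the e810 set for this run, since both ports probe as 0x159b under the ice driver. The same bucketing as a plain lookup table — an illustrative sketch only, since the real script fills the arrays from a pci_bus_cache built during PCI enumeration:

    declare -A nic_family=(                           # vendor:device -> family
        [0x8086:0x1592]=e810 [0x8086:0x159b]=e810     # Intel E810 (ice)
        [0x8086:0x37d2]=x722                          # Intel X722
        [0x15b3:0x1017]=mlx [0x15b3:0x101d]=mlx       # Mellanox (subset of the IDs probed)
    )
    for fn in 0 1; do                                 # the two ports found on this rig
        echo "Found 0000:af:00.$fn (0x8086 - 0x159b) -> ${nic_family[0x8086:0x159b]}"
    done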
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:32.527 Found net devices under 0000:af:00.0: cvl_0_0 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:32.527 Found net devices under 0000:af:00.1: cvl_0_1 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # is_hw=yes 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:32.527 12:28:57 
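Each surviving function is then mapped to its kernel interface by globbing sysfs — the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) step above — and only interfaces reporting up are kept (the trace shows the already-expanded [[ up == up ]]; reading operstate is the editor's assumption about where that value comes from). Standalone:

    pci=0000:af:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)     # e.g. .../net/cvl_0_0
    [[ -e ${pci_net_devs[0]} ]] || { echo "no netdev bound to $pci" >&2; exit 1; }
    pci_net_devs=("${pci_net_devs[@]##*/}")              # strip the path, keep cvl_0_0
    cat "/sys/class/net/${pci_net_devs[0]}/operstate"    # "up" on this rig
    echo "Found net devices under $pci: ${pci_net_devs[*]}"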
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:32.527 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:32.527 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.490 ms 00:07:32.527 00:07:32.527 --- 10.0.0.2 ping statistics --- 00:07:32.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.527 rtt min/avg/max/mdev = 0.490/0.490/0.490/0.000 ms 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:32.527 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:32.527 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:07:32.527 00:07:32.527 --- 10.0.0.1 ping statistics --- 00:07:32.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.527 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # return 0 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # nvmfpid=170604 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # waitforlisten 170604 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 170604 ']' 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:32.527 [2024-12-16 12:28:57.788867] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
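nvmf_tcp_init, traced above, wires target and initiator together on one host through a network namespace instead of two machines: cvl_0_0 moves into cvl_0_0_ns_spdk carrying the target address 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24, and the firewall rule is tagged with an SPDK_NVMF comment so teardown can later strip exactly what setup added. The same steps replayed from the trace (run as root):

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # root ns reaches the namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # and back

The sub-millisecond round trips in the ping output above confirm the topology before the target is started inside the namespace.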
00:07:32.527 [2024-12-16 12:28:57.788908] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:32.527 [2024-12-16 12:28:57.859900] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:32.527 [2024-12-16 12:28:57.900944] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:32.527 [2024-12-16 12:28:57.900982] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:32.527 [2024-12-16 12:28:57.900992] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:32.527 [2024-12-16 12:28:57.900997] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:32.527 [2024-12-16 12:28:57.901002] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:32.527 [2024-12-16 12:28:57.901132] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:32.527 [2024-12-16 12:28:57.901236] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.527 [2024-12-16 12:28:57.901238] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:32.527 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:32.527 12:28:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:32.527 12:28:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:32.527 12:28:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.527 12:28:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:32.527 [2024-12-16 12:28:58.031814] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:32.527 12:28:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.527 12:28:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:32.527 12:28:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.527 12:28:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:32.527 Malloc0 00:07:32.528 12:28:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.528 12:28:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:32.528 12:28:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.528 12:28:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:32.528 Delay0 
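The backing store the abort test exports is deliberately slow: a 64 MiB, 4 KiB-block malloc bdev wrapped in a delay bdev, per the two RPCs traced above. As plain rpc.py invocations (paths relative to the spdk checkout; the reading of the -r/-t/-w/-n flags in the comment is the editor's, the values are from the trace):

    scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0   # 64 MiB bdev, 4 KiB blocks
    scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
    # -r/-w average and -t/-n p99 read/write latency, in microseconds: roughly one
    # second injected per I/O, so commands stay queued long enough that the abort
    # example has outstanding work to cancel.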
00:07:32.528 12:28:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.528 12:28:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:32.528 12:28:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.528 12:28:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:32.528 12:28:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.528 12:28:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:32.528 12:28:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.528 12:28:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:32.528 12:28:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.528 12:28:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:32.528 12:28:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.528 12:28:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:32.528 [2024-12-16 12:28:58.103433] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:32.528 12:28:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.528 12:28:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:32.528 12:28:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.528 12:28:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:32.528 12:28:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.528 12:28:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:32.528 [2024-12-16 12:28:58.220881] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:34.436 Initializing NVMe Controllers 00:07:34.436 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:34.436 controller IO queue size 128 less than required 00:07:34.436 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:34.436 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:34.436 Initialization complete. Launching workers. 
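End to end, the test assembles the target over RPC and then drives it with SPDK's bundled abort example; the sequence below replays the traced commands as a standalone script against the already-running nvmf_tgt:

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0   # -a: allow any host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128        # one core, 1 s run, queue depth 128

In the summary that follows, "abort submitted 38664, failed to submit 62" counts the abort commands themselves; the 57 reported unsuccessful are presumably aborts that raced with I/O that had already completed, which the test tolerates.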
00:07:34.436 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 38603 00:07:34.436 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 38664, failed to submit 62 00:07:34.436 success 38607, unsuccessful 57, failed 0 00:07:34.436 12:29:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:34.436 12:29:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.436 12:29:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:34.436 12:29:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.436 12:29:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:34.436 12:29:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:34.436 12:29:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # nvmfcleanup 00:07:34.436 12:29:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:07:34.436 12:29:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:34.436 12:29:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:07:34.436 12:29:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:34.436 12:29:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:34.436 rmmod nvme_tcp 00:07:34.436 rmmod nvme_fabrics 00:07:34.436 rmmod nvme_keyring 00:07:34.436 12:29:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:34.436 12:29:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:07:34.436 12:29:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:07:34.436 12:29:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@513 -- # '[' -n 170604 ']' 00:07:34.436 12:29:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # killprocess 170604 00:07:34.436 12:29:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 170604 ']' 00:07:34.436 12:29:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 170604 00:07:34.436 12:29:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:07:34.436 12:29:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:34.436 12:29:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 170604 00:07:34.436 12:29:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:34.436 12:29:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:34.436 12:29:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 170604' 00:07:34.436 killing process with pid 170604 00:07:34.436 12:29:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 170604 00:07:34.436 12:29:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 170604 00:07:34.696 12:29:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:07:34.696 12:29:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:07:34.696 12:29:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:07:34.696 12:29:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:07:34.696 12:29:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@787 -- # iptables-save 00:07:34.696 12:29:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:07:34.696 12:29:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@787 -- # iptables-restore 00:07:34.696 12:29:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:34.696 12:29:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:34.696 12:29:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:34.696 12:29:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:34.696 12:29:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:37.235 00:07:37.235 real 0m11.178s 00:07:37.235 user 0m11.682s 00:07:37.235 sys 0m5.178s 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:37.235 ************************************ 00:07:37.235 END TEST nvmf_abort 00:07:37.235 ************************************ 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:37.235 ************************************ 00:07:37.235 START TEST nvmf_ns_hotplug_stress 00:07:37.235 ************************************ 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:37.235 * Looking for test storage... 
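Teardown for the abort test just completed (nvmftestfini, traced above) unwinds setup in reverse: sync and unload the kernel initiator modules (the rmmod lines), kill and reap the target, restore iptables minus anything tagged SPDK_NVMF, and dismantle the namespace. Condensed, with the one inference flagged inline:

    sync
    modprobe -v -r nvme-tcp                               # also drags out nvme_fabrics/nvme_keyring
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"                    # killprocess 170604 in this run
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # strip only the rule setup tagged
    ip netns delete cvl_0_0_ns_spdk                       # presumed body of _remove_spdk_ns,
                                                          # whose output the trace silences
    ip -4 addr flush cvl_0_1                              # drop 10.0.0.1/24 from the initiator port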
00:07:37.235 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:37.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.235 --rc genhtml_branch_coverage=1 00:07:37.235 --rc genhtml_function_coverage=1 00:07:37.235 --rc genhtml_legend=1 00:07:37.235 --rc geninfo_all_blocks=1 00:07:37.235 --rc geninfo_unexecuted_blocks=1 00:07:37.235 00:07:37.235 ' 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:37.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.235 --rc genhtml_branch_coverage=1 00:07:37.235 --rc genhtml_function_coverage=1 00:07:37.235 --rc genhtml_legend=1 00:07:37.235 --rc geninfo_all_blocks=1 00:07:37.235 --rc geninfo_unexecuted_blocks=1 00:07:37.235 00:07:37.235 ' 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:37.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.235 --rc genhtml_branch_coverage=1 00:07:37.235 --rc genhtml_function_coverage=1 00:07:37.235 --rc genhtml_legend=1 00:07:37.235 --rc geninfo_all_blocks=1 00:07:37.235 --rc geninfo_unexecuted_blocks=1 00:07:37.235 00:07:37.235 ' 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:37.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.235 --rc genhtml_branch_coverage=1 00:07:37.235 --rc genhtml_function_coverage=1 00:07:37.235 --rc genhtml_legend=1 00:07:37.235 --rc geninfo_all_blocks=1 00:07:37.235 --rc geninfo_unexecuted_blocks=1 00:07:37.235 00:07:37.235 ' 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:37.235 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:37.236 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:37.236 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:37.236 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:37.236 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:37.236 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:37.236 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:37.236 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:37.236 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:07:37.236 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:37.236 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # prepare_net_devs 00:07:37.236 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@434 -- # local -g is_hw=no 00:07:37.236 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # remove_spdk_ns 00:07:37.236 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:37.236 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:37.236 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:37.236 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:07:37.236 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:07:37.236 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:07:37.236 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:43.816 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:43.816 12:29:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:43.816 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:43.816 Found net devices under 0000:af:00.0: cvl_0_0 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
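The device-discovery walk traced above boils down to: build lists of known NIC PCI IDs (Intel E810 0x1592/0x159b, X722 0x37d2, a set of Mellanox ConnectX IDs), intersect them with what is on the bus, then read the kernel netdev names out of sysfs. In outline, with names as they appear in the trace (pci_bus_cache is populated by an earlier scan in common.sh that this excerpt does not show):

    declare -A pci_bus_cache        # filled elsewhere: "vendor:device" -> BDF list
    intel=0x8086 mellanox=0x15b3
    e810+=(${pci_bus_cache["$intel:0x159b"]})          # both 0000:af:00.x ports match
    pci_devs=("${e810[@]}")
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        net_devs+=("${pci_net_devs[@]##*/}")           # yields cvl_0_0, cvl_0_1
    done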
nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:43.816 Found net devices under 0000:af:00.1: cvl_0_1 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # is_hw=yes 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:43.816 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:43.817 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:43.817 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:43.817 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:43.817 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:43.817 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:43.817 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:43.817 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:43.817 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:43.817 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:43.817 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:43.817 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:43.817 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:43.817 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:43.817 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:43.817 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:43.817 12:29:08 
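Condensed, the nvmf_tcp_init commands traced above build the loopback-style topology for the run: target port cvl_0_0 moves into namespace cvl_0_0_ns_spdk as 10.0.0.2/24, initiator port cvl_0_1 stays in the root namespace as 10.0.0.1/24 (all names and addresses as logged):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up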
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:43.817 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:43.817 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:43.817 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:43.817 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.417 ms 00:07:43.817 00:07:43.817 --- 10.0.0.2 ping statistics --- 00:07:43.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:43.817 rtt min/avg/max/mdev = 0.417/0.417/0.417/0.000 ms 00:07:43.817 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:43.817 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:43.817 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:07:43.817 00:07:43.817 --- 10.0.0.1 ping statistics --- 00:07:43.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:43.817 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:07:43.817 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:43.817 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # return 0 00:07:43.817 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:07:43.817 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:43.817 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:07:43.817 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:07:43.817 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:43.817 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:07:43.817 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:07:43.817 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:43.817 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:07:43.817 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:43.817 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:43.817 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # nvmfpid=174591 00:07:43.817 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:43.817 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # waitforlisten 174591 00:07:43.817 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 174591 ']' 00:07:43.817 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
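ipts is common.sh's iptables wrapper: it tags each rule with the SPDK_NVMF comment visible at @786 so teardown can find and delete it later. The firewall rule, the two-way reachability check, and the target launch reduce to the following (binary path and comment text shortened from the trace; -m 0xE pins the target to cores 1-3, matching the reactor NOTICEs below):

    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:...'
    ping -c 1 10.0.0.2                                 # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns
    ip netns exec cvl_0_0_ns_spdk \
        build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &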
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.817 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:43.817 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.817 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:43.817 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:43.817 [2024-12-16 12:29:08.990149] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:43.817 [2024-12-16 12:29:08.990195] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:43.817 [2024-12-16 12:29:09.062131] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:43.817 [2024-12-16 12:29:09.101653] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:43.817 [2024-12-16 12:29:09.101688] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:43.817 [2024-12-16 12:29:09.101696] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:43.817 [2024-12-16 12:29:09.101702] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:43.817 [2024-12-16 12:29:09.101707] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
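waitforlisten's internals are not traced here beyond max_retries=100 and the "Waiting for process..." banner; a hypothetical stand-in for what it waits on is simply polling the RPC socket until the new target (nvmfpid=174591) answers:

    # hypothetical equivalent of waitforlisten: poll until the RPC socket responds;
    # the real helper also checks pid liveness and gives up after max_retries
    while ! scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        sleep 0.1
    done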
00:07:43.817 [2024-12-16 12:29:09.101832] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:43.817 [2024-12-16 12:29:09.101938] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:43.817 [2024-12-16 12:29:09.101940] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:43.817 12:29:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:43.817 12:29:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:07:43.817 12:29:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:07:43.817 12:29:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:43.817 12:29:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:43.817 12:29:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:43.817 12:29:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:43.817 12:29:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:43.817 [2024-12-16 12:29:09.401066] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:43.817 12:29:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:43.817 12:29:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:43.817 [2024-12-16 12:29:09.812738] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:43.817 12:29:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:44.076 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:44.336 Malloc0 00:07:44.336 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:44.595 Delay0 00:07:44.595 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.595 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:44.854 NULL1 00:07:44.854 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
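Stripped of timestamps and absolute paths, the bring-up traced above (steps @27-@36 of ns_hotplug_stress.sh) is this RPC sequence; Delay0 layers configurable latency over Malloc0 so that namespace removal races slow in-flight I/O, and NULL1 is the bdev the loop below will live-resize:

    rpc_py=scripts/rpc.py
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc_py bdev_malloc_create 32 512 -b Malloc0
    $rpc_py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $rpc_py bdev_null_create NULL1 1000 512
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1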
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:45.113 12:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=175067 00:07:45.113 12:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:45.113 12:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 175067 00:07:45.113 12:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.373 Read completed with error (sct=0, sc=11) 00:07:45.373 12:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.373 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.373 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.373 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.373 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.373 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.632 12:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:45.632 12:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:45.632 true 00:07:45.632 12:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 175067 00:07:45.632 12:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.570 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.570 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.830 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:46.830 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:46.830 true 00:07:46.830 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 175067 00:07:46.830 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.089 12:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.348 
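From @40 onward the stress loop proper runs: a 30-second spdk_nvme_perf randread job is launched against the subsystem, and for as long as it stays alive (kill -0 $PERF_PID) the script hot-removes namespace 1, re-adds Delay0, and grows NULL1 by one block per pass (null_size 1001, 1002, ... in the iterations that follow). The control flow is reconstructed from the trace, so treat it as a sketch; perf binary path shortened:

    build/bin/spdk_nvme_perf -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!
    null_size=1000
    while kill -0 $PERF_PID; do
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        $rpc_py bdev_null_resize NULL1 $null_size
    done

The suppressed "Read completed with error (sct=0, sc=11)" messages fit this picture: sc=11 decimal is NVMe generic status 0x0b, Invalid Namespace or Format, which is what reads in flight against the just-removed namespace should return.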
12:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:47.348 12:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:47.607 true 00:07:47.607 12:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 175067 00:07:47.607 12:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.607 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.607 12:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.867 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.867 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.867 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.867 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.867 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.867 12:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:47.867 12:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:48.126 true 00:07:48.126 12:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 175067 00:07:48.126 12:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.064 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:49.064 12:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.064 12:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:49.064 12:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:49.323 true 00:07:49.323 12:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 175067 00:07:49.323 12:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.581 12:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.841 12:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:49.841 12:29:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:49.841 true 00:07:49.841 12:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 175067 00:07:49.841 12:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.219 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:51.219 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.219 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:51.219 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:51.219 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:51.219 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:51.219 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:51.219 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:51.219 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:51.219 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:51.478 true 00:07:51.478 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 175067 00:07:51.478 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.413 12:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.413 12:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:52.413 12:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:52.673 true 00:07:52.673 12:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 175067 00:07:52.673 12:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.932 12:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.191 12:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:53.191 12:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1009 00:07:53.191 true 00:07:53.191 12:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 175067 00:07:53.191 12:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.567 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:54.567 12:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.567 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:54.567 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:54.567 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:54.567 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:54.567 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:54.568 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:54.826 12:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:54.826 12:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:54.826 true 00:07:54.826 12:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 175067 00:07:54.826 12:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.763 12:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.763 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.022 12:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:56.022 12:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:56.022 true 00:07:56.022 12:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 175067 00:07:56.022 12:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.281 12:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.540 12:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:56.540 12:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:56.799 true 00:07:56.799 12:29:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 175067 00:07:56.799 12:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.737 12:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.737 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:57.996 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:57.996 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:57.996 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:57.996 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:57.996 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:57.996 12:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:57.996 12:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:58.255 true 00:07:58.255 12:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 175067 00:07:58.255 12:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.193 12:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.193 12:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:59.193 12:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:59.456 true 00:07:59.456 12:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 175067 00:07:59.456 12:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.715 12:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.973 12:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:59.973 12:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:59.973 true 00:07:59.973 12:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 175067 00:07:59.973 12:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.351 12:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.351 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:01.351 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:01.351 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:01.351 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:01.351 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:01.351 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:01.351 12:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:01.351 12:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:01.610 true 00:08:01.610 12:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 175067 00:08:01.610 12:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.548 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:02.548 12:29:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.548 12:29:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:02.548 12:29:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:02.808 true 00:08:02.808 12:29:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 175067 00:08:02.808 12:29:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.067 12:29:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.067 12:29:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:03.067 12:29:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:03.326 true 00:08:03.326 12:29:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 175067 00:08:03.326 12:29:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.705 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:08:04.705 12:29:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.705 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:04.705 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:04.705 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:04.705 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:04.705 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:04.705 12:29:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:04.705 12:29:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:04.964 true 00:08:04.964 12:29:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 175067 00:08:04.964 12:29:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.901 12:29:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.901 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:05.901 12:29:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:05.901 12:29:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:06.160 true 00:08:06.160 12:29:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 175067 00:08:06.160 12:29:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.418 12:29:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.418 12:29:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:06.418 12:29:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:06.676 true 00:08:06.676 12:29:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 175067 00:08:06.676 12:29:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.054 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:08.054 12:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:08.054 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:08.054 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:08.054 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:08.054 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:08.054 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:08.054 12:29:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:08.054 12:29:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:08.323 true 00:08:08.323 12:29:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 175067 00:08:08.323 12:29:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.263 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.263 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.263 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:09.263 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:09.522 true 00:08:09.522 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 175067 00:08:09.522 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.781 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.781 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:09.781 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:10.040 true 00:08:10.040 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 175067 00:08:10.040 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.418 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:11.418 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:11.418 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:08:11.418 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:11.418 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:11.418 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:11.418 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:11.418 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:11.418 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:11.677 true 00:08:11.677 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 175067 00:08:11.677 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.614 12:29:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.614 12:29:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:12.614 12:29:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:12.873 true 00:08:12.873 12:29:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 175067 00:08:12.873 12:29:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.132 12:29:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:13.132 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:13.132 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:13.391 true 00:08:13.391 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 175067 00:08:13.391 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.769 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.769 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:14.769 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.769 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.769 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.769 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:08:14.769 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.769 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:14.769 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:15.028 true 00:08:15.028 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 175067 00:08:15.028 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.966 Initializing NVMe Controllers 00:08:15.966 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:15.966 Controller IO queue size 128, less than required. 00:08:15.966 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:15.966 Controller IO queue size 128, less than required. 00:08:15.966 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:15.966 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:15.966 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:08:15.966 Initialization complete. Launching workers. 00:08:15.966 ======================================================== 00:08:15.966 Latency(us) 00:08:15.966 Device Information : IOPS MiB/s Average min max 00:08:15.966 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2337.30 1.14 38625.38 1979.21 1031560.66 00:08:15.966 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17691.23 8.64 7234.90 2139.55 443376.04 00:08:15.966 ======================================================== 00:08:15.966 Total : 20028.53 9.78 10898.12 1979.21 1031560.66 00:08:15.966 00:08:15.966 12:29:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:15.966 12:29:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:08:15.966 12:29:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:08:16.225 true 00:08:16.225 12:29:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 175067 00:08:16.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (175067) - No such process 00:08:16.225 12:29:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 175067 00:08:16.225 12:29:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.484 12:29:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:08:16.743 12:29:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:08:16.743 12:29:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:08:16.743 12:29:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:16.743 12:29:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:16.743 12:29:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:16.743 null0 00:08:16.743 12:29:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:16.743 12:29:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:16.743 12:29:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:17.001 null1 00:08:17.001 12:29:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:17.001 12:29:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:17.001 12:29:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:17.260 null2 00:08:17.260 12:29:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:17.260 12:29:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:17.260 12:29:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:17.520 null3 00:08:17.520 12:29:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:17.520 12:29:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:17.520 12:29:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:17.520 null4 00:08:17.520 12:29:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:17.520 12:29:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:17.520 12:29:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:17.779 null5 00:08:17.779 12:29:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:17.779 12:29:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:17.779 12:29:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_create null6 100 4096 00:08:18.038 null6 00:08:18.038 12:29:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:18.038 12:29:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:18.038 12:29:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:18.298 null7 00:08:18.298 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:18.298 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:18.298 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:18.298 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:18.298 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:18.298 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:18.298 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:18.298 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:18.298 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.298 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:18.298 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:18.298 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:18.298 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:18.298 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:18.298 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:18.298 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:18.298 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:18.298 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:18.298 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.298 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:18.298 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
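The earlier @44-@50 entries (null_size=1028/1029, bdev_null_resize, kill -0 175067, then "No such process" and wait) show the first phase of the test: while the I/O workload is still alive, the script hot-removes and re-adds a namespace and grows the NULL1 bdev one unit per pass. A minimal sketch of that loop, reconstructed from the trace; $rpc_py, $pid, and the null_size starting value are shorthand assumptions, not names copied from the script (the run above shows only 1028, 1029, and pid 175067):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    null_size=1024
    while kill -0 "$pid" 2>/dev/null; do   # keep perturbing while the workload runs
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        ((++null_size))                    # 1028, 1029, ... in the trace above
        $rpc_py bdev_null_resize NULL1 "$null_size"
    done
    wait "$pid"                            # reaped at script line 53 once kill -0 fails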
00:08:18.298 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:18.298 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:18.298 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:18.298 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:18.298 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:18.298 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.298 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:18.298 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:18.298 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:18.298 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:18.298 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:18.298 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:18.298 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:18.298 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.298 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:18.298 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:18.298 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:18.298 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:18.298 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:18.298 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:18.298 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:18.298 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.298 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
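Before the concurrent phase, the @58-@60 entries above create eight null bdevs (null0 through null7). A sketch of that setup loop, assuming $rpc_py abbreviates the full rpc.py path printed in the trace:

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        # null0..null7: 100 MiB null bdevs with a 4096-byte block size
        $rpc_py bdev_null_create "null$i" 100 4096
    done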
00:08:18.298 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:18.298 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:18.298 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:18.298 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:18.298 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:18.298 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:18.298 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.298 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:18.298 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:18.298 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:18.298 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:18.298 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:18.299 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:18.299 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:18.299 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.299 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
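Each of the eight workers runs the add_remove helper traced at script lines 14-18: attach a given bdev as a fixed namespace ID, detach it again, and repeat ten times. Reconstructed from the trace (the real helper may differ in detail):

    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            # hot-add the bdev as namespace $nsid, then hot-remove it
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }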
00:08:18.299 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:18.299 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:18.299 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:18.299 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:18.299 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 180603 180605 180606 180608 180610 180612 180614 180616 00:08:18.299 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:18.299 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:18.299 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.299 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:18.299 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:18.299 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.299 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:18.299 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:18.558 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:18.558 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:18.558 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:18.558 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:18.558 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.558 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.558 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:18.558 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.558 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.558 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:18.558 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.558 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.558 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.558 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:18.558 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.559 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:18.559 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.559 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.559 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:18.559 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.559 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.559 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:18.559 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.559 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.559 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:18.559 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.559 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.559 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:18.818 12:29:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:18.818 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:18.818 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.818 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:18.818 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:18.818 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:18.818 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:18.818 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:19.077 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.077 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.077 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:19.077 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.077 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.077 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:19.077 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.077 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.077 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:19.077 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.077 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.077 12:29:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:19.077 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.077 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.077 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:19.077 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.077 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.077 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:19.077 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.077 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.077 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:19.077 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.077 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.077 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:19.337 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:19.337 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:19.337 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:19.337 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:19.337 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:19.337 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:19.337 12:29:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:19.337 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:19.337 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.337 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.337 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:19.337 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.337 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.337 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:19.337 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.337 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.337 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:19.337 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.337 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.337 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:19.596 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.596 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.596 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:19.596 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.596 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.596 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:19.596 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.596 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
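The interleaved @62-@64 entries are the launcher starting one background add_remove worker per namespace/bdev pair, and the "wait 180603 180605 180606 180608 180610 180612 180614 180616" entry at @66 lists the workers' PIDs. Under the same naming assumptions as the sketches above, the launcher amounts to:

    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &   # worker i cycles namespace i+1 backed by null$i
        pids+=($!)
    done
    wait "${pids[@]}"                      # the "wait 180603 ... 180616" entry above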
00:08:19.596 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:19.596 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.596 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.596 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:19.596 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:19.596 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:19.596 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:19.596 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:19.596 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:19.596 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:19.596 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:19.596 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:19.856 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.856 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.856 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:19.856 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.856 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.856 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:19.856 12:29:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.856 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.856 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.856 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.856 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:19.856 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:19.856 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.856 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.856 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:19.856 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.856 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.856 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.856 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.856 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:19.856 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:19.856 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.856 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.856 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:20.116 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:20.116 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:20.116 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:20.116 12:29:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:20.116 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.116 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:20.116 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:20.116 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:20.377 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.377 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.377 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:20.377 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.377 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.377 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:20.377 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.377 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.377 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:20.377 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.377 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.377 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.377 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.377 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:20.377 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:20.377 12:29:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.377 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.377 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:20.377 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.377 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.377 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:20.377 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.377 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.377 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:20.377 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:20.377 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:20.377 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:20.377 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.377 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:20.637 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:20.637 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:20.637 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:20.637 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.637 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.637 12:29:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:20.637 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.637 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.637 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:20.637 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.637 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.637 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:20.637 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.637 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.637 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:20.637 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.637 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.637 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:20.637 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.637 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.637 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:20.637 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.637 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.638 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:20.638 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.638 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.638 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 
null5 00:08:20.897 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.897 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:20.897 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:20.897 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:20.897 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:20.897 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:20.897 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:20.897 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:21.157 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.157 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.157 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:21.157 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.157 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.157 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:21.157 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.157 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.157 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:21.157 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.157 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.157 12:29:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:21.157 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.157 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.157 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:21.157 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.157 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.157 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:21.157 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.157 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.157 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:21.157 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.157 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.157 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:21.417 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:21.417 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:21.417 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:21.417 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:21.417 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:21.417 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:21.417 12:29:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:21.417 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:21.417 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.417 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.417 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:21.417 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.417 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.417 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:21.417 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.417 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.417 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:21.417 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.417 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.417 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:21.417 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.417 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.417 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:21.417 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.417 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.417 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:21.417 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.417 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
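Because the eight workers run concurrently, their add/remove RPC lines interleave nondeterministically from here on. A quick way to sanity-check a captured run is to count the RPCs in a saved copy of this console output (run.log is a hypothetical file name; the test itself does not write one). Eight workers times ten iterations should give 80 hot-adds in this phase; removes should match, apart from the handful issued by the earlier resize phase:

    grep -c 'nvmf_subsystem_add_ns -n' run.log        # expect 8 * 10 = 80 concurrent-phase adds
    grep -c 'rpc.py nvmf_subsystem_remove_ns' run.log # roughly 80, plus the resize-phase removes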
00:08:21.417 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:21.417 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.417 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.417 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:21.676 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:21.676 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:21.676 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:21.676 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:21.676 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:21.676 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:21.676 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:21.676 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:21.936 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.936 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.936 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:21.936 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.936 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.936 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:21.936 12:29:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.936 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.936 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:21.936 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.936 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.936 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:21.936 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.936 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.936 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:21.936 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.936 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.936 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:21.936 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.936 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.936 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:21.936 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.936 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.936 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:22.195 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:22.195 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:22.195 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:22.195 12:29:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:22.195 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.195 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:22.195 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:22.195 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:22.454 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.454 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.454 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.454 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.454 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.454 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.454 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.454 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.454 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.454 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.454 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.454 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.454 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.454 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.454 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.454 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.454 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:22.454 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:22.454 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:22.454 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:08:22.454 
12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:22.454 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:08:22.454 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:22.454 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:22.454 rmmod nvme_tcp 00:08:22.454 rmmod nvme_fabrics 00:08:22.454 rmmod nvme_keyring 00:08:22.454 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:22.454 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:08:22.454 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:08:22.454 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@513 -- # '[' -n 174591 ']' 00:08:22.454 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # killprocess 174591 00:08:22.454 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 174591 ']' 00:08:22.454 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 174591 00:08:22.454 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:08:22.454 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:22.454 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 174591 00:08:22.454 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:22.454 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:22.454 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 174591' 00:08:22.454 killing process with pid 174591 00:08:22.454 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 174591 00:08:22.454 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 174591 00:08:22.713 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:22.713 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:22.713 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:22.713 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:08:22.713 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-save 00:08:22.713 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:22.713 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-restore 00:08:22.713 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:22.713 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:22.713 12:29:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.713 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:22.713 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:25.253 00:08:25.253 real 0m47.923s 00:08:25.253 user 3m16.124s 00:08:25.253 sys 0m14.673s 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:25.253 ************************************ 00:08:25.253 END TEST nvmf_ns_hotplug_stress 00:08:25.253 ************************************ 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:25.253 ************************************ 00:08:25.253 START TEST nvmf_delete_subsystem 00:08:25.253 ************************************ 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:25.253 * Looking for test storage... 
00:08:25.253 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:25.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.253 --rc genhtml_branch_coverage=1 00:08:25.253 --rc genhtml_function_coverage=1 00:08:25.253 --rc genhtml_legend=1 00:08:25.253 --rc geninfo_all_blocks=1 00:08:25.253 --rc geninfo_unexecuted_blocks=1 00:08:25.253 00:08:25.253 ' 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:25.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.253 --rc genhtml_branch_coverage=1 00:08:25.253 --rc genhtml_function_coverage=1 00:08:25.253 --rc genhtml_legend=1 00:08:25.253 --rc geninfo_all_blocks=1 00:08:25.253 --rc geninfo_unexecuted_blocks=1 00:08:25.253 00:08:25.253 ' 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:25.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.253 --rc genhtml_branch_coverage=1 00:08:25.253 --rc genhtml_function_coverage=1 00:08:25.253 --rc genhtml_legend=1 00:08:25.253 --rc geninfo_all_blocks=1 00:08:25.253 --rc geninfo_unexecuted_blocks=1 00:08:25.253 00:08:25.253 ' 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:25.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.253 --rc genhtml_branch_coverage=1 00:08:25.253 --rc genhtml_function_coverage=1 00:08:25.253 --rc genhtml_legend=1 00:08:25.253 --rc geninfo_all_blocks=1 00:08:25.253 --rc geninfo_unexecuted_blocks=1 00:08:25.253 00:08:25.253 ' 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.253 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.254 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.254 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:25.254 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.254 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:08:25.254 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:25.254 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:25.254 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:25.254 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:25.254 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:25.254 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:25.254 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:25.254 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:25.254 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:25.254 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:25.254 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:25.254 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:25.254 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:25.254 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:25.254 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:25.254 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:25.254 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:25.254 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:25.254 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.254 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:08:25.254 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:08:25.254 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:08:25.254 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:31.829 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:31.829 12:29:56 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:31.829 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:31.829 Found net devices under 0000:af:00.0: cvl_0_0 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:31.829 Found net devices under 0000:af:00.1: cvl_0_1 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 
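Both E810 ports have now been discovered (cvl_0_0 and cvl_0_1). The nvmf_tcp_init records that follow move the target port into its own network namespace so that initiator and target traffic really crosses the link between the two ports; condensed from the trace below (the iptables comment tag is omitted here), the sequence is:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                  # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator sanity check
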
00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # is_hw=yes 00:08:31.829 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:08:31.830 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:08:31.830 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:08:31.830 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:31.830 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:31.830 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:31.830 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:31.830 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:31.830 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:31.830 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:31.830 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:31.830 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:31.830 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:31.830 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:31.830 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:31.830 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:31.830 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:31.830 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:31.830 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:31.830 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:31.830 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:31.830 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:31.830 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:31.830 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:31.830 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:31.830 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:31.830 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:31.830 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.446 ms 00:08:31.830 00:08:31.830 --- 10.0.0.2 ping statistics --- 00:08:31.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.830 rtt min/avg/max/mdev = 0.446/0.446/0.446/0.000 ms 00:08:31.830 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:31.830 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:31.830 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:08:31.830 00:08:31.830 --- 10.0.0.1 ping statistics --- 00:08:31.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.830 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:08:31.830 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:31.830 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # return 0 00:08:31.830 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:31.830 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:31.830 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:31.830 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:31.830 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:31.830 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:31.830 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:31.830 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:31.830 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:31.830 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:31.830 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:31.830 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # nvmfpid=184975 00:08:31.830 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # waitforlisten 184975 00:08:31.830 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:31.830 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 184975 ']' 00:08:31.830 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.830 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:31.830 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:08:31.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.830 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:31.830 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:31.830 [2024-12-16 12:29:57.023053] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:31.830 [2024-12-16 12:29:57.023096] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.830 [2024-12-16 12:29:57.092966] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:31.830 [2024-12-16 12:29:57.131045] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:31.830 [2024-12-16 12:29:57.131087] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:31.830 [2024-12-16 12:29:57.131094] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:31.830 [2024-12-16 12:29:57.131100] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:31.830 [2024-12-16 12:29:57.131105] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:31.830 [2024-12-16 12:29:57.131238] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.830 [2024-12-16 12:29:57.131238] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.830 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:31.830 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:08:31.830 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:31.830 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:31.830 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:31.830 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:31.830 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:31.830 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.830 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:31.830 [2024-12-16 12:29:57.268916] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:31.830 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.830 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:31.830 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.830 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@10 -- # set +x 00:08:31.830 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.830 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:31.830 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.830 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:31.830 [2024-12-16 12:29:57.289148] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:31.830 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.830 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:31.830 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.830 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:31.830 NULL1 00:08:31.830 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.830 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:31.830 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.830 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:31.830 Delay0 00:08:31.830 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.830 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:31.830 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.830 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:31.830 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.830 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=185041 00:08:31.830 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:31.830 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:31.831 [2024-12-16 12:29:57.390075] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
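Setup for the delete test is complete at this point. Collected from the RPC trace above (binary paths shortened, flags exactly as logged), the test stages roughly the following; the delay bdev adds about one second of latency per I/O (bdev_delay_create's -r/-t/-w/-n arguments are in microseconds), so plenty of requests are still outstanding when the subsystem disappears:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512             # 1000 MiB backing, 512-byte blocks
    rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &       # perf_pid=185041 in this run
    sleep 2                                             # let the queue depth of 128 fill up
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # traced next, below

The error completions that follow are therefore the point of the test, not a failure of it.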
00:08:33.737 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:33.737 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.737 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:33.737 Read completed with error (sct=0, sc=8) 00:08:33.737 Read completed with error (sct=0, sc=8) 00:08:33.737 Read completed with error (sct=0, sc=8) 00:08:33.737 starting I/O failed: -6 00:08:33.737 Read completed with error (sct=0, sc=8) 00:08:33.737 Read completed with error (sct=0, sc=8) 00:08:33.737 Read completed with error (sct=0, sc=8) 00:08:33.737 Read completed with error (sct=0, sc=8) 00:08:33.737 starting I/O failed: -6 00:08:33.737 Read completed with error (sct=0, sc=8) 00:08:33.737 Read completed with error (sct=0, sc=8) 00:08:33.737 Read completed with error (sct=0, sc=8) 00:08:33.737 Read completed with error (sct=0, sc=8) 00:08:33.737 starting I/O failed: -6 00:08:33.737 Write completed with error (sct=0, sc=8) 00:08:33.737 Read completed with error (sct=0, sc=8) 00:08:33.737 Read completed with error (sct=0, sc=8) 00:08:33.737 Read completed with error (sct=0, sc=8) 00:08:33.737 starting I/O failed: -6 00:08:33.737 Read completed with error (sct=0, sc=8) 00:08:33.738 Read completed with error (sct=0, sc=8) 00:08:33.738 Read completed with error (sct=0, sc=8) 00:08:33.738 Read completed with error (sct=0, sc=8) 00:08:33.738 starting I/O failed: -6 00:08:33.738 Read completed with error (sct=0, sc=8) 00:08:33.738 Read completed with error (sct=0, sc=8) 00:08:33.738 Read completed with error (sct=0, sc=8) 00:08:33.738 Read completed with error (sct=0, sc=8) 00:08:33.738 starting I/O failed: -6 00:08:33.738 Read completed with error (sct=0, sc=8) 00:08:33.738 Write completed with error (sct=0, sc=8) 00:08:33.738 Write completed with error (sct=0, sc=8) 00:08:33.738 Write completed with error (sct=0, sc=8) 00:08:33.738 starting I/O failed: -6 00:08:33.738 Read completed with error (sct=0, sc=8) 00:08:33.738 Write completed with error (sct=0, sc=8) 00:08:33.738 Read completed with error (sct=0, sc=8) 00:08:33.738 Read completed with error (sct=0, sc=8) 00:08:33.738 starting I/O failed: -6 00:08:33.738 Write completed with error (sct=0, sc=8) 00:08:33.738 Read completed with error (sct=0, sc=8) 00:08:33.738 Write completed with error (sct=0, sc=8) 00:08:33.738 Write completed with error (sct=0, sc=8) 00:08:33.738 starting I/O failed: -6 00:08:33.738 Read completed with error (sct=0, sc=8) 00:08:33.738 Read completed with error (sct=0, sc=8) 00:08:33.738 [2024-12-16 12:29:59.520070] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d0a70 is same with the state(6) to be set 00:08:33.738 Write completed with error (sct=0, sc=8) 00:08:33.738 Read completed with error (sct=0, sc=8) 00:08:33.738 Read completed with error (sct=0, sc=8) 00:08:33.738 Write completed with error (sct=0, sc=8) 00:08:33.738 Read completed with error (sct=0, sc=8) 00:08:33.738 Write completed with error (sct=0, sc=8) 00:08:33.738 Read completed with error (sct=0, sc=8) 00:08:33.738 Read completed with error (sct=0, sc=8) 00:08:33.738 Write completed with error (sct=0, sc=8) 00:08:33.738 Read completed with error (sct=0, sc=8) 00:08:33.738 Read completed with error (sct=0, sc=8) 00:08:33.738 Read completed with error (sct=0, sc=8) 00:08:33.738 Read completed with error 
(sct=0, sc=8) 00:08:33.738 [a long run of identical Read/Write completed with error (sct=0, sc=8) completions elided] 00:08:33.738 [2024-12-16 12:29:59.520573] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d0c50 is same with the state(6) to be set 00:08:33.738 [further identical error completions, interleaved with repeated 'starting I/O failed: -6' markers, elided] 00:08:34.675 [2024-12-16 12:30:00.482928] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23cfb20 is same with the state(6) to be set 00:08:34.675 [identical error completions elided] 00:08:34.675 [2024-12-16 12:30:00.523762] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d1ed0 is same with the state(6) to be set 00:08:34.675 [identical error completions elided] 00:08:34.675 [2024-12-16 12:30:00.524149] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8ff800cfe0 is same with the state(6) to be set 00:08:34.675 [identical error completions elided]
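For context: sct=0/sc=8 decodes as the NVMe generic status "Command Aborted due to SQ Deletion", which is what every queued command reports once the target drops the queue pair while the subsystem is being deleted, and the -6 in "starting I/O failed: -6" is -ENXIO from the submission path as the qpair goes away. A minimal sketch of the scenario delete_subsystem.sh drives here, assuming this workspace's build/bin/spdk_nvme_perf and the in-tree scripts/rpc.py (the perf flags are copied from the follow-up run traced below; the real script paces this with sleeps and pid checks):

# Sketch only: start I/O against the TCP subsystem, then delete it underneath.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

"$SPDK/build/bin/spdk_nvme_perf" -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!

# Deleting the subsystem aborts everything still queued; the initiator
# then logs one "completed with error (sct=0, sc=8)" per command.
"$SPDK/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

wait "$perf_pid" || true   # perf exits non-zero: "errors occurred"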
00:08:34.675 [further identical error completions elided] [2024-12-16 12:30:00.524284] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8ff800d780 is same with the state(6) to be set 00:08:34.675 [identical error completions elided] 00:08:34.676 [2024-12-16 12:30:00.525237] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8ff8000c00 is same with the state(6) to be set 00:08:34.676 Initializing NVMe Controllers 00:08:34.676 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:34.676 Controller IO queue size 128, less than required. 00:08:34.676 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:34.676 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:34.676 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:34.676 Initialization complete. Launching workers.
00:08:34.676 ========================================================
00:08:34.676                                                            Latency(us)
00:08:34.676 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:08:34.676 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     150.09       0.07  898895.97     288.66 1009495.69
00:08:34.676 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     159.53       0.08 1092269.47     330.77 2001911.59
00:08:34.676 ========================================================
00:08:34.676 Total                                                                  :     309.62       0.15  998531.43     288.66 2001911.59
00:08:34.676
00:08:34.676 [2024-12-16 12:30:00.525839] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23cfb20 (9): Bad file descriptor
00:08:34.676 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:08:34.676 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.676 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:34.676 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 185041 00:08:34.676 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:35.244 12:30:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:35.244 12:30:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 185041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (185041) - No such process 00:08:35.244 [NOT-wait xtrace elided: valid_exec_arg confirms wait is a shell builtin; wait 185041 fails with es=1, which the NOT wrapper accepts as the expected outcome] 00:08:35.244 12:30:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:35.244 12:30:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.244 12:30:01
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:35.244 12:30:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.244 12:30:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:35.244 12:30:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.244 12:30:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:35.244 [2024-12-16 12:30:01.052510] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:35.244 12:30:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.244 12:30:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:35.244 12:30:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.244 12:30:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:35.244 12:30:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.244 12:30:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=185750 00:08:35.244 12:30:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:35.244 12:30:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:35.245 12:30:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 185750 00:08:35.245 12:30:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:35.245 [2024-12-16 12:30:01.126650] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
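The records that follow are delete_subsystem.sh bounding its wait for that perf process: kill -0 probes whether the pid still exists without delivering a signal, and the delay counter caps the wait. A standalone sketch of that loop (function name and bound are illustrative, mirroring the script's trace):

# Sketch: bounded wait for a background pid to exit, polling twice a second.
wait_for_exit() {
    local pid=$1 delay=0
    while kill -0 "$pid" 2>/dev/null; do
        (( delay++ > 20 )) && return 1   # give up after ~10 seconds
        sleep 0.5
    done
    return 0
}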
00:08:35.811 12:30:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:35.811 12:30:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 185750 00:08:35.811 12:30:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 [five further identical poll iterations ((( delay++ > 20 )); kill -0 185750; sleep 0.5) between 00:08:36.070 and 00:08:38.034 elided] 00:08:38.293 Initializing NVMe Controllers 00:08:38.293 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:38.293 Controller IO queue size 128, less than required. 00:08:38.293 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:38.293 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:38.293 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:38.293 Initialization complete. Launching workers.
00:08:38.293 ========================================================
00:08:38.293                                                            Latency(us)
00:08:38.293 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:08:38.293 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     128.00       0.06 1002101.64 1000168.21 1006466.36
00:08:38.293 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     128.00       0.06 1003954.44 1000130.27 1041026.84
00:08:38.293 ========================================================
00:08:38.293 Total                                                                  :     256.00       0.12 1003028.04 1000130.27 1041026.84
00:08:38.293
00:08:38.551 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:38.551 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 185750 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (185750) - No such process 00:08:38.551 [teardown xtrace elided: wait 185750; trap - SIGINT SIGTERM EXIT; nvmftestfini then nvmfcleanup (sync; modprobe -v -r nvme-tcp with rmmod nvme_tcp, rmmod nvme_fabrics, rmmod nvme_keyring; modprobe -v -r nvme-fabrics); return 0] 00:08:38.811 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@513 -- # '[' -n 184975 ']' 00:08:38.811 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # killprocess 184975 00:08:38.811 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 184975 ']' 00:08:38.811 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 184975 00:08:38.811 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:08:38.811 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:38.811 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 184975 00:08:38.811 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:38.811 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo
']' 00:08:38.811 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 184975' 00:08:38.811 killing process with pid 184975 00:08:38.811 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 184975 00:08:38.811 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 184975 00:08:39.070 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:39.070 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:39.070 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:39.070 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:08:39.070 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-save 00:08:39.070 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:39.070 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-restore 00:08:39.070 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:39.070 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:39.070 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.070 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:39.070 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:40.977 12:30:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:40.977 00:08:40.977 real 0m16.217s 00:08:40.977 user 0m29.392s 00:08:40.977 sys 0m5.416s 00:08:40.977 12:30:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:40.977 12:30:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:40.977 ************************************ 00:08:40.977 END TEST nvmf_delete_subsystem 00:08:40.977 ************************************ 00:08:40.977 12:30:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:40.977 12:30:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:40.977 12:30:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:40.977 12:30:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:41.237 ************************************ 00:08:41.237 START TEST nvmf_host_management 00:08:41.237 ************************************ 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:41.237 * Looking for test storage... 
00:08:41.237 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:41.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.237 --rc genhtml_branch_coverage=1 00:08:41.237 --rc genhtml_function_coverage=1 00:08:41.237 --rc genhtml_legend=1 00:08:41.237 --rc geninfo_all_blocks=1 00:08:41.237 --rc geninfo_unexecuted_blocks=1 00:08:41.237 00:08:41.237 ' 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:41.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.237 --rc genhtml_branch_coverage=1 00:08:41.237 --rc genhtml_function_coverage=1 00:08:41.237 --rc genhtml_legend=1 00:08:41.237 --rc geninfo_all_blocks=1 00:08:41.237 --rc geninfo_unexecuted_blocks=1 00:08:41.237 00:08:41.237 ' 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:41.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.237 --rc genhtml_branch_coverage=1 00:08:41.237 --rc genhtml_function_coverage=1 00:08:41.237 --rc genhtml_legend=1 00:08:41.237 --rc geninfo_all_blocks=1 00:08:41.237 --rc geninfo_unexecuted_blocks=1 00:08:41.237 00:08:41.237 ' 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:41.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.237 --rc genhtml_branch_coverage=1 00:08:41.237 --rc genhtml_function_coverage=1 00:08:41.237 --rc genhtml_legend=1 00:08:41.237 --rc geninfo_all_blocks=1 00:08:41.237 --rc geninfo_unexecuted_blocks=1 00:08:41.237 00:08:41.237 ' 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:41.237 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:41.238 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same golangci/protoc/go triplet repeated, elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.238 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[same repeated prefix, elided]:/var/lib/snapd/snap/bin 00:08:41.238 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[same repeated prefix, elided]:/var/lib/snapd/snap/bin 00:08:41.238 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:41.238 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[same repeated prefix, elided]:/var/lib/snapd/snap/bin 00:08:41.238 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:41.238 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:41.238 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:41.238 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:41.238 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:41.238 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:41.238 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33
-- # '[' '' -eq 1 ']' 00:08:41.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:41.238 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:41.238 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:41.238 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:41.238 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:41.238 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:41.238 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:41.238 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:41.238 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:41.238 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:41.238 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:41.238 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:41.238 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.238 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:41.238 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.238 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:08:41.238 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:08:41.238 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:41.238 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:47.823 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:47.823 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:47.823 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:47.823 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:47.823 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:47.823 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:47.823 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:47.823 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:47.823 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:47.823 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:47.823 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:08:47.823 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:47.823 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:08:47.823 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:47.823 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:47.823 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:47.823 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:47.823 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:47.823 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:47.823 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:47.823 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:47.823 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:47.823 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:47.823 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:47.823 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:47.823 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:47.823 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:08:47.823 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:08:47.823 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:08:47.823 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:08:47.823 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:08:47.823 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:08:47.823 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:47.823 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:47.823 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:47.823 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:47.823 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:47.823 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:47.823 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:47.823 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 
-- # [[ tcp == rdma ]] 00:08:47.823 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:47.823 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:47.823 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:47.823 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:47.823 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:47.823 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:47.824 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:47.824 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:47.824 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:08:47.824 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:08:47.824 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:08:47.824 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:47.824 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:47.824 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:47.824 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:47.824 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:47.824 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:47.824 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:47.824 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:47.824 Found net devices under 0000:af:00.0: cvl_0_0 00:08:47.824 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:47.824 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:47.824 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:47.824 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:47.824 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:47.824 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:47.824 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:47.824 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:47.824 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:47.824 Found net devices under 0000:af:00.1: 
cvl_0_1 00:08:47.824 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:47.824 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:08:47.824 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # is_hw=yes 00:08:47.824 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:08:47.824 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:08:47.824 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:08:47.824 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:47.824 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:47.824 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:47.824 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:47.824 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:47.824 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:47.824 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:47.824 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:47.824 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:47.824 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:47.824 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:47.824 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:47.824 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:47.824 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:47.824 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:47.824 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:47.824 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:47.824 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:47.824 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:47.824 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:47.824 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:47.824 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:47.824 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:47.824 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:47.824 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.491 ms 00:08:47.824 00:08:47.824 --- 10.0.0.2 ping statistics --- 00:08:47.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:47.824 rtt min/avg/max/mdev = 0.491/0.491/0.491/0.000 ms 00:08:47.824 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:47.824 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:47.824 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:08:47.824 00:08:47.824 --- 10.0.0.1 ping statistics --- 00:08:47.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:47.824 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:08:47.824 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:47.824 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # return 0 00:08:47.824 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:47.824 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:47.824 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:47.824 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:47.824 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:47.824 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:47.824 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:47.824 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:47.824 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:47.824 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:47.824 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:47.824 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:47.824 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:47.824 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # nvmfpid=190380 00:08:47.824 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # waitforlisten 190380 00:08:47.824 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:47.824 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 190380 ']' 00:08:47.824 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:08:47.824 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:47.824 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.824 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:47.824 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:47.824 [2024-12-16 12:30:13.291230] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:47.824 [2024-12-16 12:30:13.291279] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:47.824 [2024-12-16 12:30:13.364868] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:47.824 [2024-12-16 12:30:13.406118] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:47.824 [2024-12-16 12:30:13.406154] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:47.824 [2024-12-16 12:30:13.406161] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:47.824 [2024-12-16 12:30:13.406167] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:47.824 [2024-12-16 12:30:13.406172] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
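A small aside: the -m 0x1E mask passed to nvmf_tgt above is binary 11110, i.e. cores 1 through 4, which is exactly the set the reactor lines below report starting. A one-liner sketch for decoding such a mask (mask value taken from the trace):

# Decode an SPDK core mask into the cores it selects (prints cores 1..4 for 0x1E).
mask=0x1E
for (( core = 0; core < 64; core++ )); do
    (( (mask >> core) & 1 )) && echo "core $core"
done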
00:08:47.824 [2024-12-16 12:30:13.406296] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:47.824 [2024-12-16 12:30:13.406400] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:08:47.824 [2024-12-16 12:30:13.406505] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:47.825 [2024-12-16 12:30:13.406507] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:08:47.825 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:47.825 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:47.825 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:47.825 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:47.825 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:47.825 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:47.825 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:47.825 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.825 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:47.825 [2024-12-16 12:30:13.549073] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:47.825 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.825 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:47.825 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:47.825 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:47.825 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:47.825 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:47.825 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:47.825 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.825 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:47.825 Malloc0 00:08:47.825 [2024-12-16 12:30:13.608381] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:47.825 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.825 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:47.825 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:47.825 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:47.825 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=190432 00:08:47.825 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 190432 /var/tmp/bdevperf.sock 00:08:47.825 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 190432 ']' 00:08:47.825 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:47.825 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:47.825 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:47.825 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:47.825 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:47.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:47.825 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:08:47.825 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:47.825 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:08:47.825 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:47.825 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:08:47.825 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:08:47.825 { 00:08:47.825 "params": { 00:08:47.825 "name": "Nvme$subsystem", 00:08:47.825 "trtype": "$TEST_TRANSPORT", 00:08:47.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:47.825 "adrfam": "ipv4", 00:08:47.825 "trsvcid": "$NVMF_PORT", 00:08:47.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:47.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:47.825 "hdgst": ${hdgst:-false}, 00:08:47.825 "ddgst": ${ddgst:-false} 00:08:47.825 }, 00:08:47.825 "method": "bdev_nvme_attach_controller" 00:08:47.825 } 00:08:47.825 EOF 00:08:47.825 )") 00:08:47.825 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:08:47.825 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:08:47.825 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:08:47.825 12:30:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:08:47.825 "params": { 00:08:47.825 "name": "Nvme0", 00:08:47.825 "trtype": "tcp", 00:08:47.825 "traddr": "10.0.0.2", 00:08:47.825 "adrfam": "ipv4", 00:08:47.825 "trsvcid": "4420", 00:08:47.825 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:47.825 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:47.825 "hdgst": false, 00:08:47.825 "ddgst": false 00:08:47.825 }, 00:08:47.825 "method": "bdev_nvme_attach_controller" 00:08:47.825 }' 00:08:47.825 [2024-12-16 12:30:13.704691] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:47.825 [2024-12-16 12:30:13.704733] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid190432 ] 00:08:47.825 [2024-12-16 12:30:13.772175] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.825 [2024-12-16 12:30:13.811395] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.085 Running I/O for 10 seconds... 00:08:48.085 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:48.085 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:48.085 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:48.085 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.085 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:48.085 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.085 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:48.085 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:48.085 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:48.085 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:48.085 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:48.085 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:48.085 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:48.085 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:48.085 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:48.085 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:48.085 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.085 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:48.085 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.085 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=99 00:08:48.085 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 99 -ge 100 ']' 00:08:48.085 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:08:48.346 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:08:48.346 
12:30:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:48.346 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:48.346 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:48.346 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.346 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:48.346 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.346 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:08:48.346 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:08:48.346 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:48.346 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:48.346 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:48.346 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:48.346 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.346 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:48.346 [2024-12-16 12:30:14.407169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.346 [2024-12-16 12:30:14.407206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.346 [2024-12-16 12:30:14.407226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.346 [2024-12-16 12:30:14.407234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.346 [2024-12-16 12:30:14.407242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.346 [2024-12-16 12:30:14.407249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.346 [2024-12-16 12:30:14.407258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.346 [2024-12-16 12:30:14.407264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.346 [2024-12-16 12:30:14.407272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.346 [2024-12-16 12:30:14.407278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:08:48.346 [2024-12-16 12:30:14.407287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.346 [2024-12-16 12:30:14.407293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.346 [2024-12-16 12:30:14.407301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.346 [2024-12-16 12:30:14.407308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.346 [2024-12-16 12:30:14.407316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.346 [2024-12-16 12:30:14.407322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.346 [2024-12-16 12:30:14.407330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.346 [2024-12-16 12:30:14.407336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.346 [2024-12-16 12:30:14.407344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.346 [2024-12-16 12:30:14.407351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.346 [2024-12-16 12:30:14.407358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.346 [2024-12-16 12:30:14.407365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.346 [2024-12-16 12:30:14.407373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.346 [2024-12-16 12:30:14.407379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.346 [2024-12-16 12:30:14.407387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.346 [2024-12-16 12:30:14.407394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.346 [2024-12-16 12:30:14.407401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.346 [2024-12-16 12:30:14.407409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.346 [2024-12-16 12:30:14.407417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.346 [2024-12-16 12:30:14.407426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:08:48.346 [2024-12-16 12:30:14.407435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.346 [2024-12-16 12:30:14.407441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.346 [2024-12-16 12:30:14.407455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.346 [2024-12-16 12:30:14.407461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.346 [2024-12-16 12:30:14.407470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.346 [2024-12-16 12:30:14.407476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.346 [2024-12-16 12:30:14.407485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.346 [2024-12-16 12:30:14.407491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.346 [2024-12-16 12:30:14.407499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.346 [2024-12-16 12:30:14.407505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.346 [2024-12-16 12:30:14.407513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.346 [2024-12-16 12:30:14.407520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.346 [2024-12-16 12:30:14.407528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.346 [2024-12-16 12:30:14.407534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.346 [2024-12-16 12:30:14.407542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.346 [2024-12-16 12:30:14.407548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.346 [2024-12-16 12:30:14.407556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.346 [2024-12-16 12:30:14.407562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.346 [2024-12-16 12:30:14.407570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.346 [2024-12-16 12:30:14.407576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.346 
[2024-12-16 12:30:14.407584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.346 [2024-12-16 12:30:14.407591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.346 [2024-12-16 12:30:14.407600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.346 [2024-12-16 12:30:14.407607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.346 [2024-12-16 12:30:14.407615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.346 [2024-12-16 12:30:14.407621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.346 [2024-12-16 12:30:14.407629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.346 [2024-12-16 12:30:14.407635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.346 [2024-12-16 12:30:14.407643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.346 [2024-12-16 12:30:14.407649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.346 [2024-12-16 12:30:14.407657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.346 [2024-12-16 12:30:14.407665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.346 [2024-12-16 12:30:14.407673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.346 [2024-12-16 12:30:14.407679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.346 [2024-12-16 12:30:14.407689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.346 [2024-12-16 12:30:14.407695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.347 [2024-12-16 12:30:14.407703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.347 [2024-12-16 12:30:14.407710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.347 [2024-12-16 12:30:14.407717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.347 [2024-12-16 12:30:14.407724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.347 [2024-12-16 
12:30:14.407732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.347 [2024-12-16 12:30:14.407739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.347 [2024-12-16 12:30:14.407747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.347 [2024-12-16 12:30:14.407753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.347 [2024-12-16 12:30:14.407761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.347 [2024-12-16 12:30:14.407768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.347 [2024-12-16 12:30:14.407775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.347 [2024-12-16 12:30:14.407783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.347 [2024-12-16 12:30:14.407791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.347 [2024-12-16 12:30:14.407798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.347 [2024-12-16 12:30:14.407805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.347 [2024-12-16 12:30:14.407812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.347 [2024-12-16 12:30:14.407820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.347 [2024-12-16 12:30:14.407826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.347 [2024-12-16 12:30:14.407834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.347 [2024-12-16 12:30:14.407841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.347 [2024-12-16 12:30:14.407849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.347 [2024-12-16 12:30:14.407855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.347 [2024-12-16 12:30:14.407863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.347 [2024-12-16 12:30:14.407869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.347 [2024-12-16 
12:30:14.407877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.347 [2024-12-16 12:30:14.407883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.347 [2024-12-16 12:30:14.407891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.347 [2024-12-16 12:30:14.407898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.347 [2024-12-16 12:30:14.407906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.347 [2024-12-16 12:30:14.407912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.347 [2024-12-16 12:30:14.407922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.347 [2024-12-16 12:30:14.407929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.347 [2024-12-16 12:30:14.407936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.347 [2024-12-16 12:30:14.407943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.347 [2024-12-16 12:30:14.407951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.347 [2024-12-16 12:30:14.407957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.347 [2024-12-16 12:30:14.407966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.347 [2024-12-16 12:30:14.407974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.347 [2024-12-16 12:30:14.407982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.347 [2024-12-16 12:30:14.407988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.347 [2024-12-16 12:30:14.407996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.347 [2024-12-16 12:30:14.408002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.347 [2024-12-16 12:30:14.408010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.347 [2024-12-16 12:30:14.408016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.347 [2024-12-16 
12:30:14.408024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.347 [2024-12-16 12:30:14.408031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.347 [2024-12-16 12:30:14.408038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.347 [2024-12-16 12:30:14.408045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.347 [2024-12-16 12:30:14.408053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.347 [2024-12-16 12:30:14.408059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.347 [2024-12-16 12:30:14.408067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.347 [2024-12-16 12:30:14.408073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.347 [2024-12-16 12:30:14.408081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.347 [2024-12-16 12:30:14.408088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.347 [2024-12-16 12:30:14.408096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.347 [2024-12-16 12:30:14.408102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.347 [2024-12-16 12:30:14.408110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.347 [2024-12-16 12:30:14.408122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.347 [2024-12-16 12:30:14.408130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.347 [2024-12-16 12:30:14.408137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.347 [2024-12-16 12:30:14.408145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.347 [2024-12-16 12:30:14.408153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.347 [2024-12-16 12:30:14.408218] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1709070 was disconnected and freed. reset controller. 
00:08:48.347 [2024-12-16 12:30:14.409119] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:08:48.607 task offset: 104320 on job bdev=Nvme0n1 fails 00:08:48.607 00:08:48.607 Latency(us) 00:08:48.607 [2024-12-16T11:30:14.674Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:48.607 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:48.607 Job: Nvme0n1 ended in about 0.40 seconds with error 00:08:48.607 Verification LBA range: start 0x0 length 0x400 00:08:48.607 Nvme0n1 : 0.40 1908.10 119.26 159.01 0.00 30140.44 1513.57 26838.55 00:08:48.607 [2024-12-16T11:30:14.674Z] =================================================================================================================== 00:08:48.607 [2024-12-16T11:30:14.674Z] Total : 1908.10 119.26 159.01 0.00 30140.44 1513.57 26838.55 00:08:48.607 [2024-12-16 12:30:14.411474] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:48.607 [2024-12-16 12:30:14.411494] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14efe90 (9): Bad file descriptor 00:08:48.607 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.607 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:48.607 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.607 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:48.607 [2024-12-16 12:30:14.422771] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:48.607 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.607 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:49.542 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 190432 00:08:49.542 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (190432) - No such process 00:08:49.542 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:49.542 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:49.542 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:49.542 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:49.542 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:08:49.542 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:08:49.542 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:08:49.542 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:08:49.542 { 00:08:49.542 "params": { 00:08:49.542 "name": "Nvme$subsystem", 00:08:49.542 "trtype": "$TEST_TRANSPORT", 00:08:49.542 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:49.542 "adrfam": "ipv4", 00:08:49.542 "trsvcid": "$NVMF_PORT", 00:08:49.542 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:49.542 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:49.542 "hdgst": ${hdgst:-false}, 00:08:49.542 "ddgst": ${ddgst:-false} 00:08:49.542 }, 00:08:49.542 "method": "bdev_nvme_attach_controller" 00:08:49.542 } 00:08:49.542 EOF 00:08:49.542 )") 00:08:49.542 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:08:49.542 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:08:49.542 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:08:49.542 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:08:49.542 "params": { 00:08:49.542 "name": "Nvme0", 00:08:49.542 "trtype": "tcp", 00:08:49.542 "traddr": "10.0.0.2", 00:08:49.542 "adrfam": "ipv4", 00:08:49.542 "trsvcid": "4420", 00:08:49.542 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:49.542 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:49.542 "hdgst": false, 00:08:49.542 "ddgst": false 00:08:49.542 }, 00:08:49.542 "method": "bdev_nvme_attach_controller" 00:08:49.542 }' 00:08:49.542 [2024-12-16 12:30:15.480289] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:49.542 [2024-12-16 12:30:15.480336] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid190847 ] 00:08:49.542 [2024-12-16 12:30:15.550641] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.542 [2024-12-16 12:30:15.587926] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.800 Running I/O for 1 seconds... 00:08:50.737 2019.00 IOPS, 126.19 MiB/s 00:08:50.737 Latency(us) 00:08:50.737 [2024-12-16T11:30:16.804Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:50.737 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:50.737 Verification LBA range: start 0x0 length 0x400 00:08:50.737 Nvme0n1 : 1.01 2061.45 128.84 0.00 0.00 30455.27 2559.02 26588.89 00:08:50.737 [2024-12-16T11:30:16.804Z] =================================================================================================================== 00:08:50.737 [2024-12-16T11:30:16.804Z] Total : 2061.45 128.84 0.00 0.00 30455.27 2559.02 26588.89 00:08:50.996 12:30:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:50.996 12:30:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:50.996 12:30:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:50.996 12:30:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:50.996 12:30:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:50.996 12:30:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:50.996 12:30:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:50.996 12:30:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:50.996 12:30:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:50.996 12:30:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:50.996 12:30:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:50.996 rmmod nvme_tcp 00:08:50.996 rmmod nvme_fabrics 00:08:50.996 rmmod nvme_keyring 00:08:50.996 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:50.996 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:50.996 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:50.996 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@513 -- # '[' -n 190380 ']' 00:08:50.996 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # killprocess 190380 00:08:50.996 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 190380 ']' 00:08:50.996 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 190380 00:08:50.996 12:30:17 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:08:50.996 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:50.996 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 190380 00:08:51.255 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:51.255 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:51.255 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 190380' 00:08:51.255 killing process with pid 190380 00:08:51.255 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 190380 00:08:51.255 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 190380 00:08:51.255 [2024-12-16 12:30:17.279927] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:51.255 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:51.255 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:51.255 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:51.255 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:51.255 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:51.255 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-save 00:08:51.255 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-restore 00:08:51.255 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:51.255 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:51.255 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.255 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:51.255 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.797 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:53.797 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:53.797 00:08:53.797 real 0m12.313s 00:08:53.797 user 0m19.370s 00:08:53.797 sys 0m5.548s 00:08:53.797 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:53.797 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:53.797 ************************************ 00:08:53.797 END TEST nvmf_host_management 00:08:53.797 ************************************ 00:08:53.797 12:30:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 
00:08:53.797 12:30:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:53.797 12:30:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:53.797 12:30:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:53.797 ************************************ 00:08:53.797 START TEST nvmf_lvol 00:08:53.797 ************************************ 00:08:53.797 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:53.797 * Looking for test storage... 00:08:53.797 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:53.797 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:53.797 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:08:53.797 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:53.797 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:53.797 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:53.797 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:53.797 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:53.797 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:53.797 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:53.797 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:53.797 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:53.797 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:53.797 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:53.797 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:53.797 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:53.797 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:53.797 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:53.797 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:53.797 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:53.797 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:53.797 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:53.797 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:53.797 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:53.797 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:53.797 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:53.797 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:53.797 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:53.797 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:53.797 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:53.797 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:53.797 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:53.797 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:53.797 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:53.797 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:53.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.797 --rc genhtml_branch_coverage=1 00:08:53.797 --rc genhtml_function_coverage=1 00:08:53.797 --rc genhtml_legend=1 00:08:53.797 --rc geninfo_all_blocks=1 00:08:53.797 --rc geninfo_unexecuted_blocks=1 00:08:53.797 00:08:53.797 ' 00:08:53.797 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:53.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.797 --rc genhtml_branch_coverage=1 00:08:53.797 --rc genhtml_function_coverage=1 00:08:53.797 --rc genhtml_legend=1 00:08:53.797 --rc geninfo_all_blocks=1 00:08:53.797 --rc geninfo_unexecuted_blocks=1 00:08:53.797 00:08:53.797 ' 00:08:53.797 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:53.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.797 --rc genhtml_branch_coverage=1 00:08:53.797 --rc genhtml_function_coverage=1 00:08:53.797 --rc genhtml_legend=1 00:08:53.797 --rc geninfo_all_blocks=1 00:08:53.797 --rc geninfo_unexecuted_blocks=1 00:08:53.797 00:08:53.797 ' 00:08:53.797 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:53.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.797 --rc genhtml_branch_coverage=1 00:08:53.797 --rc genhtml_function_coverage=1 00:08:53.797 --rc genhtml_legend=1 00:08:53.797 --rc geninfo_all_blocks=1 00:08:53.797 --rc geninfo_unexecuted_blocks=1 00:08:53.797 00:08:53.797 ' 00:08:53.797 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:53.797 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:53.797 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:08:53.798 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:53.798 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:53.798 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:53.798 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:53.798 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:53.798 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:53.798 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:53.798 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:53.798 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:53.798 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:08:53.798 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:08:53.798 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:53.798 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:53.798 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:53.798 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:53.798 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:53.798 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:53.798 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:53.798 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:53.798 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:53.798 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.798 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.798 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.798 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:53.798 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.798 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:53.798 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:53.798 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:53.798 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:53.798 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:53.798 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:53.798 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:53.798 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:53.798 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:53.798 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:53.798 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:53.798 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:53.798 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:53.798 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:08:53.798 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:53.798 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:53.798 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:53.798 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:53.798 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:53.798 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:53.798 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:53.798 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:53.798 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.798 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:53.798 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.798 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:08:53.798 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:08:53.798 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:53.798 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:00.373 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:00.374 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:00.374 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:00.374 Found net devices under 0000:af:00.0: cvl_0_0 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:00.374 Found net devices under 0000:af:00.1: cvl_0_1 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # is_hw=yes 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:00.374 
12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:00.374 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:00.374 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.417 ms 00:09:00.374 00:09:00.374 --- 10.0.0.2 ping statistics --- 00:09:00.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.374 rtt min/avg/max/mdev = 0.417/0.417/0.417/0.000 ms 00:09:00.374 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:00.375 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:00.375 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:09:00.375 00:09:00.375 --- 10.0.0.1 ping statistics --- 00:09:00.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.375 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:09:00.375 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:00.375 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # return 0 00:09:00.375 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:00.375 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:00.375 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:00.375 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:00.375 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:00.375 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:00.375 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:00.375 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:00.375 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:00.375 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:00.375 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:00.375 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # nvmfpid=194631 00:09:00.375 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # waitforlisten 194631 00:09:00.375 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 194631 ']' 00:09:00.375 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.375 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:00.375 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:00.375 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:00.375 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:00.375 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:00.375 [2024-12-16 12:30:25.762130] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
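What the trace above shows: nvmftestinit detects the two ice ports (0000:af:00.0 and 0000:af:00.1, exposed as cvl_0_0 and cvl_0_1) and builds a point-to-point NVMe/TCP topology out of them. The target-side port cvl_0_0 is moved into a dedicated network namespace (cvl_0_0_ns_spdk) and addressed as 10.0.0.2/24, the initiator-side port cvl_0_1 stays in the root namespace as 10.0.0.1/24, an iptables rule admits TCP port 4420 on the initiator interface, and one ping in each direction proves the link before the target app comes up. Condensed to the bare commands the trace executes (the interface names are what this rig reports; they will differ on other machines):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # tagged SPDK_NVMF for later cleanup
  ping -c 1 10.0.0.2                                             # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator

Every target-side command from here on is wrapped via NVMF_TARGET_NS_CMD, which is why nvmf_tgt itself is launched as 'ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x7' in the trace above.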
00:09:00.375 [2024-12-16 12:30:25.762194] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:00.375 [2024-12-16 12:30:25.836218] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:00.375 [2024-12-16 12:30:25.877007] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:00.375 [2024-12-16 12:30:25.877043] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:00.375 [2024-12-16 12:30:25.877050] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:00.375 [2024-12-16 12:30:25.877056] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:00.375 [2024-12-16 12:30:25.877061] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:00.375 [2024-12-16 12:30:25.877127] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:00.375 [2024-12-16 12:30:25.877204] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.375 [2024-12-16 12:30:25.877205] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:00.375 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:00.375 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:09:00.375 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:00.375 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:00.375 12:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:00.375 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:00.375 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:00.375 [2024-12-16 12:30:26.176251] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:00.375 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:00.635 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:00.635 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:00.635 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:00.635 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:00.894 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:01.154 12:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=2506e67f-008b-462c-8512-155e86e0f7b1 00:09:01.154 12:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2506e67f-008b-462c-8512-155e86e0f7b1 lvol 20 00:09:01.413 12:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=52ef4ce3-fda0-4d5d-8216-20df0b264b7b 00:09:01.413 12:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:01.413 12:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 52ef4ce3-fda0-4d5d-8216-20df0b264b7b 00:09:01.672 12:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:01.932 [2024-12-16 12:30:27.847507] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:01.932 12:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:02.191 12:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=195110 00:09:02.191 12:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:02.191 12:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:03.129 12:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 52ef4ce3-fda0-4d5d-8216-20df0b264b7b MY_SNAPSHOT 00:09:03.389 12:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=1fc150f3-fde4-4ea3-90f7-3b075cdf9479 00:09:03.389 12:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 52ef4ce3-fda0-4d5d-8216-20df0b264b7b 30 00:09:03.649 12:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 1fc150f3-fde4-4ea3-90f7-3b075cdf9479 MY_CLONE 00:09:03.908 12:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=0401a607-65c0-41b0-bcba-ae7805c6841c 00:09:03.908 12:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 0401a607-65c0-41b0-bcba-ae7805c6841c 00:09:04.478 12:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 195110 00:09:12.607 Initializing NVMe Controllers 00:09:12.607 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:12.607 Controller IO queue size 128, less than required. 00:09:12.607 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
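The warning just above comes from spdk_nvme_perf, not the target: perf acts as a pure userspace initiator using the connection string from the trace, so the earlier modprobe nvme-tcp is not what carries this I/O. The invocation, as logged:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18

Core mask 0x18 puts the workers on lcores 3 and 4, which is why the result table that follows reports one row per core. The queue-size note is benign: queue depth 128 equals the target's advertised I/O queue size of 128, so some submissions wait in the NVMe driver instead of on the wire.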
00:09:12.607 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:09:12.607 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:09:12.607 Initialization complete. Launching workers.
00:09:12.607 ========================================================
00:09:12.607 Latency(us)
00:09:12.607 Device Information : IOPS MiB/s Average min max
00:09:12.607 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12471.30 48.72 10269.23 1277.04 60378.87
00:09:12.607 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12342.50 48.21 10371.07 3506.86 52395.93
00:09:12.607 ========================================================
00:09:12.607 Total : 24813.80 96.93 10319.89 1277.04 60378.87
00:09:12.607
00:09:12.607 12:30:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:09:12.867 12:30:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 52ef4ce3-fda0-4d5d-8216-20df0b264b7b
00:09:12.867 12:30:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2506e67f-008b-462c-8512-155e86e0f7b1
00:09:13.126 12:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:09:13.126 12:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:09:13.126 12:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:09:13.126 12:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # nvmfcleanup
00:09:13.126 12:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:09:13.126 12:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:09:13.126 12:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:09:13.126 12:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:13.126 12:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:09:13.126 rmmod nvme_tcp
00:09:13.126 rmmod nvme_fabrics
00:09:13.126 rmmod nvme_keyring
00:09:13.126 12:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:13.126 12:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:09:13.126 12:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:09:13.126 12:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@513 -- # '[' -n 194631 ']'
00:09:13.126 12:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # killprocess 194631
00:09:13.126 12:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 194631 ']'
00:09:13.126 12:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 194631
00:09:13.126 12:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname
00:09:13.126 12:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:13.126 12:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 194631
00:09:13.385 12:30:39
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:13.385 12:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:13.385 12:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 194631' 00:09:13.385 killing process with pid 194631 00:09:13.385 12:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 194631 00:09:13.385 12:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 194631 00:09:13.385 12:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:13.385 12:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:13.385 12:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:13.386 12:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:09:13.386 12:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-save 00:09:13.386 12:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:13.386 12:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-restore 00:09:13.644 12:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:13.644 12:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:13.644 12:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:13.644 12:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:13.644 12:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:15.552 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:15.552 00:09:15.552 real 0m22.058s 00:09:15.552 user 1m3.539s 00:09:15.552 sys 0m7.390s 00:09:15.552 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:15.552 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:15.552 ************************************ 00:09:15.552 END TEST nvmf_lvol 00:09:15.552 ************************************ 00:09:15.552 12:30:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:15.552 12:30:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:15.552 12:30:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:15.552 12:30:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:15.552 ************************************ 00:09:15.552 START TEST nvmf_lvs_grow 00:09:15.552 ************************************ 00:09:15.552 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:15.813 * Looking for test storage... 
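That closes out nvmf_lvol: the snapshot, resize, clone, and inflate all ran against a live, perf-loaded volume, and the teardown above (killprocess, module unload, SPDK-tagged iptables rules stripped via iptables-save | grep -v SPDK_NVMF | iptables-restore, namespace removal inside _remove_spdk_ns, address flush) returned the rig to a clean state. The volume-management core of the test, condensed to its RPC sequence ($rpc_py is scripts/rpc.py; each lvol call prints the UUID the next one consumes):

  $rpc_py bdev_malloc_create 64 512                          # Malloc0: 64 MiB, 512 B blocks
  $rpc_py bdev_malloc_create 64 512                          # Malloc1
  $rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$($rpc_py bdev_lvol_create_lvstore raid0 lvs)
  lvol=$($rpc_py bdev_lvol_create -u "$lvs" lvol 20)         # 20 MiB (LVOL_BDEV_INIT_SIZE)
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # with perf I/O in flight:
  snap=$($rpc_py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
  $rpc_py bdev_lvol_resize "$lvol" 30                        # LVOL_BDEV_FINAL_SIZE
  clone=$($rpc_py bdev_lvol_clone "$snap" MY_CLONE)
  $rpc_py bdev_lvol_inflate "$clone"

Running the metadata operations under load is the point: it exercises races between lvol management and in-flight I/O. The nvmf_lvs_grow suite starting here reuses the same nvmftestinit plumbing, which is why the PCI scan and namespace setup below repeat the earlier run verbatim.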
00:09:15.813 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:15.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.813 --rc genhtml_branch_coverage=1 00:09:15.813 --rc genhtml_function_coverage=1 00:09:15.813 --rc genhtml_legend=1 00:09:15.813 --rc geninfo_all_blocks=1 00:09:15.813 --rc geninfo_unexecuted_blocks=1 00:09:15.813 00:09:15.813 ' 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:15.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.813 --rc genhtml_branch_coverage=1 00:09:15.813 --rc genhtml_function_coverage=1 00:09:15.813 --rc genhtml_legend=1 00:09:15.813 --rc geninfo_all_blocks=1 00:09:15.813 --rc geninfo_unexecuted_blocks=1 00:09:15.813 00:09:15.813 ' 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:15.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.813 --rc genhtml_branch_coverage=1 00:09:15.813 --rc genhtml_function_coverage=1 00:09:15.813 --rc genhtml_legend=1 00:09:15.813 --rc geninfo_all_blocks=1 00:09:15.813 --rc geninfo_unexecuted_blocks=1 00:09:15.813 00:09:15.813 ' 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:15.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.813 --rc genhtml_branch_coverage=1 00:09:15.813 --rc genhtml_function_coverage=1 00:09:15.813 --rc genhtml_legend=1 00:09:15.813 --rc geninfo_all_blocks=1 00:09:15.813 --rc geninfo_unexecuted_blocks=1 00:09:15.813 00:09:15.813 ' 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:15.813 12:30:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:15.813 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:15.814 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:15.814 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:15.814 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:15.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:15.814 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:15.814 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:15.814 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:15.814 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:15.814 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:15.814 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:15.814 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:15.814 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:15.814 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:15.814 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:15.814 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:15.814 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:15.814 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:15.814 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:15.814 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:09:15.814 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:09:15.814 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:09:15.814 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:22.390 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:22.390 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:22.390 Found net devices under 0000:af:00.0: cvl_0_0 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:22.390 Found net devices under 0000:af:00.1: cvl_0_1 00:09:22.390 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:22.391 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:09:22.391 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # is_hw=yes 00:09:22.391 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:09:22.391 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:09:22.391 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:09:22.391 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:22.391 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:22.391 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:22.391 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:22.391 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:22.391 
12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:22.391 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:22.391 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:22.391 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:22.391 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:22.391 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:22.391 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:22.391 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:22.391 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:22.391 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:22.391 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:22.391 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:22.391 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:22.391 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:22.391 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:22.391 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:22.391 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:22.391 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:22.391 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:22.391 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.440 ms 00:09:22.391 00:09:22.391 --- 10.0.0.2 ping statistics --- 00:09:22.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.391 rtt min/avg/max/mdev = 0.440/0.440/0.440/0.000 ms 00:09:22.391 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:22.391 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:22.391 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:09:22.391 00:09:22.391 --- 10.0.0.1 ping statistics --- 00:09:22.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.391 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:09:22.391 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:22.391 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # return 0 00:09:22.391 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:22.391 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:22.391 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:22.391 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:22.391 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:22.391 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:22.391 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:22.391 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:09:22.391 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:22.391 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:22.391 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:22.391 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # nvmfpid=200440 00:09:22.391 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # waitforlisten 200440 00:09:22.391 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:22.391 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 200440 ']' 00:09:22.391 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.391 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:22.391 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.391 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:22.391 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:22.391 [2024-12-16 12:30:47.798373] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
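Setup for nvmf_lvs_grow differs from the lvol run in one visible way: the target is started with core mask 0x1 instead of 0x7, so only a single reactor comes up on core 0, presumably because the grow workflow is RPC-driven and its I/O load comes from a separate bdevperf process set up below rather than from spdk_nvme_perf. The launch line, as the trace shows it, wrapped in the namespace:

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1

-i 0 selects shared-memory id 0 (so 'spdk_trace -s nvmf -i 0' can attach, per the NOTICE lines) and -e 0xFFFF enables the tracepoint group mask those same lines announce.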
00:09:22.391 [2024-12-16 12:30:47.798421] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:22.391 [2024-12-16 12:30:47.870425] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.391 [2024-12-16 12:30:47.909635] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:22.391 [2024-12-16 12:30:47.909675] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:22.391 [2024-12-16 12:30:47.909682] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:22.391 [2024-12-16 12:30:47.909687] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:22.391 [2024-12-16 12:30:47.909693] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:22.391 [2024-12-16 12:30:47.909709] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.391 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:22.391 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:09:22.391 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:22.391 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:22.391 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:22.391 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:22.391 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:22.391 [2024-12-16 12:30:48.196272] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:22.391 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:22.391 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:22.391 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:22.391 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:22.391 ************************************ 00:09:22.391 START TEST lvs_grow_clean 00:09:22.391 ************************************ 00:09:22.391 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:09:22.391 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:22.391 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:22.391 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:22.391 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:22.391 12:30:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:22.391 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:22.391 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:22.391 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:22.391 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:22.651 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:22.651 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:22.651 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=7044b946-310e-4f6e-b1df-00c24c4e0314 00:09:22.651 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7044b946-310e-4f6e-b1df-00c24c4e0314 00:09:22.651 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:22.910 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:22.910 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:22.910 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7044b946-310e-4f6e-b1df-00c24c4e0314 lvol 150 00:09:23.169 12:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=a7d91039-b1fa-44be-bf90-2b678557b03b 00:09:23.169 12:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:23.169 12:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:23.169 [2024-12-16 12:30:49.226945] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:23.169 [2024-12-16 12:30:49.226991] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:23.169 true 00:09:23.428 12:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
7044b946-310e-4f6e-b1df-00c24c4e0314 00:09:23.428 12:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:23.428 12:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:23.428 12:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:23.687 12:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a7d91039-b1fa-44be-bf90-2b678557b03b 00:09:23.947 12:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:23.947 [2024-12-16 12:30:49.949124] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:23.947 12:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:24.206 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=200934 00:09:24.206 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:24.206 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:24.206 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 200934 /var/tmp/bdevperf.sock 00:09:24.206 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 200934 ']' 00:09:24.206 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:24.206 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:24.206 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:24.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:24.206 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:24.206 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:24.206 [2024-12-16 12:30:50.221122] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
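Worth noting in the sequence above: the 200 MiB backing file split into 4 MiB clusters (--cluster-sz 4194304) gives 50 clusters, and the lvstore reports total_data_clusters=49, i.e. one cluster is consumed by blobstore metadata; the same arithmetic yields 99 after the later grow to 400 MiB. A condensed sketch of the storage-side steps, again assuming an SPDK checkout as the working directory, with /tmp/aio_file as a hypothetical stand-in for the test's own backing file (UUIDs differ every run):

    truncate -s 200M /tmp/aio_file                     # 200 MiB sparse backing file
    ./scripts/rpc.py bdev_aio_create /tmp/aio_file aio_bdev 4096
    lvs=$(./scripts/rpc.py bdev_lvol_create_lvstore \
            --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    ./scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" \
        | jq -r '.[0].total_data_clusters'             # 49, as asserted above
    lvol=$(./scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 150)   # 150 MiB volume
    truncate -s 400M /tmp/aio_file                     # grow the file ...
    ./scripts/rpc.py bdev_aio_rescan aio_bdev          # ... 51200 -> 102400 blocks, as logged
    # total_data_clusters stays 49 until bdev_lvol_grow_lvstore runs mid-I/O later
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420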
00:09:24.206 [2024-12-16 12:30:50.221172] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid200934 ] 00:09:24.465 [2024-12-16 12:30:50.290297] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.465 [2024-12-16 12:30:50.328877] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:24.465 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:24.465 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:09:24.465 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:24.724 Nvme0n1 00:09:24.724 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:24.983 [ 00:09:24.983 { 00:09:24.983 "name": "Nvme0n1", 00:09:24.983 "aliases": [ 00:09:24.984 "a7d91039-b1fa-44be-bf90-2b678557b03b" 00:09:24.984 ], 00:09:24.984 "product_name": "NVMe disk", 00:09:24.984 "block_size": 4096, 00:09:24.984 "num_blocks": 38912, 00:09:24.984 "uuid": "a7d91039-b1fa-44be-bf90-2b678557b03b", 00:09:24.984 "numa_id": 1, 00:09:24.984 "assigned_rate_limits": { 00:09:24.984 "rw_ios_per_sec": 0, 00:09:24.984 "rw_mbytes_per_sec": 0, 00:09:24.984 "r_mbytes_per_sec": 0, 00:09:24.984 "w_mbytes_per_sec": 0 00:09:24.984 }, 00:09:24.984 "claimed": false, 00:09:24.984 "zoned": false, 00:09:24.984 "supported_io_types": { 00:09:24.984 "read": true, 00:09:24.984 "write": true, 00:09:24.984 "unmap": true, 00:09:24.984 "flush": true, 00:09:24.984 "reset": true, 00:09:24.984 "nvme_admin": true, 00:09:24.984 "nvme_io": true, 00:09:24.984 "nvme_io_md": false, 00:09:24.984 "write_zeroes": true, 00:09:24.984 "zcopy": false, 00:09:24.984 "get_zone_info": false, 00:09:24.984 "zone_management": false, 00:09:24.984 "zone_append": false, 00:09:24.984 "compare": true, 00:09:24.984 "compare_and_write": true, 00:09:24.984 "abort": true, 00:09:24.984 "seek_hole": false, 00:09:24.984 "seek_data": false, 00:09:24.984 "copy": true, 00:09:24.984 "nvme_iov_md": false 00:09:24.984 }, 00:09:24.984 "memory_domains": [ 00:09:24.984 { 00:09:24.984 "dma_device_id": "system", 00:09:24.984 "dma_device_type": 1 00:09:24.984 } 00:09:24.984 ], 00:09:24.984 "driver_specific": { 00:09:24.984 "nvme": [ 00:09:24.984 { 00:09:24.984 "trid": { 00:09:24.984 "trtype": "TCP", 00:09:24.984 "adrfam": "IPv4", 00:09:24.984 "traddr": "10.0.0.2", 00:09:24.984 "trsvcid": "4420", 00:09:24.984 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:24.984 }, 00:09:24.984 "ctrlr_data": { 00:09:24.984 "cntlid": 1, 00:09:24.984 "vendor_id": "0x8086", 00:09:24.984 "model_number": "SPDK bdev Controller", 00:09:24.984 "serial_number": "SPDK0", 00:09:24.984 "firmware_revision": "24.09.1", 00:09:24.984 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:24.984 "oacs": { 00:09:24.984 "security": 0, 00:09:24.984 "format": 0, 00:09:24.984 "firmware": 0, 00:09:24.984 "ns_manage": 0 00:09:24.984 }, 00:09:24.984 "multi_ctrlr": true, 00:09:24.984 
"ana_reporting": false 00:09:24.984 }, 00:09:24.984 "vs": { 00:09:24.984 "nvme_version": "1.3" 00:09:24.984 }, 00:09:24.984 "ns_data": { 00:09:24.984 "id": 1, 00:09:24.984 "can_share": true 00:09:24.984 } 00:09:24.984 } 00:09:24.984 ], 00:09:24.984 "mp_policy": "active_passive" 00:09:24.984 } 00:09:24.984 } 00:09:24.984 ] 00:09:24.984 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=200945 00:09:24.984 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:24.984 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:24.984 Running I/O for 10 seconds... 00:09:26.363 Latency(us) 00:09:26.363 [2024-12-16T11:30:52.430Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:26.363 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:26.363 Nvme0n1 : 1.00 23118.00 90.30 0.00 0.00 0.00 0.00 0.00 00:09:26.363 [2024-12-16T11:30:52.430Z] =================================================================================================================== 00:09:26.363 [2024-12-16T11:30:52.430Z] Total : 23118.00 90.30 0.00 0.00 0.00 0.00 0.00 00:09:26.363 00:09:26.932 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7044b946-310e-4f6e-b1df-00c24c4e0314 00:09:27.191 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:27.191 Nvme0n1 : 2.00 23300.00 91.02 0.00 0.00 0.00 0.00 0.00 00:09:27.191 [2024-12-16T11:30:53.258Z] =================================================================================================================== 00:09:27.191 [2024-12-16T11:30:53.258Z] Total : 23300.00 91.02 0.00 0.00 0.00 0.00 0.00 00:09:27.191 00:09:27.191 true 00:09:27.191 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7044b946-310e-4f6e-b1df-00c24c4e0314 00:09:27.191 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:27.450 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:27.450 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:27.450 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 200945 00:09:28.018 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:28.018 Nvme0n1 : 3.00 23385.67 91.35 0.00 0.00 0.00 0.00 0.00 00:09:28.018 [2024-12-16T11:30:54.085Z] =================================================================================================================== 00:09:28.018 [2024-12-16T11:30:54.085Z] Total : 23385.67 91.35 0.00 0.00 0.00 0.00 0.00 00:09:28.018 00:09:29.397 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:29.397 Nvme0n1 : 4.00 23481.00 91.72 0.00 0.00 0.00 0.00 0.00 00:09:29.397 [2024-12-16T11:30:55.464Z] 
=================================================================================================================== 00:09:29.397 [2024-12-16T11:30:55.464Z] Total : 23481.00 91.72 0.00 0.00 0.00 0.00 0.00 00:09:29.397 00:09:30.334 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:30.335 Nvme0n1 : 5.00 23523.60 91.89 0.00 0.00 0.00 0.00 0.00 00:09:30.335 [2024-12-16T11:30:56.402Z] =================================================================================================================== 00:09:30.335 [2024-12-16T11:30:56.402Z] Total : 23523.60 91.89 0.00 0.00 0.00 0.00 0.00 00:09:30.335 00:09:31.272 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:31.272 Nvme0n1 : 6.00 23557.00 92.02 0.00 0.00 0.00 0.00 0.00 00:09:31.272 [2024-12-16T11:30:57.339Z] =================================================================================================================== 00:09:31.272 [2024-12-16T11:30:57.339Z] Total : 23557.00 92.02 0.00 0.00 0.00 0.00 0.00 00:09:31.272 00:09:32.209 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:32.209 Nvme0n1 : 7.00 23592.29 92.16 0.00 0.00 0.00 0.00 0.00 00:09:32.209 [2024-12-16T11:30:58.276Z] =================================================================================================================== 00:09:32.209 [2024-12-16T11:30:58.276Z] Total : 23592.29 92.16 0.00 0.00 0.00 0.00 0.00 00:09:32.209 00:09:33.147 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:33.147 Nvme0n1 : 8.00 23613.75 92.24 0.00 0.00 0.00 0.00 0.00 00:09:33.147 [2024-12-16T11:30:59.214Z] =================================================================================================================== 00:09:33.147 [2024-12-16T11:30:59.214Z] Total : 23613.75 92.24 0.00 0.00 0.00 0.00 0.00 00:09:33.147 00:09:34.087 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:34.087 Nvme0n1 : 9.00 23619.44 92.26 0.00 0.00 0.00 0.00 0.00 00:09:34.087 [2024-12-16T11:31:00.154Z] =================================================================================================================== 00:09:34.087 [2024-12-16T11:31:00.154Z] Total : 23619.44 92.26 0.00 0.00 0.00 0.00 0.00 00:09:34.087 00:09:35.024 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:35.024 Nvme0n1 : 10.00 23612.30 92.24 0.00 0.00 0.00 0.00 0.00 00:09:35.024 [2024-12-16T11:31:01.091Z] =================================================================================================================== 00:09:35.024 [2024-12-16T11:31:01.091Z] Total : 23612.30 92.24 0.00 0.00 0.00 0.00 0.00 00:09:35.024 00:09:35.024 00:09:35.024 Latency(us) 00:09:35.024 [2024-12-16T11:31:01.091Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:35.024 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:35.024 Nvme0n1 : 10.00 23616.86 92.25 0.00 0.00 5416.82 2574.63 12670.29 00:09:35.024 [2024-12-16T11:31:01.091Z] =================================================================================================================== 00:09:35.024 [2024-12-16T11:31:01.091Z] Total : 23616.86 92.25 0.00 0.00 5416.82 2574.63 12670.29 00:09:35.024 { 00:09:35.024 "results": [ 00:09:35.024 { 00:09:35.024 "job": "Nvme0n1", 00:09:35.024 "core_mask": "0x2", 00:09:35.024 "workload": "randwrite", 00:09:35.024 "status": "finished", 00:09:35.024 "queue_depth": 128, 00:09:35.024 "io_size": 4096, 00:09:35.024 
"runtime": 10.003487, 00:09:35.024 "iops": 23616.864799244504, 00:09:35.024 "mibps": 92.25337812204884, 00:09:35.024 "io_failed": 0, 00:09:35.025 "io_timeout": 0, 00:09:35.025 "avg_latency_us": 5416.82158282424, 00:09:35.025 "min_latency_us": 2574.6285714285714, 00:09:35.025 "max_latency_us": 12670.293333333333 00:09:35.025 } 00:09:35.025 ], 00:09:35.025 "core_count": 1 00:09:35.025 } 00:09:35.025 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 200934 00:09:35.025 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 200934 ']' 00:09:35.025 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 200934 00:09:35.284 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:09:35.284 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:35.284 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 200934 00:09:35.284 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:35.284 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:35.284 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 200934' 00:09:35.284 killing process with pid 200934 00:09:35.284 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 200934 00:09:35.284 Received shutdown signal, test time was about 10.000000 seconds 00:09:35.284 00:09:35.284 Latency(us) 00:09:35.284 [2024-12-16T11:31:01.351Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:35.284 [2024-12-16T11:31:01.351Z] =================================================================================================================== 00:09:35.284 [2024-12-16T11:31:01.351Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:35.284 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 200934 00:09:35.284 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:35.543 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:35.802 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7044b946-310e-4f6e-b1df-00c24c4e0314 00:09:35.802 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:36.062 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:36.062 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:36.062 12:31:01 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:36.062 [2024-12-16 12:31:02.078815] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:36.321 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7044b946-310e-4f6e-b1df-00c24c4e0314 00:09:36.321 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:09:36.321 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7044b946-310e-4f6e-b1df-00c24c4e0314 00:09:36.321 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:36.321 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:36.321 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:36.321 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:36.321 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:36.321 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:36.321 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:36.321 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:36.321 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7044b946-310e-4f6e-b1df-00c24c4e0314 00:09:36.321 request: 00:09:36.321 { 00:09:36.321 "uuid": "7044b946-310e-4f6e-b1df-00c24c4e0314", 00:09:36.321 "method": "bdev_lvol_get_lvstores", 00:09:36.321 "req_id": 1 00:09:36.321 } 00:09:36.321 Got JSON-RPC error response 00:09:36.321 response: 00:09:36.321 { 00:09:36.321 "code": -19, 00:09:36.321 "message": "No such device" 00:09:36.321 } 00:09:36.321 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:09:36.321 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:36.321 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:36.321 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:36.321 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:36.581 aio_bdev 00:09:36.581 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a7d91039-b1fa-44be-bf90-2b678557b03b 00:09:36.581 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=a7d91039-b1fa-44be-bf90-2b678557b03b 00:09:36.581 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:36.581 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:09:36.581 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:36.581 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:36.581 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:36.840 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a7d91039-b1fa-44be-bf90-2b678557b03b -t 2000 00:09:36.840 [ 00:09:36.840 { 00:09:36.840 "name": "a7d91039-b1fa-44be-bf90-2b678557b03b", 00:09:36.840 "aliases": [ 00:09:36.840 "lvs/lvol" 00:09:36.840 ], 00:09:36.840 "product_name": "Logical Volume", 00:09:36.840 "block_size": 4096, 00:09:36.840 "num_blocks": 38912, 00:09:36.840 "uuid": "a7d91039-b1fa-44be-bf90-2b678557b03b", 00:09:36.840 "assigned_rate_limits": { 00:09:36.840 "rw_ios_per_sec": 0, 00:09:36.840 "rw_mbytes_per_sec": 0, 00:09:36.840 "r_mbytes_per_sec": 0, 00:09:36.840 "w_mbytes_per_sec": 0 00:09:36.840 }, 00:09:36.840 "claimed": false, 00:09:36.840 "zoned": false, 00:09:36.840 "supported_io_types": { 00:09:36.840 "read": true, 00:09:36.840 "write": true, 00:09:36.840 "unmap": true, 00:09:36.840 "flush": false, 00:09:36.840 "reset": true, 00:09:36.840 "nvme_admin": false, 00:09:36.840 "nvme_io": false, 00:09:36.840 "nvme_io_md": false, 00:09:36.840 "write_zeroes": true, 00:09:36.840 "zcopy": false, 00:09:36.840 "get_zone_info": false, 00:09:36.840 "zone_management": false, 00:09:36.840 "zone_append": false, 00:09:36.840 "compare": false, 00:09:36.840 "compare_and_write": false, 00:09:36.840 "abort": false, 00:09:36.840 "seek_hole": true, 00:09:36.840 "seek_data": true, 00:09:36.840 "copy": false, 00:09:36.840 "nvme_iov_md": false 00:09:36.840 }, 00:09:36.840 "driver_specific": { 00:09:36.840 "lvol": { 00:09:36.840 "lvol_store_uuid": "7044b946-310e-4f6e-b1df-00c24c4e0314", 00:09:36.840 "base_bdev": "aio_bdev", 00:09:36.840 "thin_provision": false, 00:09:36.840 "num_allocated_clusters": 38, 00:09:36.840 "snapshot": false, 00:09:36.840 "clone": false, 00:09:36.840 "esnap_clone": false 00:09:36.840 } 00:09:36.840 } 00:09:36.840 } 00:09:36.840 ] 00:09:36.840 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:09:36.840 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7044b946-310e-4f6e-b1df-00c24c4e0314 00:09:36.840 
12:31:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:37.100 12:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:37.100 12:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7044b946-310e-4f6e-b1df-00c24c4e0314 00:09:37.100 12:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:37.359 12:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:37.359 12:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a7d91039-b1fa-44be-bf90-2b678557b03b 00:09:37.618 12:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7044b946-310e-4f6e-b1df-00c24c4e0314 00:09:37.618 12:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:37.877 12:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:37.877 00:09:37.877 real 0m15.615s 00:09:37.877 user 0m15.181s 00:09:37.877 sys 0m1.473s 00:09:37.877 12:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:37.877 12:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:37.877 ************************************ 00:09:37.877 END TEST lvs_grow_clean 00:09:37.877 ************************************ 00:09:37.877 12:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:37.877 12:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:37.877 12:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:37.877 12:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:38.136 ************************************ 00:09:38.136 START TEST lvs_grow_dirty 00:09:38.136 ************************************ 00:09:38.136 12:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:09:38.136 12:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:38.136 12:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:38.136 12:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:38.136 12:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:38.136 12:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:38.136 12:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:38.136 12:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:38.136 12:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:38.136 12:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:38.136 12:31:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:38.136 12:31:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:38.396 12:31:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=dc5b35c8-960e-4d85-bb77-9b2ccb0684e8 00:09:38.396 12:31:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc5b35c8-960e-4d85-bb77-9b2ccb0684e8 00:09:38.396 12:31:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:38.656 12:31:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:38.656 12:31:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:38.656 12:31:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u dc5b35c8-960e-4d85-bb77-9b2ccb0684e8 lvol 150 00:09:38.656 12:31:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=0ff985fc-b9a4-46b6-89c7-a367e117ab07 00:09:38.915 12:31:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:38.916 12:31:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:38.916 [2024-12-16 12:31:04.881947] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:38.916 [2024-12-16 12:31:04.881992] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:38.916 true 00:09:38.916 12:31:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc5b35c8-960e-4d85-bb77-9b2ccb0684e8 00:09:38.916 12:31:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:39.175 12:31:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:39.175 12:31:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:39.434 12:31:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0ff985fc-b9a4-46b6-89c7-a367e117ab07 00:09:39.434 12:31:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:39.693 [2024-12-16 12:31:05.624160] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:39.693 12:31:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:39.952 12:31:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:39.952 12:31:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=203508 00:09:39.952 12:31:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:39.952 12:31:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 203508 /var/tmp/bdevperf.sock 00:09:39.952 12:31:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 203508 ']' 00:09:39.952 12:31:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:39.952 12:31:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:39.952 12:31:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:39.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:39.952 12:31:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:39.952 12:31:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:39.952 [2024-12-16 12:31:05.850977] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
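The initiator half is the same in the clean run above and in this dirty run: bdevperf sits in a second process on core 1 (-m 0x2), started with -z so it idles until driven over its private RPC socket, and the lvstore is grown while the 10-second randwrite run is still in flight (visible in both runs as total_data_clusters jumping from 49 to 99 around the two-second mark). A sketch under the same assumptions as above:

    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &    # -z: wait for RPC before running
    bdevperf_pid=$!
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    run_test_pid=$!
    sleep 2                                                   # let I/O ramp up first
    ./scripts/rpc.py bdev_lvol_grow_lvstore -u "$lvs"         # grow under live I/O
    ./scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" \
        | jq -r '.[0].total_data_clusters'                    # now 99
    wait "$run_test_pid"                                      # let the 10 s run finish
    kill "$bdevperf_pid"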
00:09:39.952 [2024-12-16 12:31:05.851020] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid203508 ] 00:09:39.952 [2024-12-16 12:31:05.918929] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.952 [2024-12-16 12:31:05.957144] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:40.212 12:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:40.212 12:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:40.212 12:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:40.471 Nvme0n1 00:09:40.471 12:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:40.471 [ 00:09:40.471 { 00:09:40.471 "name": "Nvme0n1", 00:09:40.471 "aliases": [ 00:09:40.471 "0ff985fc-b9a4-46b6-89c7-a367e117ab07" 00:09:40.471 ], 00:09:40.471 "product_name": "NVMe disk", 00:09:40.471 "block_size": 4096, 00:09:40.471 "num_blocks": 38912, 00:09:40.471 "uuid": "0ff985fc-b9a4-46b6-89c7-a367e117ab07", 00:09:40.471 "numa_id": 1, 00:09:40.471 "assigned_rate_limits": { 00:09:40.471 "rw_ios_per_sec": 0, 00:09:40.471 "rw_mbytes_per_sec": 0, 00:09:40.471 "r_mbytes_per_sec": 0, 00:09:40.471 "w_mbytes_per_sec": 0 00:09:40.471 }, 00:09:40.471 "claimed": false, 00:09:40.471 "zoned": false, 00:09:40.471 "supported_io_types": { 00:09:40.471 "read": true, 00:09:40.471 "write": true, 00:09:40.471 "unmap": true, 00:09:40.471 "flush": true, 00:09:40.471 "reset": true, 00:09:40.471 "nvme_admin": true, 00:09:40.471 "nvme_io": true, 00:09:40.471 "nvme_io_md": false, 00:09:40.471 "write_zeroes": true, 00:09:40.471 "zcopy": false, 00:09:40.471 "get_zone_info": false, 00:09:40.471 "zone_management": false, 00:09:40.471 "zone_append": false, 00:09:40.471 "compare": true, 00:09:40.471 "compare_and_write": true, 00:09:40.471 "abort": true, 00:09:40.471 "seek_hole": false, 00:09:40.471 "seek_data": false, 00:09:40.471 "copy": true, 00:09:40.471 "nvme_iov_md": false 00:09:40.471 }, 00:09:40.471 "memory_domains": [ 00:09:40.471 { 00:09:40.471 "dma_device_id": "system", 00:09:40.471 "dma_device_type": 1 00:09:40.471 } 00:09:40.471 ], 00:09:40.471 "driver_specific": { 00:09:40.471 "nvme": [ 00:09:40.471 { 00:09:40.471 "trid": { 00:09:40.471 "trtype": "TCP", 00:09:40.471 "adrfam": "IPv4", 00:09:40.471 "traddr": "10.0.0.2", 00:09:40.471 "trsvcid": "4420", 00:09:40.471 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:40.471 }, 00:09:40.471 "ctrlr_data": { 00:09:40.471 "cntlid": 1, 00:09:40.472 "vendor_id": "0x8086", 00:09:40.472 "model_number": "SPDK bdev Controller", 00:09:40.472 "serial_number": "SPDK0", 00:09:40.472 "firmware_revision": "24.09.1", 00:09:40.472 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:40.472 "oacs": { 00:09:40.472 "security": 0, 00:09:40.472 "format": 0, 00:09:40.472 "firmware": 0, 00:09:40.472 "ns_manage": 0 00:09:40.472 }, 00:09:40.472 "multi_ctrlr": true, 00:09:40.472 
"ana_reporting": false 00:09:40.472 }, 00:09:40.472 "vs": { 00:09:40.472 "nvme_version": "1.3" 00:09:40.472 }, 00:09:40.472 "ns_data": { 00:09:40.472 "id": 1, 00:09:40.472 "can_share": true 00:09:40.472 } 00:09:40.472 } 00:09:40.472 ], 00:09:40.472 "mp_policy": "active_passive" 00:09:40.472 } 00:09:40.472 } 00:09:40.472 ] 00:09:40.472 12:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:40.472 12:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=203707 00:09:40.472 12:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:40.731 Running I/O for 10 seconds... 00:09:41.668 Latency(us) 00:09:41.668 [2024-12-16T11:31:07.735Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:41.668 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:41.668 Nvme0n1 : 1.00 23435.00 91.54 0.00 0.00 0.00 0.00 0.00 00:09:41.668 [2024-12-16T11:31:07.735Z] =================================================================================================================== 00:09:41.668 [2024-12-16T11:31:07.735Z] Total : 23435.00 91.54 0.00 0.00 0.00 0.00 0.00 00:09:41.668 00:09:42.605 12:31:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u dc5b35c8-960e-4d85-bb77-9b2ccb0684e8 00:09:42.605 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:42.605 Nvme0n1 : 2.00 23504.50 91.81 0.00 0.00 0.00 0.00 0.00 00:09:42.605 [2024-12-16T11:31:08.672Z] =================================================================================================================== 00:09:42.605 [2024-12-16T11:31:08.672Z] Total : 23504.50 91.81 0.00 0.00 0.00 0.00 0.00 00:09:42.605 00:09:42.865 true 00:09:42.865 12:31:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc5b35c8-960e-4d85-bb77-9b2ccb0684e8 00:09:42.865 12:31:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:42.865 12:31:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:42.865 12:31:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:42.865 12:31:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 203707 00:09:43.803 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:43.803 Nvme0n1 : 3.00 23526.33 91.90 0.00 0.00 0.00 0.00 0.00 00:09:43.803 [2024-12-16T11:31:09.870Z] =================================================================================================================== 00:09:43.803 [2024-12-16T11:31:09.870Z] Total : 23526.33 91.90 0.00 0.00 0.00 0.00 0.00 00:09:43.803 00:09:44.740 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:44.740 Nvme0n1 : 4.00 23493.25 91.77 0.00 0.00 0.00 0.00 0.00 00:09:44.740 [2024-12-16T11:31:10.807Z] 
=================================================================================================================== 00:09:44.740 [2024-12-16T11:31:10.807Z] Total : 23493.25 91.77 0.00 0.00 0.00 0.00 0.00 00:09:44.740 00:09:45.678 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:45.678 Nvme0n1 : 5.00 23584.00 92.12 0.00 0.00 0.00 0.00 0.00 00:09:45.678 [2024-12-16T11:31:11.745Z] =================================================================================================================== 00:09:45.678 [2024-12-16T11:31:11.745Z] Total : 23584.00 92.12 0.00 0.00 0.00 0.00 0.00 00:09:45.678 00:09:46.616 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:46.616 Nvme0n1 : 6.00 23617.33 92.26 0.00 0.00 0.00 0.00 0.00 00:09:46.616 [2024-12-16T11:31:12.683Z] =================================================================================================================== 00:09:46.616 [2024-12-16T11:31:12.683Z] Total : 23617.33 92.26 0.00 0.00 0.00 0.00 0.00 00:09:46.616 00:09:47.556 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:47.556 Nvme0n1 : 7.00 23654.29 92.40 0.00 0.00 0.00 0.00 0.00 00:09:47.556 [2024-12-16T11:31:13.623Z] =================================================================================================================== 00:09:47.556 [2024-12-16T11:31:13.623Z] Total : 23654.29 92.40 0.00 0.00 0.00 0.00 0.00 00:09:47.556 00:09:48.936 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:48.936 Nvme0n1 : 8.00 23682.38 92.51 0.00 0.00 0.00 0.00 0.00 00:09:48.936 [2024-12-16T11:31:15.003Z] =================================================================================================================== 00:09:48.936 [2024-12-16T11:31:15.003Z] Total : 23682.38 92.51 0.00 0.00 0.00 0.00 0.00 00:09:48.936 00:09:49.871 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:49.871 Nvme0n1 : 9.00 23711.67 92.62 0.00 0.00 0.00 0.00 0.00 00:09:49.871 [2024-12-16T11:31:15.938Z] =================================================================================================================== 00:09:49.871 [2024-12-16T11:31:15.938Z] Total : 23711.67 92.62 0.00 0.00 0.00 0.00 0.00 00:09:49.871 00:09:50.811 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:50.811 Nvme0n1 : 10.00 23730.90 92.70 0.00 0.00 0.00 0.00 0.00 00:09:50.811 [2024-12-16T11:31:16.878Z] =================================================================================================================== 00:09:50.811 [2024-12-16T11:31:16.878Z] Total : 23730.90 92.70 0.00 0.00 0.00 0.00 0.00 00:09:50.811 00:09:50.811 00:09:50.811 Latency(us) 00:09:50.811 [2024-12-16T11:31:16.878Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:50.811 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:50.811 Nvme0n1 : 10.00 23734.50 92.71 0.00 0.00 5389.90 3136.37 12233.39 00:09:50.811 [2024-12-16T11:31:16.878Z] =================================================================================================================== 00:09:50.811 [2024-12-16T11:31:16.878Z] Total : 23734.50 92.71 0.00 0.00 5389.90 3136.37 12233.39 00:09:50.811 { 00:09:50.811 "results": [ 00:09:50.811 { 00:09:50.811 "job": "Nvme0n1", 00:09:50.811 "core_mask": "0x2", 00:09:50.811 "workload": "randwrite", 00:09:50.811 "status": "finished", 00:09:50.811 "queue_depth": 128, 00:09:50.811 "io_size": 4096, 00:09:50.811 
"runtime": 10.003876, 00:09:50.811 "iops": 23734.500507603254, 00:09:50.811 "mibps": 92.71289260782521, 00:09:50.811 "io_failed": 0, 00:09:50.811 "io_timeout": 0, 00:09:50.811 "avg_latency_us": 5389.897521760659, 00:09:50.812 "min_latency_us": 3136.365714285714, 00:09:50.812 "max_latency_us": 12233.386666666667 00:09:50.812 } 00:09:50.812 ], 00:09:50.812 "core_count": 1 00:09:50.812 } 00:09:50.812 12:31:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 203508 00:09:50.812 12:31:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 203508 ']' 00:09:50.812 12:31:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 203508 00:09:50.812 12:31:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:09:50.812 12:31:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:50.812 12:31:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 203508 00:09:50.812 12:31:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:50.812 12:31:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:50.812 12:31:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 203508' 00:09:50.812 killing process with pid 203508 00:09:50.812 12:31:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 203508 00:09:50.812 Received shutdown signal, test time was about 10.000000 seconds 00:09:50.812 00:09:50.812 Latency(us) 00:09:50.812 [2024-12-16T11:31:16.879Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:50.812 [2024-12-16T11:31:16.879Z] =================================================================================================================== 00:09:50.812 [2024-12-16T11:31:16.879Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:50.812 12:31:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 203508 00:09:50.812 12:31:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:51.071 12:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:51.331 12:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc5b35c8-960e-4d85-bb77-9b2ccb0684e8 00:09:51.331 12:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:51.591 12:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:51.591 12:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:51.591 12:31:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 200440 00:09:51.591 12:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 200440 00:09:51.591 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 200440 Killed "${NVMF_APP[@]}" "$@" 00:09:51.591 12:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:51.591 12:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:51.591 12:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:51.591 12:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:51.591 12:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:51.591 12:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # nvmfpid=205540 00:09:51.591 12:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # waitforlisten 205540 00:09:51.591 12:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:51.591 12:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 205540 ']' 00:09:51.591 12:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.591 12:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:51.591 12:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.591 12:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:51.591 12:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:51.591 [2024-12-16 12:31:17.544036] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:51.591 [2024-12-16 12:31:17.544083] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:51.591 [2024-12-16 12:31:17.615940] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.591 [2024-12-16 12:31:17.654494] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:51.591 [2024-12-16 12:31:17.654534] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:51.591 [2024-12-16 12:31:17.654541] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:51.591 [2024-12-16 12:31:17.654547] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
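This is the step that gives the dirty variant its name: the first target process is killed with SIGKILL, so the lvstore is never cleanly unloaded, and a fresh target must recover it from disk. Re-creating the AIO bdev below triggers blobstore recovery (the "Performing recovery on blobstore" / "Recover: blob" notices), after which the test asserts that the grown geometry survived: 99 total clusters, with free_clusters=61 matching the 38 clusters allocated to the 150 MiB lvol (num_allocated_clusters: 38 in the lvol JSON). A sketch under the same assumptions as before:

    kill -9 "$nvmfpid"                                   # leave the lvstore dirty on disk
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &     # fresh target process
    nvmfpid=$!
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
    ./scripts/rpc.py bdev_aio_create /tmp/aio_file aio_bdev 4096   # blobstore recovery runs here
    ./scripts/rpc.py bdev_wait_for_examine                         # lvol bdev reappears via examine
    ./scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" \
        | jq -r '.[0].total_data_clusters, .[0].free_clusters'     # expect 99 and 61

The same negative check as in the clean case then follows below: deleting aio_bdev out from under the open lvstore makes bdev_lvol_get_lvstores fail with -19 ("No such device"), after which the bdev is created once more.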
00:09:51.591 [2024-12-16 12:31:17.654552] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:51.591 [2024-12-16 12:31:17.654572] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.851 12:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:51.851 12:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:51.851 12:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:51.851 12:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:51.851 12:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:51.851 12:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:51.851 12:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:52.110 [2024-12-16 12:31:17.949979] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:52.110 [2024-12-16 12:31:17.950057] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:52.110 [2024-12-16 12:31:17.950082] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:52.110 12:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:52.110 12:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 0ff985fc-b9a4-46b6-89c7-a367e117ab07 00:09:52.110 12:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=0ff985fc-b9a4-46b6-89c7-a367e117ab07 00:09:52.110 12:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:52.110 12:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:52.110 12:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:52.110 12:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:52.110 12:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:52.110 12:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0ff985fc-b9a4-46b6-89c7-a367e117ab07 -t 2000 00:09:52.370 [ 00:09:52.370 { 00:09:52.370 "name": "0ff985fc-b9a4-46b6-89c7-a367e117ab07", 00:09:52.370 "aliases": [ 00:09:52.370 "lvs/lvol" 00:09:52.370 ], 00:09:52.370 "product_name": "Logical Volume", 00:09:52.370 "block_size": 4096, 00:09:52.370 "num_blocks": 38912, 00:09:52.370 "uuid": "0ff985fc-b9a4-46b6-89c7-a367e117ab07", 00:09:52.370 "assigned_rate_limits": { 00:09:52.370 "rw_ios_per_sec": 0, 00:09:52.370 "rw_mbytes_per_sec": 0, 
00:09:52.370 "r_mbytes_per_sec": 0, 00:09:52.370 "w_mbytes_per_sec": 0 00:09:52.370 }, 00:09:52.370 "claimed": false, 00:09:52.370 "zoned": false, 00:09:52.370 "supported_io_types": { 00:09:52.370 "read": true, 00:09:52.370 "write": true, 00:09:52.370 "unmap": true, 00:09:52.370 "flush": false, 00:09:52.370 "reset": true, 00:09:52.370 "nvme_admin": false, 00:09:52.370 "nvme_io": false, 00:09:52.370 "nvme_io_md": false, 00:09:52.370 "write_zeroes": true, 00:09:52.370 "zcopy": false, 00:09:52.370 "get_zone_info": false, 00:09:52.370 "zone_management": false, 00:09:52.370 "zone_append": false, 00:09:52.370 "compare": false, 00:09:52.370 "compare_and_write": false, 00:09:52.370 "abort": false, 00:09:52.370 "seek_hole": true, 00:09:52.370 "seek_data": true, 00:09:52.370 "copy": false, 00:09:52.370 "nvme_iov_md": false 00:09:52.370 }, 00:09:52.370 "driver_specific": { 00:09:52.370 "lvol": { 00:09:52.370 "lvol_store_uuid": "dc5b35c8-960e-4d85-bb77-9b2ccb0684e8", 00:09:52.370 "base_bdev": "aio_bdev", 00:09:52.370 "thin_provision": false, 00:09:52.370 "num_allocated_clusters": 38, 00:09:52.370 "snapshot": false, 00:09:52.370 "clone": false, 00:09:52.370 "esnap_clone": false 00:09:52.370 } 00:09:52.370 } 00:09:52.370 } 00:09:52.370 ] 00:09:52.370 12:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:52.370 12:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc5b35c8-960e-4d85-bb77-9b2ccb0684e8 00:09:52.370 12:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:52.629 12:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:52.629 12:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc5b35c8-960e-4d85-bb77-9b2ccb0684e8 00:09:52.629 12:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:52.889 12:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:52.889 12:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:52.889 [2024-12-16 12:31:18.870752] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:52.889 12:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc5b35c8-960e-4d85-bb77-9b2ccb0684e8 00:09:52.889 12:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:09:52.889 12:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc5b35c8-960e-4d85-bb77-9b2ccb0684e8 00:09:52.889 12:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:52.889 12:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:52.889 12:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:52.889 12:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:52.889 12:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:52.889 12:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:52.889 12:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:52.889 12:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:52.889 12:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc5b35c8-960e-4d85-bb77-9b2ccb0684e8 00:09:53.149 request: 00:09:53.149 { 00:09:53.149 "uuid": "dc5b35c8-960e-4d85-bb77-9b2ccb0684e8", 00:09:53.149 "method": "bdev_lvol_get_lvstores", 00:09:53.149 "req_id": 1 00:09:53.149 } 00:09:53.149 Got JSON-RPC error response 00:09:53.149 response: 00:09:53.149 { 00:09:53.149 "code": -19, 00:09:53.149 "message": "No such device" 00:09:53.149 } 00:09:53.149 12:31:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:09:53.149 12:31:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:53.149 12:31:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:53.149 12:31:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:53.149 12:31:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:53.408 aio_bdev 00:09:53.408 12:31:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0ff985fc-b9a4-46b6-89c7-a367e117ab07 00:09:53.408 12:31:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=0ff985fc-b9a4-46b6-89c7-a367e117ab07 00:09:53.408 12:31:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:53.408 12:31:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:53.408 12:31:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:53.408 12:31:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:53.408 12:31:19 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:53.668 12:31:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0ff985fc-b9a4-46b6-89c7-a367e117ab07 -t 2000 00:09:53.668 [ 00:09:53.668 { 00:09:53.668 "name": "0ff985fc-b9a4-46b6-89c7-a367e117ab07", 00:09:53.668 "aliases": [ 00:09:53.668 "lvs/lvol" 00:09:53.668 ], 00:09:53.668 "product_name": "Logical Volume", 00:09:53.668 "block_size": 4096, 00:09:53.668 "num_blocks": 38912, 00:09:53.668 "uuid": "0ff985fc-b9a4-46b6-89c7-a367e117ab07", 00:09:53.668 "assigned_rate_limits": { 00:09:53.668 "rw_ios_per_sec": 0, 00:09:53.668 "rw_mbytes_per_sec": 0, 00:09:53.668 "r_mbytes_per_sec": 0, 00:09:53.668 "w_mbytes_per_sec": 0 00:09:53.668 }, 00:09:53.668 "claimed": false, 00:09:53.668 "zoned": false, 00:09:53.668 "supported_io_types": { 00:09:53.668 "read": true, 00:09:53.668 "write": true, 00:09:53.668 "unmap": true, 00:09:53.668 "flush": false, 00:09:53.668 "reset": true, 00:09:53.668 "nvme_admin": false, 00:09:53.668 "nvme_io": false, 00:09:53.668 "nvme_io_md": false, 00:09:53.668 "write_zeroes": true, 00:09:53.668 "zcopy": false, 00:09:53.668 "get_zone_info": false, 00:09:53.668 "zone_management": false, 00:09:53.668 "zone_append": false, 00:09:53.668 "compare": false, 00:09:53.668 "compare_and_write": false, 00:09:53.668 "abort": false, 00:09:53.668 "seek_hole": true, 00:09:53.668 "seek_data": true, 00:09:53.668 "copy": false, 00:09:53.668 "nvme_iov_md": false 00:09:53.668 }, 00:09:53.668 "driver_specific": { 00:09:53.668 "lvol": { 00:09:53.668 "lvol_store_uuid": "dc5b35c8-960e-4d85-bb77-9b2ccb0684e8", 00:09:53.668 "base_bdev": "aio_bdev", 00:09:53.668 "thin_provision": false, 00:09:53.668 "num_allocated_clusters": 38, 00:09:53.668 "snapshot": false, 00:09:53.668 "clone": false, 00:09:53.668 "esnap_clone": false 00:09:53.668 } 00:09:53.668 } 00:09:53.668 } 00:09:53.668 ] 00:09:53.668 12:31:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:53.668 12:31:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc5b35c8-960e-4d85-bb77-9b2ccb0684e8 00:09:53.668 12:31:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:53.927 12:31:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:53.927 12:31:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc5b35c8-960e-4d85-bb77-9b2ccb0684e8 00:09:53.927 12:31:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:54.186 12:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:54.186 12:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0ff985fc-b9a4-46b6-89c7-a367e117ab07 00:09:54.186 12:31:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u dc5b35c8-960e-4d85-bb77-9b2ccb0684e8 00:09:54.445 12:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:54.704 12:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:54.704 00:09:54.704 real 0m16.720s 00:09:54.704 user 0m43.350s 00:09:54.704 sys 0m3.733s 00:09:54.704 12:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:54.704 12:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:54.704 ************************************ 00:09:54.704 END TEST lvs_grow_dirty 00:09:54.704 ************************************ 00:09:54.704 12:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:54.704 12:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:09:54.704 12:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:09:54.704 12:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:09:54.704 12:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:54.704 12:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:09:54.704 12:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:09:54.704 12:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:09:54.704 12:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:54.704 nvmf_trace.0 00:09:54.704 12:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:09:54.704 12:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:54.704 12:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:54.704 12:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:54.704 12:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:54.704 12:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:54.704 12:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:54.704 12:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:54.704 rmmod nvme_tcp 00:09:54.704 rmmod nvme_fabrics 00:09:54.964 rmmod nvme_keyring 00:09:54.964 12:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:54.964 12:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:54.964 12:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:54.964 
12:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@513 -- # '[' -n 205540 ']' 00:09:54.964 12:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # killprocess 205540 00:09:54.964 12:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 205540 ']' 00:09:54.964 12:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 205540 00:09:54.964 12:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:09:54.964 12:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:54.964 12:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 205540 00:09:54.964 12:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:54.964 12:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:54.964 12:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 205540' 00:09:54.964 killing process with pid 205540 00:09:54.964 12:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 205540 00:09:54.964 12:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 205540 00:09:54.964 12:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:54.964 12:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:54.964 12:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:54.964 12:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:55.223 12:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-save 00:09:55.224 12:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:55.224 12:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-restore 00:09:55.224 12:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:55.224 12:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:55.224 12:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.224 12:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:55.224 12:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.132 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:57.132 00:09:57.132 real 0m41.504s 00:09:57.132 user 1m4.072s 00:09:57.132 sys 0m10.090s 00:09:57.132 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:57.132 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:57.132 ************************************ 00:09:57.132 END TEST nvmf_lvs_grow 00:09:57.132 ************************************ 00:09:57.132 12:31:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:57.132 12:31:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:57.132 12:31:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:57.132 12:31:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:57.132 ************************************ 00:09:57.132 START TEST nvmf_bdev_io_wait 00:09:57.132 ************************************ 00:09:57.132 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:57.394 * Looking for test storage... 00:09:57.394 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:57.394 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:57.394 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:09:57.394 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:57.394 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:57.394 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:57.394 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:57.394 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:57.394 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:57.394 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:57.394 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:57.394 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:57.394 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:57.394 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:57.394 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:57.394 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:57.394 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:57.394 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:57.394 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:57.394 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:57.394 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:57.394 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:57.394 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:57.394 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:57.394 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:57.394 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:57.394 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:57.394 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:57.394 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:57.394 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:57.394 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:57.394 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:57.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.395 --rc genhtml_branch_coverage=1 00:09:57.395 --rc genhtml_function_coverage=1 00:09:57.395 --rc genhtml_legend=1 00:09:57.395 --rc geninfo_all_blocks=1 00:09:57.395 --rc geninfo_unexecuted_blocks=1 00:09:57.395 00:09:57.395 ' 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:57.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.395 --rc genhtml_branch_coverage=1 00:09:57.395 --rc genhtml_function_coverage=1 00:09:57.395 --rc genhtml_legend=1 00:09:57.395 --rc geninfo_all_blocks=1 00:09:57.395 --rc geninfo_unexecuted_blocks=1 00:09:57.395 00:09:57.395 ' 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:57.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.395 --rc genhtml_branch_coverage=1 00:09:57.395 --rc genhtml_function_coverage=1 00:09:57.395 --rc genhtml_legend=1 00:09:57.395 --rc geninfo_all_blocks=1 00:09:57.395 --rc geninfo_unexecuted_blocks=1 00:09:57.395 00:09:57.395 ' 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:57.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.395 --rc genhtml_branch_coverage=1 00:09:57.395 --rc genhtml_function_coverage=1 00:09:57.395 --rc genhtml_legend=1 00:09:57.395 --rc geninfo_all_blocks=1 00:09:57.395 --rc geninfo_unexecuted_blocks=1 00:09:57.395 00:09:57.395 ' 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:57.395 12:31:23 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:57.395 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:57.395 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:03.972 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:03.972 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:10:03.972 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:03.972 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:03.972 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:03.972 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:03.972 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:03.972 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:10:03.972 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:03.972 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:10:03.972 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:10:03.972 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:10:03.972 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:10:03.972 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:10:03.972 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:10:03.972 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:03.972 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:03.972 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:03.972 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:03.972 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:03.972 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:03.972 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:03.972 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:03.972 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:03.972 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:03.972 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:03.972 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:10:03.972 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:10:03.972 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:10:03.972 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:10:03.972 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:10:03.972 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:10:03.972 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:03.972 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:03.972 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:03.972 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:03.972 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:03.972 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:03.972 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:03.972 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:03.972 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:03.972 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:03.972 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:03.972 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:03.972 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:03.972 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:03.972 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:03.972 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:03.972 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:03.973 Found net devices under 0000:af:00.0: cvl_0_0 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:03.973 Found net devices under 0000:af:00.1: cvl_0_1 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # is_hw=yes 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:03.973 12:31:29 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:03.973 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:03.973 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.513 ms 00:10:03.973 00:10:03.973 --- 10.0.0.2 ping statistics --- 00:10:03.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.973 rtt min/avg/max/mdev = 0.513/0.513/0.513/0.000 ms 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:03.973 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:03.973 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:10:03.973 00:10:03.973 --- 10.0.0.1 ping statistics --- 00:10:03.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.973 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # return 0 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # nvmfpid=209583 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # waitforlisten 209583 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 209583 ']' 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:03.973 [2024-12-16 12:31:29.404047] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:03.973 [2024-12-16 12:31:29.404098] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:03.973 [2024-12-16 12:31:29.479297] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:03.973 [2024-12-16 12:31:29.521483] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:03.973 [2024-12-16 12:31:29.521520] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:03.973 [2024-12-16 12:31:29.521527] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:03.973 [2024-12-16 12:31:29.521533] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:03.973 [2024-12-16 12:31:29.521538] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:03.973 [2024-12-16 12:31:29.521593] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:03.973 [2024-12-16 12:31:29.521612] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:03.973 [2024-12-16 12:31:29.521704] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.973 [2024-12-16 12:31:29.521705] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:03.973 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:10:03.974 [2024-12-16 12:31:29.677465] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:03.974 Malloc0 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:03.974 [2024-12-16 12:31:29.751538] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=209667 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=209670 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:03.974 { 00:10:03.974 "params": { 
00:10:03.974 "name": "Nvme$subsystem", 00:10:03.974 "trtype": "$TEST_TRANSPORT", 00:10:03.974 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:03.974 "adrfam": "ipv4", 00:10:03.974 "trsvcid": "$NVMF_PORT", 00:10:03.974 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:03.974 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:03.974 "hdgst": ${hdgst:-false}, 00:10:03.974 "ddgst": ${ddgst:-false} 00:10:03.974 }, 00:10:03.974 "method": "bdev_nvme_attach_controller" 00:10:03.974 } 00:10:03.974 EOF 00:10:03.974 )") 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=209672 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:03.974 { 00:10:03.974 "params": { 00:10:03.974 "name": "Nvme$subsystem", 00:10:03.974 "trtype": "$TEST_TRANSPORT", 00:10:03.974 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:03.974 "adrfam": "ipv4", 00:10:03.974 "trsvcid": "$NVMF_PORT", 00:10:03.974 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:03.974 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:03.974 "hdgst": ${hdgst:-false}, 00:10:03.974 "ddgst": ${ddgst:-false} 00:10:03.974 }, 00:10:03.974 "method": "bdev_nvme_attach_controller" 00:10:03.974 } 00:10:03.974 EOF 00:10:03.974 )") 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=209676 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:03.974 { 00:10:03.974 "params": { 00:10:03.974 "name": "Nvme$subsystem", 00:10:03.974 "trtype": "$TEST_TRANSPORT", 00:10:03.974 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:03.974 "adrfam": "ipv4", 00:10:03.974 "trsvcid": "$NVMF_PORT", 00:10:03.974 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:03.974 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:03.974 "hdgst": ${hdgst:-false}, 
00:10:03.974 "ddgst": ${ddgst:-false} 00:10:03.974 }, 00:10:03.974 "method": "bdev_nvme_attach_controller" 00:10:03.974 } 00:10:03.974 EOF 00:10:03.974 )") 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:03.974 { 00:10:03.974 "params": { 00:10:03.974 "name": "Nvme$subsystem", 00:10:03.974 "trtype": "$TEST_TRANSPORT", 00:10:03.974 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:03.974 "adrfam": "ipv4", 00:10:03.974 "trsvcid": "$NVMF_PORT", 00:10:03.974 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:03.974 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:03.974 "hdgst": ${hdgst:-false}, 00:10:03.974 "ddgst": ${ddgst:-false} 00:10:03.974 }, 00:10:03.974 "method": "bdev_nvme_attach_controller" 00:10:03.974 } 00:10:03.974 EOF 00:10:03.974 )") 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 209667 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:03.974 "params": { 00:10:03.974 "name": "Nvme1", 00:10:03.974 "trtype": "tcp", 00:10:03.974 "traddr": "10.0.0.2", 00:10:03.974 "adrfam": "ipv4", 00:10:03.974 "trsvcid": "4420", 00:10:03.974 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:03.974 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:03.974 "hdgst": false, 00:10:03.974 "ddgst": false 00:10:03.974 }, 00:10:03.974 "method": "bdev_nvme_attach_controller" 00:10:03.974 }' 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 
00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:03.974 "params": { 00:10:03.974 "name": "Nvme1", 00:10:03.974 "trtype": "tcp", 00:10:03.974 "traddr": "10.0.0.2", 00:10:03.974 "adrfam": "ipv4", 00:10:03.974 "trsvcid": "4420", 00:10:03.974 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:03.974 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:03.974 "hdgst": false, 00:10:03.974 "ddgst": false 00:10:03.974 }, 00:10:03.974 "method": "bdev_nvme_attach_controller" 00:10:03.974 }' 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:03.974 "params": { 00:10:03.974 "name": "Nvme1", 00:10:03.974 "trtype": "tcp", 00:10:03.974 "traddr": "10.0.0.2", 00:10:03.974 "adrfam": "ipv4", 00:10:03.974 "trsvcid": "4420", 00:10:03.974 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:03.974 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:03.974 "hdgst": false, 00:10:03.974 "ddgst": false 00:10:03.974 }, 00:10:03.974 "method": "bdev_nvme_attach_controller" 00:10:03.974 }' 00:10:03.974 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:10:03.975 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:03.975 "params": { 00:10:03.975 "name": "Nvme1", 00:10:03.975 "trtype": "tcp", 00:10:03.975 "traddr": "10.0.0.2", 00:10:03.975 "adrfam": "ipv4", 00:10:03.975 "trsvcid": "4420", 00:10:03.975 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:03.975 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:03.975 "hdgst": false, 00:10:03.975 "ddgst": false 00:10:03.975 }, 00:10:03.975 "method": "bdev_nvme_attach_controller" 00:10:03.975 }' 00:10:03.975 [2024-12-16 12:31:29.802633] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:10:03.975 [2024-12-16 12:31:29.802685] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:03.975 [2024-12-16 12:31:29.806920] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:10:03.975 [2024-12-16 12:31:29.806961] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:03.975 [2024-12-16 12:31:29.807406] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:10:03.975 [2024-12-16 12:31:29.807447] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:03.975 [2024-12-16 12:31:29.808814] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:03.975 [2024-12-16 12:31:29.808855] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:03.975 [2024-12-16 12:31:29.984768] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.975 [2024-12-16 12:31:30.022278] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:10:04.234 [2024-12-16 12:31:30.078845] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.234 [2024-12-16 12:31:30.109214] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:10:04.234 [2024-12-16 12:31:30.171089] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.234 [2024-12-16 12:31:30.205268] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:10:04.234 [2024-12-16 12:31:30.239123] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.234 [2024-12-16 12:31:30.265709] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:10:04.494 Running I/O for 1 seconds... 00:10:04.494 Running I/O for 1 seconds... 00:10:04.754 Running I/O for 1 seconds... 00:10:05.013 Running I/O for 1 seconds... 00:10:05.583 251352.00 IOPS, 981.84 MiB/s [2024-12-16T11:31:31.650Z] 10219.00 IOPS, 39.92 MiB/s 00:10:05.583 Latency(us) 00:10:05.583 [2024-12-16T11:31:31.650Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:05.583 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:05.583 Nvme1n1 : 1.00 250972.97 980.36 0.00 0.00 507.02 255.51 1521.37 00:10:05.583 [2024-12-16T11:31:31.650Z] =================================================================================================================== 00:10:05.583 [2024-12-16T11:31:31.650Z] Total : 250972.97 980.36 0.00 0.00 507.02 255.51 1521.37 00:10:05.583 00:10:05.583 Latency(us) 00:10:05.583 [2024-12-16T11:31:31.650Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:05.583 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:05.583 Nvme1n1 : 1.02 10182.16 39.77 0.00 0.00 12459.08 4556.31 22843.98 00:10:05.583 [2024-12-16T11:31:31.650Z] =================================================================================================================== 00:10:05.583 [2024-12-16T11:31:31.650Z] Total : 10182.16 39.77 0.00 0.00 12459.08 4556.31 22843.98 00:10:05.843 12:31:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 209670 00:10:05.843 9135.00 IOPS, 35.68 MiB/s 00:10:05.843 Latency(us) 00:10:05.843 [2024-12-16T11:31:31.910Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:05.843 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:05.843 Nvme1n1 : 1.01 9233.07 36.07 0.00 0.00 13830.48 3526.46 29834.48 00:10:05.843 [2024-12-16T11:31:31.910Z] =================================================================================================================== 00:10:05.843 [2024-12-16T11:31:31.910Z] Total : 9233.07 36.07 0.00 0.00 13830.48 3526.46 29834.48 00:10:05.843 11005.00 IOPS, 42.99 MiB/s 00:10:05.843 Latency(us) 00:10:05.843 [2024-12-16T11:31:31.910Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:05.843 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:05.843 Nvme1n1 : 1.01 11070.90 43.25 0.00 0.00 11525.99 4649.94 
22344.66 00:10:05.843 [2024-12-16T11:31:31.910Z] =================================================================================================================== 00:10:05.843 [2024-12-16T11:31:31.910Z] Total : 11070.90 43.25 0.00 0.00 11525.99 4649.94 22344.66 00:10:06.103 12:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 209672 00:10:06.103 12:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 209676 00:10:06.103 12:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:06.103 12:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.103 12:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:06.103 12:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.103 12:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:06.103 12:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:06.103 12:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:06.103 12:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:10:06.103 12:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:06.103 12:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:10:06.103 12:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:06.103 12:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:06.103 rmmod nvme_tcp 00:10:06.103 rmmod nvme_fabrics 00:10:06.103 rmmod nvme_keyring 00:10:06.103 12:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:06.103 12:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:10:06.103 12:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:10:06.103 12:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@513 -- # '[' -n 209583 ']' 00:10:06.103 12:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # killprocess 209583 00:10:06.103 12:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 209583 ']' 00:10:06.103 12:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 209583 00:10:06.103 12:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:10:06.103 12:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:06.103 12:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 209583 00:10:06.363 12:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:06.363 12:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:06.363 12:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 209583' 00:10:06.363 killing process 
with pid 209583 00:10:06.363 12:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 209583 00:10:06.363 12:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 209583 00:10:06.363 12:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:06.363 12:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:06.363 12:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:06.363 12:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:10:06.363 12:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-save 00:10:06.363 12:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:06.363 12:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-restore 00:10:06.363 12:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:06.363 12:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:06.363 12:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.363 12:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:06.363 12:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:08.904 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:08.904 00:10:08.904 real 0m11.263s 00:10:08.904 user 0m18.418s 00:10:08.904 sys 0m6.320s 00:10:08.904 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:08.904 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:08.904 ************************************ 00:10:08.904 END TEST nvmf_bdev_io_wait 00:10:08.904 ************************************ 00:10:08.904 12:31:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:08.904 12:31:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:08.904 12:31:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:08.904 12:31:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:08.904 ************************************ 00:10:08.904 START TEST nvmf_queue_depth 00:10:08.904 ************************************ 00:10:08.904 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:08.904 * Looking for test storage... 
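# NOTE: nvmftestfini above unwinds the fixture in reverse order of setup: the nvmf_tgt
# process is killed, the nvme-tcp/nvme-fabrics modules are unloaded (taking nvme_keyring
# with them, per the rmmod output), the SPDK iptables rules are filtered back out, and
# the target network namespace is torn down. A condensed sketch of the same steps,
# assuming this rig's names (cvl_0_1, cvl_0_0_ns_spdk) and that _remove_spdk_ns deletes
# the namespace added during setup:
kill "$nvmfpid" && wait "$nvmfpid"   # $nvmfpid: illustrative; the harness tracks the target pid
modprobe -r nvme-tcp
modprobe -r nvme-fabrics
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1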
00:10:08.904 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:08.904 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:08.904 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:10:08.904 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:08.904 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:08.904 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:08.904 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:08.904 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:08.904 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:10:08.904 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:10:08.904 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:10:08.904 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:10:08.904 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:10:08.904 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:10:08.904 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:10:08.904 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:08.904 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:10:08.904 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:10:08.904 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:08.904 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:08.904 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:10:08.904 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:10:08.904 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:08.904 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:10:08.904 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:08.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.905 --rc genhtml_branch_coverage=1 00:10:08.905 --rc genhtml_function_coverage=1 00:10:08.905 --rc genhtml_legend=1 00:10:08.905 --rc geninfo_all_blocks=1 00:10:08.905 --rc geninfo_unexecuted_blocks=1 00:10:08.905 00:10:08.905 ' 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:08.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.905 --rc genhtml_branch_coverage=1 00:10:08.905 --rc genhtml_function_coverage=1 00:10:08.905 --rc genhtml_legend=1 00:10:08.905 --rc geninfo_all_blocks=1 00:10:08.905 --rc geninfo_unexecuted_blocks=1 00:10:08.905 00:10:08.905 ' 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:08.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.905 --rc genhtml_branch_coverage=1 00:10:08.905 --rc genhtml_function_coverage=1 00:10:08.905 --rc genhtml_legend=1 00:10:08.905 --rc geninfo_all_blocks=1 00:10:08.905 --rc geninfo_unexecuted_blocks=1 00:10:08.905 00:10:08.905 ' 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:08.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.905 --rc genhtml_branch_coverage=1 00:10:08.905 --rc genhtml_function_coverage=1 00:10:08.905 --rc genhtml_legend=1 00:10:08.905 --rc geninfo_all_blocks=1 00:10:08.905 --rc geninfo_unexecuted_blocks=1 00:10:08.905 00:10:08.905 ' 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:08.905 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:10:08.905 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:15.479 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:15.479 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:15.479 Found net devices under 0000:af:00.0: cvl_0_0 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:15.479 Found net devices under 0000:af:00.1: cvl_0_1 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # is_hw=yes 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:15.479 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:15.480 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:15.480 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:15.480 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:15.480 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:15.480 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:15.480 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:15.480 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:15.480 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:15.480 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:15.480 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:15.480 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.431 ms 00:10:15.480 00:10:15.480 --- 10.0.0.2 ping statistics --- 00:10:15.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:15.480 rtt min/avg/max/mdev = 0.431/0.431/0.431/0.000 ms 00:10:15.480 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:15.480 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:15.480 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:10:15.480 00:10:15.480 --- 10.0.0.1 ping statistics --- 00:10:15.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:15.480 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:10:15.480 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:15.480 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # return 0 00:10:15.480 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:15.480 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:15.480 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:15.480 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:15.480 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:15.480 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:15.480 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:15.480 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:15.480 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:15.480 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:15.480 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:15.480 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # nvmfpid=213591 00:10:15.480 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:15.480 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # waitforlisten 213591 00:10:15.480 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 213591 ']' 00:10:15.480 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.480 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:15.480 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.480 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:15.480 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:15.480 [2024-12-16 12:31:40.766946] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:15.480 [2024-12-16 12:31:40.766996] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:15.480 [2024-12-16 12:31:40.842260] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.480 [2024-12-16 12:31:40.879973] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:15.480 [2024-12-16 12:31:40.880014] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:15.480 [2024-12-16 12:31:40.880027] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:15.480 [2024-12-16 12:31:40.880033] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:15.480 [2024-12-16 12:31:40.880039] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:15.480 [2024-12-16 12:31:40.880056] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:15.480 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:15.480 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:10:15.480 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:15.480 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:15.480 12:31:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:15.480 12:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:15.480 12:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:15.480 12:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.480 12:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:15.480 [2024-12-16 12:31:41.021339] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:15.480 12:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.480 12:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:15.480 12:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.480 12:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:15.480 Malloc0 00:10:15.480 12:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.480 12:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:15.480 12:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.480 12:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:15.480 12:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.480 12:31:41 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:15.480 12:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.480 12:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:15.480 12:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.480 12:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:15.480 12:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.480 12:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:15.480 [2024-12-16 12:31:41.084000] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:15.480 12:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.480 12:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=213783 00:10:15.480 12:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:15.480 12:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:15.480 12:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 213783 /var/tmp/bdevperf.sock 00:10:15.480 12:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 213783 ']' 00:10:15.480 12:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:15.480 12:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:15.480 12:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:15.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:15.480 12:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:15.480 12:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:15.480 [2024-12-16 12:31:41.133628] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:15.480 [2024-12-16 12:31:41.133668] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid213783 ] 00:10:15.480 [2024-12-16 12:31:41.201890] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.480 [2024-12-16 12:31:41.241482] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.480 12:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:15.480 12:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:10:15.480 12:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:15.480 12:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.480 12:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:15.480 NVMe0n1 00:10:15.480 12:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.480 12:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:15.480 Running I/O for 10 seconds... 00:10:17.799 12288.00 IOPS, 48.00 MiB/s [2024-12-16T11:31:44.804Z] 12288.00 IOPS, 48.00 MiB/s [2024-12-16T11:31:45.741Z] 12292.00 IOPS, 48.02 MiB/s [2024-12-16T11:31:46.679Z] 12356.50 IOPS, 48.27 MiB/s [2024-12-16T11:31:47.617Z] 12471.60 IOPS, 48.72 MiB/s [2024-12-16T11:31:48.556Z] 12458.33 IOPS, 48.67 MiB/s [2024-12-16T11:31:49.935Z] 12479.57 IOPS, 48.75 MiB/s [2024-12-16T11:31:50.873Z] 12529.88 IOPS, 48.94 MiB/s [2024-12-16T11:31:51.811Z] 12502.67 IOPS, 48.84 MiB/s [2024-12-16T11:31:51.811Z] 12504.00 IOPS, 48.84 MiB/s 00:10:25.744 Latency(us) 00:10:25.745 [2024-12-16T11:31:51.812Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:25.745 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:25.745 Verification LBA range: start 0x0 length 0x4000 00:10:25.745 NVMe0n1 : 10.05 12534.94 48.96 0.00 0.00 81425.45 7645.87 55175.07 00:10:25.745 [2024-12-16T11:31:51.812Z] =================================================================================================================== 00:10:25.745 [2024-12-16T11:31:51.812Z] Total : 12534.94 48.96 0.00 0.00 81425.45 7645.87 55175.07 00:10:25.745 { 00:10:25.745 "results": [ 00:10:25.745 { 00:10:25.745 "job": "NVMe0n1", 00:10:25.745 "core_mask": "0x1", 00:10:25.745 "workload": "verify", 00:10:25.745 "status": "finished", 00:10:25.745 "verify_range": { 00:10:25.745 "start": 0, 00:10:25.745 "length": 16384 00:10:25.745 }, 00:10:25.745 "queue_depth": 1024, 00:10:25.745 "io_size": 4096, 00:10:25.745 "runtime": 10.048392, 00:10:25.745 "iops": 12534.940913929313, 00:10:25.745 "mibps": 48.96461294503638, 00:10:25.745 "io_failed": 0, 00:10:25.745 "io_timeout": 0, 00:10:25.745 "avg_latency_us": 81425.45254678505, 00:10:25.745 "min_latency_us": 7645.866666666667, 00:10:25.745 "max_latency_us": 55175.07047619048 00:10:25.745 } 00:10:25.745 ], 00:10:25.745 "core_count": 1 00:10:25.745 } 00:10:25.745 12:31:51 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 213783 00:10:25.745 12:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 213783 ']' 00:10:25.745 12:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 213783 00:10:25.745 12:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:10:25.745 12:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:25.745 12:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 213783 00:10:25.745 12:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:25.745 12:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:25.745 12:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 213783' 00:10:25.745 killing process with pid 213783 00:10:25.745 12:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 213783 00:10:25.745 Received shutdown signal, test time was about 10.000000 seconds 00:10:25.745 00:10:25.745 Latency(us) 00:10:25.745 [2024-12-16T11:31:51.812Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:25.745 [2024-12-16T11:31:51.812Z] =================================================================================================================== 00:10:25.745 [2024-12-16T11:31:51.812Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:25.745 12:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 213783 00:10:26.005 12:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:26.005 12:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:26.005 12:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:26.005 12:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:26.005 12:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:26.005 12:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:10:26.005 12:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:26.005 12:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:26.005 rmmod nvme_tcp 00:10:26.005 rmmod nvme_fabrics 00:10:26.005 rmmod nvme_keyring 00:10:26.005 12:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:26.005 12:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:26.005 12:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:26.005 12:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@513 -- # '[' -n 213591 ']' 00:10:26.005 12:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # killprocess 213591 00:10:26.005 12:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 213591 ']' 00:10:26.005 12:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@954 -- # kill -0 213591 00:10:26.005 12:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:10:26.005 12:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:26.005 12:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 213591 00:10:26.005 12:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:26.005 12:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:26.005 12:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 213591' 00:10:26.005 killing process with pid 213591 00:10:26.005 12:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 213591 00:10:26.005 12:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 213591 00:10:26.265 12:31:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:26.265 12:31:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:26.265 12:31:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:26.265 12:31:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:10:26.265 12:31:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-save 00:10:26.265 12:31:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:26.265 12:31:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-restore 00:10:26.265 12:31:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:26.265 12:31:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:26.265 12:31:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:26.265 12:31:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:26.265 12:31:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:28.174 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:28.174 00:10:28.174 real 0m19.701s 00:10:28.174 user 0m23.024s 00:10:28.174 sys 0m5.946s 00:10:28.174 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:28.174 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:28.174 ************************************ 00:10:28.174 END TEST nvmf_queue_depth 00:10:28.174 ************************************ 00:10:28.434 12:31:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:28.434 12:31:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:28.434 12:31:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:28.434 12:31:54 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:10:28.434 ************************************ 00:10:28.434 START TEST nvmf_target_multipath 00:10:28.434 ************************************ 00:10:28.434 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:28.434 * Looking for test storage... 00:10:28.434 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:28.434 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:28.434 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:10:28.434 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:28.434 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:28.434 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:28.434 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:28.434 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:28.434 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:28.434 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:28.434 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:28.434 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:28.434 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:28.434 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:28.434 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:28.434 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:28.434 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:28.434 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:28.434 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:28.434 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:28.434 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:28.434 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:28.434 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:28.434 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:28.434 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:28.434 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:28.434 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:28.434 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:28.434 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:28.434 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:28.434 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:28.434 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:28.434 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:28.434 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:28.434 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:28.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.434 --rc genhtml_branch_coverage=1 00:10:28.434 --rc genhtml_function_coverage=1 00:10:28.434 --rc genhtml_legend=1 00:10:28.434 --rc geninfo_all_blocks=1 00:10:28.434 --rc geninfo_unexecuted_blocks=1 00:10:28.434 00:10:28.434 ' 00:10:28.434 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:28.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.434 --rc genhtml_branch_coverage=1 00:10:28.434 --rc genhtml_function_coverage=1 00:10:28.434 --rc genhtml_legend=1 00:10:28.434 --rc geninfo_all_blocks=1 00:10:28.434 --rc geninfo_unexecuted_blocks=1 00:10:28.434 00:10:28.434 ' 00:10:28.434 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:28.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.434 --rc genhtml_branch_coverage=1 00:10:28.434 --rc genhtml_function_coverage=1 00:10:28.434 --rc genhtml_legend=1 00:10:28.434 --rc geninfo_all_blocks=1 00:10:28.434 --rc geninfo_unexecuted_blocks=1 00:10:28.434 00:10:28.434 ' 00:10:28.434 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:28.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.434 --rc genhtml_branch_coverage=1 00:10:28.434 --rc genhtml_function_coverage=1 00:10:28.434 --rc genhtml_legend=1 00:10:28.434 --rc geninfo_all_blocks=1 00:10:28.434 --rc geninfo_unexecuted_blocks=1 00:10:28.434 00:10:28.434 ' 00:10:28.434 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
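The lcov probe traced above is common.sh's version comparison ("lt 1.15 2"): each version string is split on '.', '-' and ':' into an array, and the fields are compared pairwise, with missing fields treated as zero. A rough standalone rendering of that logic, assuming purely numeric fields (the only case exercised here):

    # cmp_lt A B -> exit status 0 iff version A sorts strictly before version B.
    cmp_lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"      # "1.15" -> (1 15), as in the trace
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1                            # equal versions are not less-than
    }

    cmp_lt 1.15 2 && echo 'lcov older than 2: use legacy --rc lcov_branch_coverage options'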
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:28.434 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:28.434 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:28.434 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:28.435 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:28.435 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:28.435 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:28.435 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:28.435 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:28.435 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:28.435 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:28.435 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:28.435 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:10:28.435 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:10:28.435 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:28.435 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:28.435 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:28.435 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:28.435 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:28.435 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:28.435 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:28.435 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:28.435 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:28.435 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.435 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.435 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.435 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:28.435 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.435 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:28.435 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:28.435 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:28.435 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:28.435 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:28.435 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:28.435 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:28.435 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:28.435 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:28.435 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:28.435 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:28.435 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:28.435 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:28.435 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:28.435 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:28.435 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:28.435 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:28.435 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:28.435 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:28.435 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:28.435 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:28.435 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:28.435 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:28.695 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:28.695 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:10:28.695 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:10:28.695 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:10:28.695 12:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:35.268 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:35.268 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:10:35.268 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:35.268 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:35.268 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:35.268 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:35.269 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@370 -- # 
[[ ice == unbound ]] 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:35.269 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:35.269 Found net devices under 0000:af:00.0: cvl_0_0 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:35.269 12:32:00 
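In the device scan above, common.sh first buckets the host's NICs by PCI vendor/device ID (both ports here match Intel 0x8086:0x159b, an E810 function bound to the ice driver) and then resolves each PCI address to its kernel net devices through sysfs. A condensed sketch of that resolution step, assuming the same two PCI functions; the operstate read here stands in for the script's up/down check:

    # Map each candidate PCI function to the net devices the kernel created for it,
    # using the same /sys/bus/pci/devices/$pci/net/* glob as nvmf/common.sh.
    for pci in 0000:af:00.0 0000:af:00.1; do
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            name=${dev##*/}                                   # e.g. cvl_0_0, cvl_0_1
            echo "Found net device under $pci: $name ($(cat "$dev/operstate"))"
        done
    done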
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:35.269 Found net devices under 0000:af:00.1: cvl_0_1 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # is_hw=yes 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:35.269 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:35.269 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.340 ms 00:10:35.269 00:10:35.269 --- 10.0.0.2 ping statistics --- 00:10:35.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.269 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:35.269 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:35.269 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:10:35.269 00:10:35.269 --- 10.0.0.1 ping statistics --- 00:10:35.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.269 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # return 0 00:10:35.269 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:35.270 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:35.270 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:35.270 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:35.270 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:35.270 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:35.270 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:35.270 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:10:35.270 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:10:35.270 only one NIC for nvmf test 00:10:35.270 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:10:35.270 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:35.270 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:35.270 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:35.270 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:35.270 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
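Because both ports sit in the same host, nvmf_tcp_init isolates the target side in a private network namespace so initiator and target traffic crosses a real link: cvl_0_0 (10.0.0.2) moves into cvl_0_0_ns_spdk, cvl_0_1 (10.0.0.1) stays in the root namespace, a tagged iptables rule opens port 4420, and a ping in each direction verifies the path. Condensed from the trace above:

    ip netns add cvl_0_0_ns_spdk                        # private namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address (root ns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                  # root ns -> target ns (0.340 ms above)
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns (0.212 ms above)

With the path confirmed, the multipath test bails out immediately ("only one NIC for nvmf test", exit 0): this rig exposes only one usable NIC pair, so the multipath scenarios cannot be exercised and the teardown trap runs instead.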
00:10:35.270 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:35.270 rmmod nvme_tcp 00:10:35.270 rmmod nvme_fabrics 00:10:35.270 rmmod nvme_keyring 00:10:35.270 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:35.270 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:35.270 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:35.270 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:10:35.270 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:35.270 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:35.270 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:35.270 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:35.270 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:10:35.270 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:35.270 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:10:35.270 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:35.270 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:35.270 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.270 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:35.270 12:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:36.650 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:36.650 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:10:36.650 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:36.650 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:36.650 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:36.650 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:36.650 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:36.650 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:36.650 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:36.650 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:36.650 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:36.650 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:36.650 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@513 -- # 
'[' -n '' ']' 00:10:36.650 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:36.650 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:36.650 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:36.650 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:36.650 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:10:36.650 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:36.650 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:10:36.650 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:36.650 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:36.650 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:36.650 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:36.650 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:36.650 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:36.650 00:10:36.650 real 0m8.300s 00:10:36.650 user 0m1.815s 00:10:36.650 sys 0m4.489s 00:10:36.650 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:36.650 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:36.650 ************************************ 00:10:36.650 END TEST nvmf_target_multipath 00:10:36.650 ************************************ 00:10:36.650 12:32:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:36.650 12:32:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:36.650 12:32:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:36.650 12:32:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:36.650 ************************************ 00:10:36.650 START TEST nvmf_zcopy 00:10:36.650 ************************************ 00:10:36.650 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:36.910 * Looking for test storage... 
00:10:36.910 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:36.910 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:36.910 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:10:36.910 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:36.910 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:36.910 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:36.910 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:36.910 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:36.910 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:36.910 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:36.910 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:10:36.910 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:36.910 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:36.910 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:36.910 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:36.910 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:36.910 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:36.910 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:36.910 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:36.910 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:36.910 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:36.910 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:36.910 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:36.910 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:36.910 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:36.910 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:36.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.911 --rc genhtml_branch_coverage=1 00:10:36.911 --rc genhtml_function_coverage=1 00:10:36.911 --rc genhtml_legend=1 00:10:36.911 --rc geninfo_all_blocks=1 00:10:36.911 --rc geninfo_unexecuted_blocks=1 00:10:36.911 00:10:36.911 ' 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:36.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.911 --rc genhtml_branch_coverage=1 00:10:36.911 --rc genhtml_function_coverage=1 00:10:36.911 --rc genhtml_legend=1 00:10:36.911 --rc geninfo_all_blocks=1 00:10:36.911 --rc geninfo_unexecuted_blocks=1 00:10:36.911 00:10:36.911 ' 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:36.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.911 --rc genhtml_branch_coverage=1 00:10:36.911 --rc genhtml_function_coverage=1 00:10:36.911 --rc genhtml_legend=1 00:10:36.911 --rc geninfo_all_blocks=1 00:10:36.911 --rc geninfo_unexecuted_blocks=1 00:10:36.911 00:10:36.911 ' 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:36.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.911 --rc genhtml_branch_coverage=1 00:10:36.911 --rc genhtml_function_coverage=1 00:10:36.911 --rc genhtml_legend=1 00:10:36.911 --rc geninfo_all_blocks=1 00:10:36.911 --rc geninfo_unexecuted_blocks=1 00:10:36.911 00:10:36.911 ' 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:36.911 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:10:36.911 12:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:43.487 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:43.487 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:10:43.487 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:43.487 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:43.487 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:43.487 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:43.487 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:43.487 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:10:43.487 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:43.487 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:10:43.487 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:10:43.487 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:10:43.487 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:10:43.487 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:10:43.487 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:10:43.487 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:43.487 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:43.487 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:43.487 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:43.487 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:43.487 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:43.487 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:43.487 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:43.487 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:43.487 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:43.487 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:43.487 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:10:43.487 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:10:43.487 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:10:43.487 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:10:43.487 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:10:43.487 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:10:43.487 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:43.487 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:43.487 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:43.487 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:43.487 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:43.487 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:43.487 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:43.487 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:43.487 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:43.487 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:43.487 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:43.487 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:43.487 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:43.487 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:43.487 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:43.487 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:43.487 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:10:43.487 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:10:43.487 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:10:43.487 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:43.488 
12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:43.488 Found net devices under 0000:af:00.0: cvl_0_0 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:43.488 Found net devices under 0000:af:00.1: cvl_0_1 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # is_hw=yes 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:43.488 12:32:08 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:43.488 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:43.488 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.395 ms 00:10:43.488 00:10:43.488 --- 10.0.0.2 ping statistics --- 00:10:43.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.488 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:43.488 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:43.488 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:10:43.488 00:10:43.488 --- 10.0.0.1 ping statistics --- 00:10:43.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.488 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # return 0 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # nvmfpid=222618 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # waitforlisten 222618 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 222618 ']' 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:43.488 12:32:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:43.488 [2024-12-16 12:32:08.864933] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
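The pings above complete the bring-up of the two-port TCP rig: one port of the E810 NIC discovered earlier (cvl_0_0) is moved into a network namespace and acts as the target at 10.0.0.2, while its peer port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. A minimal sketch of the same bring-up, assuming two back-to-back-cabled ports and using the interface and namespace names from this run:

  TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
  ip -4 addr flush "$TARGET_IF"; ip -4 addr flush "$INITIATOR_IF"        # start from clean addresses
  ip netns add "$NS"
  ip link set "$TARGET_IF" netns "$NS"                                   # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"                            # initiator stays in the root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1           # verify both directions

With the rig verified, nvmfappstart launches the target inside the namespace via the NVMF_TARGET_NS_CMD prefix visible above, which is why the command line below ends up as "ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x2".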
00:10:43.488 [2024-12-16 12:32:08.864983] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:43.488 [2024-12-16 12:32:08.936914] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.488 [2024-12-16 12:32:08.976791] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:43.488 [2024-12-16 12:32:08.976830] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:43.488 [2024-12-16 12:32:08.976837] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:43.488 [2024-12-16 12:32:08.976843] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:43.488 [2024-12-16 12:32:08.976848] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:43.488 [2024-12-16 12:32:08.976865] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:43.488 12:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:43.488 12:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:10:43.488 12:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:43.488 12:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:43.488 12:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:43.488 12:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:43.488 12:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:43.488 12:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:43.488 12:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.488 12:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:43.488 [2024-12-16 12:32:09.105713] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:43.488 12:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.488 12:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:43.488 12:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.488 12:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:43.488 12:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.488 12:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:43.488 12:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.488 12:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:43.488 [2024-12-16 12:32:09.125880] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:10:43.488 12:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.488 12:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:43.488 12:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.488 12:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:43.488 12:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.488 12:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:43.489 12:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.489 12:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:43.489 malloc0 00:10:43.489 12:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.489 12:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:43.489 12:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.489 12:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:43.489 12:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.489 12:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:43.489 12:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:43.489 12:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:10:43.489 12:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:10:43.489 12:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:43.489 12:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:43.489 { 00:10:43.489 "params": { 00:10:43.489 "name": "Nvme$subsystem", 00:10:43.489 "trtype": "$TEST_TRANSPORT", 00:10:43.489 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:43.489 "adrfam": "ipv4", 00:10:43.489 "trsvcid": "$NVMF_PORT", 00:10:43.489 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:43.489 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:43.489 "hdgst": ${hdgst:-false}, 00:10:43.489 "ddgst": ${ddgst:-false} 00:10:43.489 }, 00:10:43.489 "method": "bdev_nvme_attach_controller" 00:10:43.489 } 00:10:43.489 EOF 00:10:43.489 )") 00:10:43.489 12:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:10:43.489 12:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 
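At this point target/zcopy.sh has configured the target over the RPC socket: a TCP transport with zero-copy enabled, a subsystem capped at 10 namespaces, data and discovery listeners on 10.0.0.2:4420, and a malloc ramdisk attached as namespace 1. The rpc_cmd wrapper in the trace talks to /var/tmp/spdk.sock; the same sequence issued directly with scripts/rpc.py would look roughly like this (the -o and -c 0 transport flags are passed through exactly as they appear in the trace):

  RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy                    # TCP transport, zero-copy send enabled
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
       -a -s SPDK00000000000001 -m 10                                  # allow any host, max 10 namespaces
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
       -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_malloc_create 32 4096 -b malloc0                           # 32 MB ramdisk, 4096-byte blocks
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # attach as NSID 1

The gen_nvmf_target_json heredoc being expanded above emits one bdev_nvme_attach_controller stanza per subsystem; bdevperf reads it through the /dev/fd/62 process substitution, and its fully substituted form is printed a few lines below.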
00:10:43.489 12:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:10:43.489 12:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:43.489 "params": { 00:10:43.489 "name": "Nvme1", 00:10:43.489 "trtype": "tcp", 00:10:43.489 "traddr": "10.0.0.2", 00:10:43.489 "adrfam": "ipv4", 00:10:43.489 "trsvcid": "4420", 00:10:43.489 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:43.489 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:43.489 "hdgst": false, 00:10:43.489 "ddgst": false 00:10:43.489 }, 00:10:43.489 "method": "bdev_nvme_attach_controller" 00:10:43.489 }' 00:10:43.489 [2024-12-16 12:32:09.221398] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:10:43.489 [2024-12-16 12:32:09.221438] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid222652 ] 00:10:43.489 [2024-12-16 12:32:09.280779] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.489 [2024-12-16 12:32:09.319306] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.748 Running I/O for 10 seconds... 00:10:45.634 8560.00 IOPS, 66.88 MiB/s [2024-12-16T11:32:13.081Z] 8676.00 IOPS, 67.78 MiB/s [2024-12-16T11:32:14.019Z] 8701.33 IOPS, 67.98 MiB/s [2024-12-16T11:32:14.957Z] 8724.50 IOPS, 68.16 MiB/s [2024-12-16T11:32:15.895Z] 8739.00 IOPS, 68.27 MiB/s [2024-12-16T11:32:16.833Z] 8749.83 IOPS, 68.36 MiB/s [2024-12-16T11:32:17.795Z] 8756.71 IOPS, 68.41 MiB/s [2024-12-16T11:32:18.733Z] 8762.38 IOPS, 68.46 MiB/s [2024-12-16T11:32:20.112Z] 8770.22 IOPS, 68.52 MiB/s [2024-12-16T11:32:20.112Z] 8775.50 IOPS, 68.56 MiB/s 00:10:54.045 Latency(us) 00:10:54.045 [2024-12-16T11:32:20.112Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:54.045 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:54.045 Verification LBA range: start 0x0 length 0x1000 00:10:54.045 Nvme1n1 : 10.01 8776.45 68.57 0.00 0.00 14543.19 333.53 21845.33 00:10:54.045 [2024-12-16T11:32:20.112Z] =================================================================================================================== 00:10:54.045 [2024-12-16T11:32:20.112Z] Total : 8776.45 68.57 0.00 0.00 14543.19 333.53 21845.33 00:10:54.045 12:32:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=224468 00:10:54.045 12:32:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:54.045 12:32:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:54.045 12:32:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:54.045 12:32:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:54.045 12:32:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:10:54.045 12:32:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:10:54.045 12:32:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:54.045 12:32:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:54.045 { 00:10:54.045 "params": { 00:10:54.045 "name": 
"Nvme$subsystem", 00:10:54.045 "trtype": "$TEST_TRANSPORT", 00:10:54.045 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:54.045 "adrfam": "ipv4", 00:10:54.045 "trsvcid": "$NVMF_PORT", 00:10:54.045 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:54.045 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:54.045 "hdgst": ${hdgst:-false}, 00:10:54.045 "ddgst": ${ddgst:-false} 00:10:54.046 }, 00:10:54.046 "method": "bdev_nvme_attach_controller" 00:10:54.046 } 00:10:54.046 EOF 00:10:54.046 )") 00:10:54.046 12:32:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:10:54.046 [2024-12-16 12:32:19.891976] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.046 [2024-12-16 12:32:19.892008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.046 12:32:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 00:10:54.046 12:32:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:10:54.046 12:32:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:54.046 "params": { 00:10:54.046 "name": "Nvme1", 00:10:54.046 "trtype": "tcp", 00:10:54.046 "traddr": "10.0.0.2", 00:10:54.046 "adrfam": "ipv4", 00:10:54.046 "trsvcid": "4420", 00:10:54.046 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:54.046 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:54.046 "hdgst": false, 00:10:54.046 "ddgst": false 00:10:54.046 }, 00:10:54.046 "method": "bdev_nvme_attach_controller" 00:10:54.046 }' 00:10:54.046 [2024-12-16 12:32:19.903981] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.046 [2024-12-16 12:32:19.903999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.046 [2024-12-16 12:32:19.916004] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.046 [2024-12-16 12:32:19.916015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.046 [2024-12-16 12:32:19.928038] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.046 [2024-12-16 12:32:19.928048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.046 [2024-12-16 12:32:19.929005] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:54.046 [2024-12-16 12:32:19.929048] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid224468 ] 00:10:54.046 [2024-12-16 12:32:19.940073] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.046 [2024-12-16 12:32:19.940085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.046 [2024-12-16 12:32:19.952099] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.046 [2024-12-16 12:32:19.952109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.046 [2024-12-16 12:32:19.964139] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.046 [2024-12-16 12:32:19.964150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.046 [2024-12-16 12:32:19.976172] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.046 [2024-12-16 12:32:19.976182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.046 [2024-12-16 12:32:19.988195] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.046 [2024-12-16 12:32:19.988204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.046 [2024-12-16 12:32:19.996351] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.046 [2024-12-16 12:32:20.000229] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.046 [2024-12-16 12:32:20.000241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.046 [2024-12-16 12:32:20.012265] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.046 [2024-12-16 12:32:20.012279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.046 [2024-12-16 12:32:20.024299] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.046 [2024-12-16 12:32:20.024319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.046 [2024-12-16 12:32:20.036325] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.046 [2024-12-16 12:32:20.036337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.046 [2024-12-16 12:32:20.039949] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.046 [2024-12-16 12:32:20.048372] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.046 [2024-12-16 12:32:20.048390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.046 [2024-12-16 12:32:20.060397] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.046 [2024-12-16 12:32:20.060415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.046 [2024-12-16 12:32:20.072429] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.046 [2024-12-16 12:32:20.072444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.046 [2024-12-16 12:32:20.084454] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:10:54.046 [2024-12-16 12:32:20.084466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.046 [2024-12-16 12:32:20.096485] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.046 [2024-12-16 12:32:20.096498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.046 [2024-12-16 12:32:20.108516] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.046 [2024-12-16 12:32:20.108526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.305 [2024-12-16 12:32:20.120546] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.305 [2024-12-16 12:32:20.120556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.305 [2024-12-16 12:32:20.132592] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.305 [2024-12-16 12:32:20.132612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.305 [2024-12-16 12:32:20.144618] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.305 [2024-12-16 12:32:20.144632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.305 [2024-12-16 12:32:20.156651] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.305 [2024-12-16 12:32:20.156664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.305 [2024-12-16 12:32:20.168678] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.305 [2024-12-16 12:32:20.168688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.305 [2024-12-16 12:32:20.180711] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.305 [2024-12-16 12:32:20.180721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.305 [2024-12-16 12:32:20.192746] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.305 [2024-12-16 12:32:20.192760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.305 [2024-12-16 12:32:20.204784] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.305 [2024-12-16 12:32:20.204799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.305 [2024-12-16 12:32:20.216811] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.305 [2024-12-16 12:32:20.216824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.305 [2024-12-16 12:32:20.228851] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.305 [2024-12-16 12:32:20.228870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.305 Running I/O for 5 seconds... 
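"Running I/O for 5 seconds..." marks the start of the randrw run; everything below interleaves per-second throughput samples with the expected add-ns rejections, which land roughly every 10-15 ms judging by the timestamps. A quick sanity check on the samples that appear further down, assuming the 8 KiB I/O size set with -o 8192:

  # MiB/s = IOPS * io_size / 2^20
  # 16758 * 8192 / 1048576 = 130.92 MiB/s   (matches "16758.00 IOPS, 130.92 MiB/s")
  # 16921 * 8192 / 1048576 = 132.20 MiB/s   (matches "16921.00 IOPS, 132.20 MiB/s")

The mixed 50/50 workload posts roughly double the IOPS of the earlier verify run, plausibly because verify mode reads back and compares every block it writes, adding work per I/O.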
00:10:54.305 [2024-12-16 12:32:20.240878] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.305 [2024-12-16 12:32:20.240889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.305 [2024-12-16 12:32:20.256674] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.305 [2024-12-16 12:32:20.256695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.306 [2024-12-16 12:32:20.265992] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.306 [2024-12-16 12:32:20.266010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.306 [2024-12-16 12:32:20.275276] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.306 [2024-12-16 12:32:20.275295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.306 [2024-12-16 12:32:20.289899] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.306 [2024-12-16 12:32:20.289918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.306 [2024-12-16 12:32:20.303937] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.306 [2024-12-16 12:32:20.303962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.306 [2024-12-16 12:32:20.318085] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.306 [2024-12-16 12:32:20.318103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.306 [2024-12-16 12:32:20.332041] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.306 [2024-12-16 12:32:20.332060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.306 [2024-12-16 12:32:20.346159] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.306 [2024-12-16 12:32:20.346178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.306 [2024-12-16 12:32:20.359759] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.306 [2024-12-16 12:32:20.359778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.565 [2024-12-16 12:32:20.373825] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.565 [2024-12-16 12:32:20.373844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.565 [2024-12-16 12:32:20.387716] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.565 [2024-12-16 12:32:20.387735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.565 [2024-12-16 12:32:20.401906] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.565 [2024-12-16 12:32:20.401927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.565 [2024-12-16 12:32:20.413005] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.565 [2024-12-16 12:32:20.413025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.565 [2024-12-16 12:32:20.422464] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.565 
[2024-12-16 12:32:20.422484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.565 [2024-12-16 12:32:20.437053] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.565 [2024-12-16 12:32:20.437072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.565 [2024-12-16 12:32:20.450959] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.565 [2024-12-16 12:32:20.450979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.565 [2024-12-16 12:32:20.465443] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.565 [2024-12-16 12:32:20.465466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.565 [2024-12-16 12:32:20.478934] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.565 [2024-12-16 12:32:20.478953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.565 [2024-12-16 12:32:20.493227] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.565 [2024-12-16 12:32:20.493246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.565 [2024-12-16 12:32:20.507078] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.565 [2024-12-16 12:32:20.507100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.565 [2024-12-16 12:32:20.520975] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.565 [2024-12-16 12:32:20.520994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.565 [2024-12-16 12:32:20.534414] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.565 [2024-12-16 12:32:20.534434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.565 [2024-12-16 12:32:20.548424] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.565 [2024-12-16 12:32:20.548443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.565 [2024-12-16 12:32:20.562078] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.565 [2024-12-16 12:32:20.562098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.565 [2024-12-16 12:32:20.575723] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.565 [2024-12-16 12:32:20.575742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.565 [2024-12-16 12:32:20.589901] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.565 [2024-12-16 12:32:20.589922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.565 [2024-12-16 12:32:20.603862] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.565 [2024-12-16 12:32:20.603883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.565 [2024-12-16 12:32:20.617670] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.565 [2024-12-16 12:32:20.617690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.825 [2024-12-16 12:32:20.631406] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.825 [2024-12-16 12:32:20.631426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.825 [2024-12-16 12:32:20.645314] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.825 [2024-12-16 12:32:20.645333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.825 [2024-12-16 12:32:20.659281] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.825 [2024-12-16 12:32:20.659300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.825 [2024-12-16 12:32:20.673166] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.825 [2024-12-16 12:32:20.673185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.825 [2024-12-16 12:32:20.686898] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.825 [2024-12-16 12:32:20.686918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.825 [2024-12-16 12:32:20.701218] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.825 [2024-12-16 12:32:20.701237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.825 [2024-12-16 12:32:20.715282] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.825 [2024-12-16 12:32:20.715301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.825 [2024-12-16 12:32:20.728926] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.825 [2024-12-16 12:32:20.728945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.825 [2024-12-16 12:32:20.742809] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.825 [2024-12-16 12:32:20.742828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.825 [2024-12-16 12:32:20.756845] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.825 [2024-12-16 12:32:20.756864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.825 [2024-12-16 12:32:20.770331] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.825 [2024-12-16 12:32:20.770349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.825 [2024-12-16 12:32:20.784270] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.825 [2024-12-16 12:32:20.784292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.825 [2024-12-16 12:32:20.798526] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.825 [2024-12-16 12:32:20.798545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.825 [2024-12-16 12:32:20.812575] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.825 [2024-12-16 12:32:20.812593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.825 [2024-12-16 12:32:20.826334] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.825 [2024-12-16 12:32:20.826356] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.825 [2024-12-16 12:32:20.839811] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.825 [2024-12-16 12:32:20.839829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.825 [2024-12-16 12:32:20.853597] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.825 [2024-12-16 12:32:20.853615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.825 [2024-12-16 12:32:20.866891] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.826 [2024-12-16 12:32:20.866910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.826 [2024-12-16 12:32:20.880612] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.826 [2024-12-16 12:32:20.880634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.826 [2024-12-16 12:32:20.890069] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.826 [2024-12-16 12:32:20.890089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.085 [2024-12-16 12:32:20.904522] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.085 [2024-12-16 12:32:20.904542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.085 [2024-12-16 12:32:20.918531] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.085 [2024-12-16 12:32:20.918551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.085 [2024-12-16 12:32:20.932139] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.085 [2024-12-16 12:32:20.932158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.086 [2024-12-16 12:32:20.945960] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.086 [2024-12-16 12:32:20.945983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.086 [2024-12-16 12:32:20.959747] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.086 [2024-12-16 12:32:20.959766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.086 [2024-12-16 12:32:20.973743] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.086 [2024-12-16 12:32:20.973763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.086 [2024-12-16 12:32:20.987681] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.086 [2024-12-16 12:32:20.987701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.086 [2024-12-16 12:32:21.001319] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.086 [2024-12-16 12:32:21.001339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.086 [2024-12-16 12:32:21.015183] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.086 [2024-12-16 12:32:21.015203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.086 [2024-12-16 12:32:21.028891] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.086 [2024-12-16 12:32:21.028911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.086 [2024-12-16 12:32:21.042526] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.086 [2024-12-16 12:32:21.042546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.086 [2024-12-16 12:32:21.056344] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.086 [2024-12-16 12:32:21.056364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.086 [2024-12-16 12:32:21.070254] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.086 [2024-12-16 12:32:21.070276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.086 [2024-12-16 12:32:21.083981] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.086 [2024-12-16 12:32:21.084001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.086 [2024-12-16 12:32:21.098130] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.086 [2024-12-16 12:32:21.098150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.086 [2024-12-16 12:32:21.111861] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.086 [2024-12-16 12:32:21.111880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.086 [2024-12-16 12:32:21.125692] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.086 [2024-12-16 12:32:21.125711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.086 [2024-12-16 12:32:21.139099] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.086 [2024-12-16 12:32:21.139126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.346 [2024-12-16 12:32:21.152560] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.346 [2024-12-16 12:32:21.152579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.346 [2024-12-16 12:32:21.166936] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.346 [2024-12-16 12:32:21.166954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.346 [2024-12-16 12:32:21.182827] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.346 [2024-12-16 12:32:21.182845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.346 [2024-12-16 12:32:21.196701] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.346 [2024-12-16 12:32:21.196720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.346 [2024-12-16 12:32:21.210535] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.346 [2024-12-16 12:32:21.210555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.346 [2024-12-16 12:32:21.224105] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.346 [2024-12-16 12:32:21.224131] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.346 [2024-12-16 12:32:21.238286] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.346 [2024-12-16 12:32:21.238306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.346 16758.00 IOPS, 130.92 MiB/s [2024-12-16T11:32:21.413Z] [2024-12-16 12:32:21.251715] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.346 [2024-12-16 12:32:21.251734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.346 [2024-12-16 12:32:21.265649] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.346 [2024-12-16 12:32:21.265675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.346 [2024-12-16 12:32:21.279844] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.346 [2024-12-16 12:32:21.279863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.346 [2024-12-16 12:32:21.290630] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.346 [2024-12-16 12:32:21.290649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.346 [2024-12-16 12:32:21.305223] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.346 [2024-12-16 12:32:21.305242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.346 [2024-12-16 12:32:21.319067] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.346 [2024-12-16 12:32:21.319086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.346 [2024-12-16 12:32:21.327987] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.346 [2024-12-16 12:32:21.328006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.346 [2024-12-16 12:32:21.342162] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.346 [2024-12-16 12:32:21.342181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.346 [2024-12-16 12:32:21.355271] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.346 [2024-12-16 12:32:21.355290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.346 [2024-12-16 12:32:21.368821] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.346 [2024-12-16 12:32:21.368840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.346 [2024-12-16 12:32:21.382659] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.346 [2024-12-16 12:32:21.382678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.346 [2024-12-16 12:32:21.396359] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.346 [2024-12-16 12:32:21.396390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.346 [2024-12-16 12:32:21.410405] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.346 [2024-12-16 12:32:21.410423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.606 [2024-12-16 
12:32:21.423987] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.606 [2024-12-16 12:32:21.424006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.606 [2024-12-16 12:32:21.437428] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.606 [2024-12-16 12:32:21.437447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.606 [2024-12-16 12:32:21.451691] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.606 [2024-12-16 12:32:21.451710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.606 [2024-12-16 12:32:21.465261] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.606 [2024-12-16 12:32:21.465280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.606 [2024-12-16 12:32:21.478864] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.606 [2024-12-16 12:32:21.478883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.606 [2024-12-16 12:32:21.492485] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.606 [2024-12-16 12:32:21.492504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.606 [2024-12-16 12:32:21.506215] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.606 [2024-12-16 12:32:21.506234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.606 [2024-12-16 12:32:21.520131] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.607 [2024-12-16 12:32:21.520155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.607 [2024-12-16 12:32:21.533910] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.607 [2024-12-16 12:32:21.533929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.607 [2024-12-16 12:32:21.547653] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.607 [2024-12-16 12:32:21.547673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.607 [2024-12-16 12:32:21.561473] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.607 [2024-12-16 12:32:21.561492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.607 [2024-12-16 12:32:21.575184] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.607 [2024-12-16 12:32:21.575204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.607 [2024-12-16 12:32:21.588803] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.607 [2024-12-16 12:32:21.588822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.607 [2024-12-16 12:32:21.602693] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.607 [2024-12-16 12:32:21.602712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.607 [2024-12-16 12:32:21.616643] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.607 [2024-12-16 12:32:21.616663] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:55.607 [2024-12-16 12:32:21.629916] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:55.607 [2024-12-16 12:32:21.629937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats every ~10-15 ms from 12:32:21.643 onward; duplicate entries collapsed ...]
16921.00 IOPS, 132.20 MiB/s [2024-12-16T11:32:22.454Z]
[... error pair continues repeating; duplicate entries collapsed ...]
16979.67 IOPS, 132.65 MiB/s [2024-12-16T11:32:23.495Z]
[... error pair continues repeating; duplicate entries collapsed ...]
17019.25 IOPS, 132.96 MiB/s [2024-12-16T11:32:24.276Z]
[... error pair continues repeating; duplicate entries collapsed ...]
17005.00 IOPS, 132.85 MiB/s [2024-12-16T11:32:25.318Z]
00:10:59.251 [2024-12-16 12:32:25.252194] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:59.251 [2024-12-16 12:32:25.252213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:59.251 Latency(us)
00:10:59.251 Device Information   : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:10:59.251 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:10:59.251 Nvme1n1              :       5.01   17008.36     132.88       0.00       0.00    7518.80    3245.59   16602.45
00:10:59.251 ===================================================================================================================
00:10:59.251 Total                :              17008.36     132.88       0.00       0.00    7518.80    3245.59   16602.45
[... the error pair keeps repeating at ~12 ms intervals (12:32:25.261 through 12:32:25.429) while the background job drains; duplicate entries collapsed ...]
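The collapsed run above is a single two-line failure repeated for the whole five-second job: every retry of nvmf_subsystem_add_ns is rejected at subsystem.c:2128 because NSID 1 is already occupied while I/O runs against it. A minimal sketch of a loop that would produce exactly this pair is below; the rpc.py path and the malloc0 bdev name are assumptions for illustration, and the test's real loop lives in test/nvmf/target/zcopy.sh and differs in detail.

    #!/usr/bin/env bash
    # Hedged sketch, not the test's actual code: hammer nvmf_subsystem_add_ns with
    # an NSID that is already in use and the target logs the same two errors.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # path assumed
    nqn=nqn.2016-06.io.spdk:cnode1

    while true; do
        # Fails with "Requested NSID 1 already in use" followed by
        # "Unable to add namespace" while the existing namespace holds NSID 1.
        "$rpc" nvmf_subsystem_add_ns "$nqn" malloc0 -n 1 || true
        sleep 0.01   # comparable to the ~10-15 ms spacing of the collapsed entries
    done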
12:32:25.309638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.510 [2024-12-16 12:32:25.321648] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.510 [2024-12-16 12:32:25.321663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.510 [2024-12-16 12:32:25.333678] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.510 [2024-12-16 12:32:25.333696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.510 [2024-12-16 12:32:25.345711] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.510 [2024-12-16 12:32:25.345727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.510 [2024-12-16 12:32:25.357743] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.510 [2024-12-16 12:32:25.357756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.510 [2024-12-16 12:32:25.369771] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.510 [2024-12-16 12:32:25.369781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.510 [2024-12-16 12:32:25.381806] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.510 [2024-12-16 12:32:25.381818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.510 [2024-12-16 12:32:25.393836] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.510 [2024-12-16 12:32:25.393846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.510 [2024-12-16 12:32:25.405870] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.510 [2024-12-16 12:32:25.405881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.510 [2024-12-16 12:32:25.417898] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.510 [2024-12-16 12:32:25.417909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.510 [2024-12-16 12:32:25.429932] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.510 [2024-12-16 12:32:25.429942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (224468) - No such process 00:10:59.510 12:32:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 224468 00:10:59.510 12:32:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:59.510 12:32:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.510 12:32:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:59.510 12:32:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.510 12:32:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:59.510 12:32:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.510 12:32:25 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:59.510 delay0 00:10:59.510 12:32:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.510 12:32:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:59.510 12:32:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.510 12:32:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:59.510 12:32:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.510 12:32:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:59.769 [2024-12-16 12:32:25.613326] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:06.342 Initializing NVMe Controllers 00:11:06.342 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:06.342 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:06.342 Initialization complete. Launching workers. 00:11:06.342 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 724 00:11:06.342 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1003, failed to submit 41 00:11:06.342 success 818, unsuccessful 185, failed 0 00:11:06.342 12:32:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:11:06.342 12:32:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:11:06.342 12:32:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:06.342 12:32:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:11:06.342 12:32:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:06.342 12:32:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:11:06.342 12:32:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:06.342 12:32:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:06.342 rmmod nvme_tcp 00:11:06.342 rmmod nvme_fabrics 00:11:06.342 rmmod nvme_keyring 00:11:06.342 12:32:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:06.342 12:32:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:11:06.342 12:32:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:11:06.342 12:32:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@513 -- # '[' -n 222618 ']' 00:11:06.342 12:32:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # killprocess 222618 00:11:06.342 12:32:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 222618 ']' 00:11:06.342 12:32:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 222618 00:11:06.342 12:32:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:11:06.342 12:32:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:11:06.342 12:32:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 222618 00:11:06.342 12:32:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:06.342 12:32:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:06.342 12:32:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 222618' 00:11:06.342 killing process with pid 222618 00:11:06.342 12:32:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 222618 00:11:06.342 12:32:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 222618 00:11:06.342 12:32:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:06.342 12:32:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:06.342 12:32:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:06.342 12:32:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:11:06.342 12:32:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-save 00:11:06.342 12:32:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:06.342 12:32:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-restore 00:11:06.342 12:32:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:06.342 12:32:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:06.342 12:32:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:06.342 12:32:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:06.342 12:32:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:08.252 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:08.252 00:11:08.252 real 0m31.500s 00:11:08.252 user 0m43.314s 00:11:08.252 sys 0m9.932s 00:11:08.252 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:08.252 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:08.252 ************************************ 00:11:08.252 END TEST nvmf_zcopy 00:11:08.252 ************************************ 00:11:08.252 12:32:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:08.252 12:32:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:08.252 12:32:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:08.252 12:32:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:08.252 ************************************ 00:11:08.252 START TEST nvmf_nmic 00:11:08.252 ************************************ 00:11:08.252 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:08.512 * Looking for test storage... 
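Note on the output above: the long run of "Requested NSID 1 already in use" / "Unable to add namespace" pairs is the zcopy test working as intended — zcopy.sh keeps retrying nvmf_subsystem_add_ns with an NSID that is already occupied, so every attempt is rejected while the fio workload keeps running, and the test then swaps the namespace for a delay bdev before the abort run. A minimal sketch of the same RPC sequence, assuming a running nvmf_tgt with the TCP transport already created (the loop count is illustrative; bdev names and delay parameters are taken from the log):

  # provoke the expected "Requested NSID 1 already in use" errors
  rpc.py bdev_malloc_create 64 512 -b malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # NSID 1 is now taken
  for i in $(seq 1 5); do
      rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true   # each retry fails
  done
  # replace the namespace with a delay bdev, as zcopy.sh does next
  rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

The abort example (build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50, as invoked above) then drives I/O against the delayed namespace, which is why the summary reports a mix of successful and unsuccessful abort submissions.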
00:11:08.512 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:08.512 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:08.512 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:11:08.512 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:08.512 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:08.512 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:08.512 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:08.512 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:08.512 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:11:08.512 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:11:08.512 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:11:08.512 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:11:08.512 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:11:08.512 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:11:08.512 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:11:08.512 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:08.512 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:11:08.512 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:11:08.512 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:08.512 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:08.512 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:11:08.512 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:11:08.512 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:08.512 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:11:08.512 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:11:08.512 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:11:08.512 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:11:08.512 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:08.512 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:11:08.512 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:11:08.512 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:08.512 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:08.512 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:11:08.512 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:08.512 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:08.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.512 --rc genhtml_branch_coverage=1 00:11:08.512 --rc genhtml_function_coverage=1 00:11:08.512 --rc genhtml_legend=1 00:11:08.512 --rc geninfo_all_blocks=1 00:11:08.512 --rc geninfo_unexecuted_blocks=1 00:11:08.512 00:11:08.512 ' 00:11:08.512 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:08.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.512 --rc genhtml_branch_coverage=1 00:11:08.512 --rc genhtml_function_coverage=1 00:11:08.512 --rc genhtml_legend=1 00:11:08.512 --rc geninfo_all_blocks=1 00:11:08.512 --rc geninfo_unexecuted_blocks=1 00:11:08.512 00:11:08.512 ' 00:11:08.512 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:08.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.512 --rc genhtml_branch_coverage=1 00:11:08.512 --rc genhtml_function_coverage=1 00:11:08.512 --rc genhtml_legend=1 00:11:08.512 --rc geninfo_all_blocks=1 00:11:08.512 --rc geninfo_unexecuted_blocks=1 00:11:08.512 00:11:08.512 ' 00:11:08.512 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:08.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.512 --rc genhtml_branch_coverage=1 00:11:08.512 --rc genhtml_function_coverage=1 00:11:08.512 --rc genhtml_legend=1 00:11:08.512 --rc geninfo_all_blocks=1 00:11:08.512 --rc geninfo_unexecuted_blocks=1 00:11:08.512 00:11:08.512 ' 00:11:08.512 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:08.512 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:08.513 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
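The xtrace above is scripts/common.sh running its version comparison ("lt 1.15 2") to decide whether the installed lcov predates 2.x before choosing the coverage flags that follow. The helper splits each version string on ".", "-" and ":" and compares the fields numerically, treating missing fields as zero; a simplified sketch of that logic (not the verbatim helper):

  # returns 0 (true) when version $1 is strictly less than version $2
  version_lt() {
      local -a v1 v2
      IFS=.-: read -ra v1 <<< "$1"    # "1.15" -> (1 15)
      IFS=.-: read -ra v2 <<< "$2"    # "2"    -> (2)
      local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for ((i = 0; i < n; i++)); do
          local a=${v1[i]:-0} b=${v2[i]:-0}   # missing fields compare as 0
          (( a < b )) && return 0
          (( a > b )) && return 1
      done
      return 1                                # equal is not less-than
  }
  version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov is pre-2.x"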
00:11:08.513 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:08.513 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:08.513 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:08.513 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:08.513 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:08.513 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:08.513 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:08.513 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:08.513 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:08.513 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:11:08.513 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:11:08.513 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:08.513 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:08.513 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:08.513 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:08.513 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:08.513 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:11:08.513 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:08.513 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:08.513 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:08.513 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.513 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.513 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.513 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:08.513 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.513 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:11:08.513 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:08.513 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:08.513 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:08.513 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:08.513 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:08.513 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:08.513 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:08.513 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:08.513 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:08.513 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:08.513 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:08.513 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:08.513 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:08.513 
12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:08.513 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:08.513 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:08.513 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:08.513 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:08.513 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.513 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:08.513 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:08.513 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:11:08.513 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:11:08.513 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:11:08.513 12:32:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:15.088 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:15.088 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:15.088 
12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:15.088 Found net devices under 0000:af:00.0: cvl_0_0 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:15.088 Found net devices under 0000:af:00.1: cvl_0_1 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # is_hw=yes 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- 
# NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:15.088 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:15.089 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:15.089 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:11:15.089 00:11:15.089 --- 10.0.0.2 ping statistics --- 00:11:15.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.089 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:15.089 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:15.089 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:11:15.089 00:11:15.089 --- 10.0.0.1 ping statistics --- 00:11:15.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.089 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # return 0 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # nvmfpid=229923 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # waitforlisten 229923 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 229923 ']' 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:15.089 [2024-12-16 12:32:40.441059] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
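Network plumbing recap for the lines above: the two ice ports probe as cvl_0_0 and cvl_0_1; cvl_0_0 is moved into a fresh network namespace to act as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables rule opens TCP/4420 on the initiator interface, and the two pings prove reachability in both directions before nvmf_tgt is launched inside the namespace. Condensed from the commands traced in the log (interface names as probed on this host):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target NIC lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF   # target runs inside the netns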
00:11:15.089 [2024-12-16 12:32:40.441102] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:15.089 [2024-12-16 12:32:40.513885] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:15.089 [2024-12-16 12:32:40.555599] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:15.089 [2024-12-16 12:32:40.555636] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:15.089 [2024-12-16 12:32:40.555644] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:15.089 [2024-12-16 12:32:40.555650] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:15.089 [2024-12-16 12:32:40.555656] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:15.089 [2024-12-16 12:32:40.558133] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:15.089 [2024-12-16 12:32:40.558158] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:15.089 [2024-12-16 12:32:40.558266] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.089 [2024-12-16 12:32:40.558267] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:15.089 [2024-12-16 12:32:40.725054] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:15.089 Malloc0 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:15.089 [2024-12-16 12:32:40.776468] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:15.089 test case1: single bdev can't be used in multiple subsystems 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:15.089 [2024-12-16 12:32:40.800346] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:15.089 [2024-12-16 12:32:40.800367] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:15.089 [2024-12-16 12:32:40.800378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.089 request: 00:11:15.089 { 00:11:15.089 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:15.089 "namespace": { 00:11:15.089 "bdev_name": "Malloc0", 00:11:15.089 "no_auto_visible": false 
00:11:15.089 }, 00:11:15.089 "method": "nvmf_subsystem_add_ns", 00:11:15.089 "req_id": 1 00:11:15.089 } 00:11:15.089 Got JSON-RPC error response 00:11:15.089 response: 00:11:15.089 { 00:11:15.089 "code": -32602, 00:11:15.089 "message": "Invalid parameters" 00:11:15.089 } 00:11:15.089 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:15.090 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:11:15.090 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:15.090 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:15.090 Adding namespace failed - expected result. 00:11:15.090 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:15.090 test case2: host connect to nvmf target in multiple paths 00:11:15.090 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:11:15.090 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.090 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:15.090 [2024-12-16 12:32:40.812484] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:11:15.090 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.090 12:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:16.028 12:32:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:11:16.966 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:16.966 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:11:16.966 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:16.966 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:16.966 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:11:19.502 12:32:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:19.502 12:32:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:19.502 12:32:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:19.502 12:32:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:19.502 12:32:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:19.502 12:32:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:11:19.503 12:32:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:19.503 [global] 00:11:19.503 thread=1 00:11:19.503 invalidate=1 00:11:19.503 rw=write 00:11:19.503 time_based=1 00:11:19.503 runtime=1 00:11:19.503 ioengine=libaio 00:11:19.503 direct=1 00:11:19.503 bs=4096 00:11:19.503 iodepth=1 00:11:19.503 norandommap=0 00:11:19.503 numjobs=1 00:11:19.503 00:11:19.503 verify_dump=1 00:11:19.503 verify_backlog=512 00:11:19.503 verify_state_save=0 00:11:19.503 do_verify=1 00:11:19.503 verify=crc32c-intel 00:11:19.503 [job0] 00:11:19.503 filename=/dev/nvme0n1 00:11:19.503 Could not set queue depth (nvme0n1) 00:11:19.761 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:19.761 fio-3.35 00:11:19.761 Starting 1 thread 00:11:20.700 00:11:20.700 job0: (groupid=0, jobs=1): err= 0: pid=230848: Mon Dec 16 12:32:46 2024 00:11:20.700 read: IOPS=22, BW=90.1KiB/s (92.3kB/s)(92.0KiB/1021msec) 00:11:20.700 slat (nsec): min=9984, max=25361, avg=21306.78, stdev=2621.40 00:11:20.700 clat (usec): min=40917, max=41129, avg=40977.55, stdev=42.72 00:11:20.700 lat (usec): min=40939, max=41139, avg=40998.86, stdev=40.93 00:11:20.700 clat percentiles (usec): 00:11:20.700 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:11:20.700 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:20.700 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:20.700 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:20.700 | 99.99th=[41157] 00:11:20.700 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:11:20.700 slat (nsec): min=9724, max=46088, avg=10928.06, stdev=2116.13 00:11:20.700 clat (usec): min=115, max=309, avg=138.76, stdev=19.45 00:11:20.700 lat (usec): min=125, max=355, avg=149.68, stdev=20.34 00:11:20.700 clat percentiles (usec): 00:11:20.700 | 1.00th=[ 119], 5.00th=[ 121], 10.00th=[ 123], 20.00th=[ 125], 00:11:20.700 | 30.00th=[ 126], 40.00th=[ 127], 50.00th=[ 130], 60.00th=[ 135], 00:11:20.700 | 70.00th=[ 149], 80.00th=[ 159], 90.00th=[ 167], 95.00th=[ 172], 00:11:20.700 | 99.00th=[ 184], 99.50th=[ 188], 99.90th=[ 310], 99.95th=[ 310], 00:11:20.700 | 99.99th=[ 310] 00:11:20.700 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:11:20.700 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:20.700 lat (usec) : 250=95.51%, 500=0.19% 00:11:20.700 lat (msec) : 50=4.30% 00:11:20.700 cpu : usr=0.69%, sys=0.49%, ctx=535, majf=0, minf=1 00:11:20.700 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:20.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.700 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.700 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.700 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:20.700 00:11:20.700 Run status group 0 (all jobs): 00:11:20.700 READ: bw=90.1KiB/s (92.3kB/s), 90.1KiB/s-90.1KiB/s (92.3kB/s-92.3kB/s), io=92.0KiB (94.2kB), run=1021-1021msec 00:11:20.700 WRITE: bw=2006KiB/s (2054kB/s), 2006KiB/s-2006KiB/s (2054kB/s-2054kB/s), io=2048KiB (2097kB), run=1021-1021msec 00:11:20.700 00:11:20.700 Disk stats (read/write): 00:11:20.700 nvme0n1: ios=70/512, merge=0/0, ticks=834/66, in_queue=900, util=91.38% 00:11:20.700 12:32:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:20.960 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:20.960 12:32:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:20.960 12:32:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:11:20.960 12:32:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:20.960 12:32:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:20.960 12:32:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:20.960 12:32:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:20.960 12:32:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:11:20.960 12:32:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:20.960 12:32:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:20.960 12:32:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:20.960 12:32:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:11:20.960 12:32:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:20.960 12:32:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:11:20.960 12:32:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:20.960 12:32:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:20.960 rmmod nvme_tcp 00:11:20.960 rmmod nvme_fabrics 00:11:20.960 rmmod nvme_keyring 00:11:20.960 12:32:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:20.960 12:32:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:11:20.960 12:32:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:11:20.960 12:32:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@513 -- # '[' -n 229923 ']' 00:11:20.960 12:32:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # killprocess 229923 00:11:20.960 12:32:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 229923 ']' 00:11:20.960 12:32:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 229923 00:11:20.960 12:32:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:11:20.960 12:32:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:20.960 12:32:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 229923 00:11:21.220 12:32:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:21.220 12:32:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:21.220 12:32:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 229923' 00:11:21.220 killing process with pid 229923 00:11:21.220 12:32:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 229923 00:11:21.220 12:32:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@974 -- # wait 229923 00:11:21.220 12:32:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:21.220 12:32:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:21.220 12:32:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:21.220 12:32:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:11:21.220 12:32:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-save 00:11:21.220 12:32:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:21.220 12:32:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-restore 00:11:21.220 12:32:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:21.220 12:32:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:21.220 12:32:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.220 12:32:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:21.220 12:32:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:23.759 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:23.759 00:11:23.759 real 0m15.068s 00:11:23.759 user 0m33.690s 00:11:23.759 sys 0m5.392s 00:11:23.759 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:23.759 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:23.759 ************************************ 00:11:23.759 END TEST nvmf_nmic 00:11:23.759 ************************************ 00:11:23.759 12:32:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:23.759 12:32:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:23.759 12:32:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:23.759 12:32:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:23.759 ************************************ 00:11:23.759 START TEST nvmf_fio_target 00:11:23.759 ************************************ 00:11:23.759 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:23.759 * Looking for test storage... 
00:11:23.759 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:23.759 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:23.759 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:11:23.759 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:23.759 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:23.759 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:23.759 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:23.759 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:23.759 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:23.759 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:23.759 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:23.759 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:23.759 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:23.759 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:23.759 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:23.759 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:23.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.760 --rc genhtml_branch_coverage=1 00:11:23.760 --rc genhtml_function_coverage=1 00:11:23.760 --rc genhtml_legend=1 00:11:23.760 --rc geninfo_all_blocks=1 00:11:23.760 --rc geninfo_unexecuted_blocks=1 00:11:23.760 00:11:23.760 ' 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:23.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.760 --rc genhtml_branch_coverage=1 00:11:23.760 --rc genhtml_function_coverage=1 00:11:23.760 --rc genhtml_legend=1 00:11:23.760 --rc geninfo_all_blocks=1 00:11:23.760 --rc geninfo_unexecuted_blocks=1 00:11:23.760 00:11:23.760 ' 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:23.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.760 --rc genhtml_branch_coverage=1 00:11:23.760 --rc genhtml_function_coverage=1 00:11:23.760 --rc genhtml_legend=1 00:11:23.760 --rc geninfo_all_blocks=1 00:11:23.760 --rc geninfo_unexecuted_blocks=1 00:11:23.760 00:11:23.760 ' 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:23.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.760 --rc genhtml_branch_coverage=1 00:11:23.760 --rc genhtml_function_coverage=1 00:11:23.760 --rc genhtml_legend=1 00:11:23.760 --rc geninfo_all_blocks=1 00:11:23.760 --rc geninfo_unexecuted_blocks=1 00:11:23.760 00:11:23.760 ' 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:23.760 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:23.760 12:32:49 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:11:23.760 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.337 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:30.337 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:11:30.337 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:30.337 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:30.337 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:30.338 12:32:55 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:30.338 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:30.338 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:30.338 12:32:55 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:30.338 Found net devices under 0000:af:00.0: cvl_0_0 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:30.338 Found net devices under 0000:af:00.1: cvl_0_1 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # is_hw=yes 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:30.338 12:32:55 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:11:30.338 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:30.338 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.420 ms
00:11:30.338
00:11:30.338 --- 10.0.0.2 ping statistics ---
00:11:30.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:30.338 rtt min/avg/max/mdev = 0.420/0.420/0.420/0.000 ms
00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:30.338 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:30.338 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:11:30.338 00:11:30.338 --- 10.0.0.1 ping statistics --- 00:11:30.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:30.338 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # return 0 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:30.338 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:30.339 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:30.339 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.339 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # nvmfpid=234681 00:11:30.339 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:30.339 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # waitforlisten 234681 00:11:30.339 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 234681 ']' 00:11:30.339 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:30.339 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:30.339 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:30.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:30.339 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:30.339 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.339 [2024-12-16 12:32:55.657085] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:11:30.339 [2024-12-16 12:32:55.657134] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:30.339 [2024-12-16 12:32:55.731726] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:30.339 [2024-12-16 12:32:55.772077] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:30.339 [2024-12-16 12:32:55.772126] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:30.339 [2024-12-16 12:32:55.772135] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:30.339 [2024-12-16 12:32:55.772141] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:30.339 [2024-12-16 12:32:55.772147] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:30.339 [2024-12-16 12:32:55.772190] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:30.339 [2024-12-16 12:32:55.772209] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:30.339 [2024-12-16 12:32:55.772307] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.339 [2024-12-16 12:32:55.772308] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:30.339 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:30.339 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:11:30.339 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:30.339 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:30.339 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.339 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:30.339 12:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:30.339 [2024-12-16 12:32:56.084891] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:30.339 12:32:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:30.339 12:32:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:30.339 12:32:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:30.598 12:32:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:30.598 12:32:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:30.857 12:32:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:30.857 12:32:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:31.116 12:32:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:31.116 12:32:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:31.116 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:31.376 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:31.376 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:31.635 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:31.635 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:31.894 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:31.894 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:32.154 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:32.154 12:32:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:32.154 12:32:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:32.413 12:32:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:32.413 12:32:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:32.671 12:32:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:32.929 [2024-12-16 12:32:58.759098] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:32.930 12:32:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:32.930 12:32:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:33.187 12:32:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:34.565 12:33:00 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4
00:11:34.565 12:33:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0
00:11:34.565 12:33:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:11:34.565 12:33:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]]
00:11:34.565 12:33:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4
00:11:34.565 12:33:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2
00:11:36.471 12:33:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:11:36.471 12:33:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:11:36.471 12:33:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:11:36.471 12:33:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4
00:11:36.471 12:33:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:11:36.471 12:33:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0
00:11:36.471 12:33:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:11:36.471 [global]
00:11:36.471 thread=1
00:11:36.471 invalidate=1
00:11:36.471 rw=write
00:11:36.471 time_based=1
00:11:36.471 runtime=1
00:11:36.471 ioengine=libaio
00:11:36.471 direct=1
00:11:36.471 bs=4096
00:11:36.471 iodepth=1
00:11:36.471 norandommap=0
00:11:36.471 numjobs=1
00:11:36.471
00:11:36.471 verify_dump=1
00:11:36.471 verify_backlog=512
00:11:36.471 verify_state_save=0
00:11:36.471 do_verify=1
00:11:36.471 verify=crc32c-intel
00:11:36.471 [job0]
00:11:36.471 filename=/dev/nvme0n1
00:11:36.471 [job1]
00:11:36.471 filename=/dev/nvme0n2
00:11:36.471 [job2]
00:11:36.471 filename=/dev/nvme0n3
00:11:36.471 [job3]
00:11:36.471 filename=/dev/nvme0n4
00:11:36.471 Could not set queue depth (nvme0n1)
00:11:36.471 Could not set queue depth (nvme0n2)
00:11:36.471 Could not set queue depth (nvme0n3)
00:11:36.471 Could not set queue depth (nvme0n4)
00:11:36.731 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:11:36.731 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:11:36.731 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:11:36.731 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:11:36.731 fio-3.35
00:11:36.731 Starting 4 threads
00:11:38.138
00:11:38.138 job0: (groupid=0, jobs=1): err= 0: pid=236241: Mon Dec 16 12:33:03 2024
00:11:38.138 read: IOPS=137, BW=551KiB/s (565kB/s)(552KiB/1001msec)
00:11:38.138 slat (nsec): min=7211, max=43729, avg=10724.34, stdev=6201.72
00:11:38.138 clat (usec): min=192, max=41981, avg=6490.44, stdev=14809.19
00:11:38.138 lat (usec): min=200, max=42004, avg=6501.16, stdev=14814.16
00:11:38.138 clat percentiles (usec):
00:11:38.138 | 1.00th=[ 204], 5.00th=[ 217], 10.00th=[ 223], 20.00th=[ 229],
00:11:38.138 | 30.00th=[ 235], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 247], 00:11:38.138 | 70.00th=[ 251], 80.00th=[ 260], 90.00th=[41157], 95.00th=[41681], 00:11:38.138 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:38.138 | 99.99th=[42206] 00:11:38.138 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:11:38.138 slat (nsec): min=9877, max=55678, avg=11803.89, stdev=2646.38 00:11:38.138 clat (usec): min=147, max=398, avg=186.03, stdev=23.13 00:11:38.138 lat (usec): min=158, max=453, avg=197.84, stdev=24.24 00:11:38.138 clat percentiles (usec): 00:11:38.138 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 167], 20.00th=[ 172], 00:11:38.138 | 30.00th=[ 176], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 186], 00:11:38.138 | 70.00th=[ 192], 80.00th=[ 198], 90.00th=[ 206], 95.00th=[ 221], 00:11:38.138 | 99.00th=[ 289], 99.50th=[ 293], 99.90th=[ 400], 99.95th=[ 400], 00:11:38.138 | 99.99th=[ 400] 00:11:38.138 bw ( KiB/s): min= 4096, max= 4096, per=22.53%, avg=4096.00, stdev= 0.00, samples=1 00:11:38.138 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:38.138 lat (usec) : 250=90.62%, 500=6.15% 00:11:38.138 lat (msec) : 50=3.23% 00:11:38.138 cpu : usr=0.30%, sys=1.40%, ctx=650, majf=0, minf=1 00:11:38.138 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:38.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.138 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.138 issued rwts: total=138,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:38.138 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:38.138 job1: (groupid=0, jobs=1): err= 0: pid=236253: Mon Dec 16 12:33:03 2024 00:11:38.138 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:11:38.138 slat (nsec): min=6802, max=25508, avg=7738.37, stdev=1242.55 00:11:38.138 clat (usec): min=148, max=398, avg=197.36, stdev=30.15 00:11:38.138 lat (usec): min=156, max=406, avg=205.10, stdev=30.22 00:11:38.138 clat percentiles (usec): 00:11:38.138 | 1.00th=[ 161], 5.00th=[ 165], 10.00th=[ 172], 20.00th=[ 176], 00:11:38.138 | 30.00th=[ 180], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 190], 00:11:38.138 | 70.00th=[ 200], 80.00th=[ 233], 90.00th=[ 249], 95.00th=[ 258], 00:11:38.138 | 99.00th=[ 269], 99.50th=[ 273], 99.90th=[ 338], 99.95th=[ 375], 00:11:38.138 | 99.99th=[ 400] 00:11:38.138 write: IOPS=3041, BW=11.9MiB/s (12.5MB/s)(11.9MiB/1001msec); 0 zone resets 00:11:38.138 slat (nsec): min=9593, max=51305, avg=10852.85, stdev=1585.20 00:11:38.138 clat (usec): min=111, max=366, avg=140.12, stdev=23.71 00:11:38.138 lat (usec): min=121, max=393, avg=150.98, stdev=24.21 00:11:38.138 clat percentiles (usec): 00:11:38.138 | 1.00th=[ 116], 5.00th=[ 121], 10.00th=[ 123], 20.00th=[ 126], 00:11:38.138 | 30.00th=[ 128], 40.00th=[ 130], 50.00th=[ 133], 60.00th=[ 137], 00:11:38.138 | 70.00th=[ 143], 80.00th=[ 153], 90.00th=[ 165], 95.00th=[ 178], 00:11:38.138 | 99.00th=[ 243], 99.50th=[ 247], 99.90th=[ 269], 99.95th=[ 322], 00:11:38.138 | 99.99th=[ 367] 00:11:38.138 bw ( KiB/s): min=12288, max=12288, per=67.60%, avg=12288.00, stdev= 0.00, samples=1 00:11:38.138 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:38.138 lat (usec) : 250=95.74%, 500=4.26% 00:11:38.138 cpu : usr=4.40%, sys=8.70%, ctx=5605, majf=0, minf=1 00:11:38.138 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:38.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:11:38.138 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.138 issued rwts: total=2560,3045,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:38.138 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:38.138 job2: (groupid=0, jobs=1): err= 0: pid=236259: Mon Dec 16 12:33:03 2024 00:11:38.139 read: IOPS=22, BW=91.5KiB/s (93.7kB/s)(92.0KiB/1005msec) 00:11:38.139 slat (nsec): min=11903, max=23297, avg=14467.96, stdev=3661.41 00:11:38.139 clat (usec): min=227, max=41024, avg=39191.16, stdev=8494.12 00:11:38.139 lat (usec): min=239, max=41039, avg=39205.62, stdev=8494.62 00:11:38.139 clat percentiles (usec): 00:11:38.139 | 1.00th=[ 227], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:11:38.139 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:38.139 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:38.139 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:38.139 | 99.99th=[41157] 00:11:38.139 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:11:38.139 slat (nsec): min=11340, max=91965, avg=17255.33, stdev=6731.13 00:11:38.139 clat (usec): min=137, max=299, avg=180.50, stdev=17.76 00:11:38.139 lat (usec): min=158, max=391, avg=197.76, stdev=19.54 00:11:38.139 clat percentiles (usec): 00:11:38.139 | 1.00th=[ 149], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 167], 00:11:38.139 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 182], 00:11:38.139 | 70.00th=[ 186], 80.00th=[ 192], 90.00th=[ 202], 95.00th=[ 212], 00:11:38.139 | 99.00th=[ 237], 99.50th=[ 273], 99.90th=[ 302], 99.95th=[ 302], 00:11:38.139 | 99.99th=[ 302] 00:11:38.139 bw ( KiB/s): min= 4096, max= 4096, per=22.53%, avg=4096.00, stdev= 0.00, samples=1 00:11:38.139 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:38.139 lat (usec) : 250=95.33%, 500=0.56% 00:11:38.139 lat (msec) : 50=4.11% 00:11:38.139 cpu : usr=0.50%, sys=1.00%, ctx=537, majf=0, minf=1 00:11:38.139 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:38.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.139 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.139 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:38.139 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:38.139 job3: (groupid=0, jobs=1): err= 0: pid=236260: Mon Dec 16 12:33:03 2024 00:11:38.139 read: IOPS=22, BW=91.3KiB/s (93.5kB/s)(92.0KiB/1008msec) 00:11:38.139 slat (nsec): min=10115, max=23521, avg=20833.30, stdev=3697.88 00:11:38.139 clat (usec): min=239, max=41044, avg=39186.80, stdev=8490.59 00:11:38.139 lat (usec): min=261, max=41066, avg=39207.63, stdev=8490.34 00:11:38.139 clat percentiles (usec): 00:11:38.139 | 1.00th=[ 239], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:11:38.139 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:38.139 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:38.139 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:38.139 | 99.99th=[41157] 00:11:38.139 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:11:38.139 slat (nsec): min=10263, max=49521, avg=11996.17, stdev=3288.09 00:11:38.139 clat (usec): min=126, max=414, avg=192.46, stdev=28.32 00:11:38.139 lat (usec): min=138, max=427, avg=204.45, stdev=29.12 00:11:38.139 clat percentiles (usec): 
00:11:38.139 | 1.00th=[ 141], 5.00th=[ 157], 10.00th=[ 165], 20.00th=[ 174], 00:11:38.139 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 188], 60.00th=[ 192], 00:11:38.139 | 70.00th=[ 198], 80.00th=[ 210], 90.00th=[ 237], 95.00th=[ 245], 00:11:38.139 | 99.00th=[ 251], 99.50th=[ 265], 99.90th=[ 416], 99.95th=[ 416], 00:11:38.139 | 99.99th=[ 416] 00:11:38.139 bw ( KiB/s): min= 4096, max= 4096, per=22.53%, avg=4096.00, stdev= 0.00, samples=1 00:11:38.139 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:38.139 lat (usec) : 250=94.77%, 500=1.12% 00:11:38.139 lat (msec) : 50=4.11% 00:11:38.139 cpu : usr=0.20%, sys=1.19%, ctx=535, majf=0, minf=1 00:11:38.139 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:38.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.139 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.139 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:38.139 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:38.139 00:11:38.139 Run status group 0 (all jobs): 00:11:38.139 READ: bw=10.6MiB/s (11.1MB/s), 91.3KiB/s-9.99MiB/s (93.5kB/s-10.5MB/s), io=10.7MiB (11.2MB), run=1001-1008msec 00:11:38.139 WRITE: bw=17.8MiB/s (18.6MB/s), 2032KiB/s-11.9MiB/s (2081kB/s-12.5MB/s), io=17.9MiB (18.8MB), run=1001-1008msec 00:11:38.139 00:11:38.139 Disk stats (read/write): 00:11:38.139 nvme0n1: ios=67/512, merge=0/0, ticks=725/91, in_queue=816, util=85.47% 00:11:38.139 nvme0n2: ios=2126/2560, merge=0/0, ticks=406/348, in_queue=754, util=86.01% 00:11:38.139 nvme0n3: ios=46/512, merge=0/0, ticks=1681/85, in_queue=1766, util=97.16% 00:11:38.139 nvme0n4: ios=18/512, merge=0/0, ticks=697/97, in_queue=794, util=89.57% 00:11:38.139 12:33:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:38.139 [global] 00:11:38.139 thread=1 00:11:38.139 invalidate=1 00:11:38.139 rw=randwrite 00:11:38.139 time_based=1 00:11:38.139 runtime=1 00:11:38.139 ioengine=libaio 00:11:38.139 direct=1 00:11:38.139 bs=4096 00:11:38.139 iodepth=1 00:11:38.139 norandommap=0 00:11:38.139 numjobs=1 00:11:38.139 00:11:38.139 verify_dump=1 00:11:38.139 verify_backlog=512 00:11:38.139 verify_state_save=0 00:11:38.139 do_verify=1 00:11:38.139 verify=crc32c-intel 00:11:38.139 [job0] 00:11:38.139 filename=/dev/nvme0n1 00:11:38.139 [job1] 00:11:38.139 filename=/dev/nvme0n2 00:11:38.139 [job2] 00:11:38.139 filename=/dev/nvme0n3 00:11:38.139 [job3] 00:11:38.139 filename=/dev/nvme0n4 00:11:38.139 Could not set queue depth (nvme0n1) 00:11:38.139 Could not set queue depth (nvme0n2) 00:11:38.139 Could not set queue depth (nvme0n3) 00:11:38.139 Could not set queue depth (nvme0n4) 00:11:38.396 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:38.396 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:38.396 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:38.397 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:38.397 fio-3.35 00:11:38.397 Starting 4 threads 00:11:39.780 00:11:39.780 job0: (groupid=0, jobs=1): err= 0: pid=236623: Mon Dec 16 12:33:05 2024 00:11:39.780 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 
00:11:39.780 slat (nsec): min=5836, max=24479, avg=7793.89, stdev=1584.93 00:11:39.780 clat (usec): min=183, max=32862, avg=261.55, stdev=722.51 00:11:39.780 lat (usec): min=190, max=32871, avg=269.34, stdev=722.58 00:11:39.780 clat percentiles (usec): 00:11:39.780 | 1.00th=[ 194], 5.00th=[ 202], 10.00th=[ 208], 20.00th=[ 215], 00:11:39.780 | 30.00th=[ 219], 40.00th=[ 225], 50.00th=[ 231], 60.00th=[ 237], 00:11:39.780 | 70.00th=[ 249], 80.00th=[ 265], 90.00th=[ 293], 95.00th=[ 371], 00:11:39.780 | 99.00th=[ 433], 99.50th=[ 461], 99.90th=[ 523], 99.95th=[ 553], 00:11:39.780 | 99.99th=[32900] 00:11:39.780 write: IOPS=2496, BW=9986KiB/s (10.2MB/s)(9996KiB/1001msec); 0 zone resets 00:11:39.780 slat (nsec): min=8172, max=84220, avg=10726.33, stdev=2808.19 00:11:39.780 clat (usec): min=114, max=438, avg=163.98, stdev=24.27 00:11:39.780 lat (usec): min=123, max=448, avg=174.71, stdev=24.87 00:11:39.781 clat percentiles (usec): 00:11:39.781 | 1.00th=[ 129], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 147], 00:11:39.781 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 163], 00:11:39.781 | 70.00th=[ 169], 80.00th=[ 178], 90.00th=[ 190], 95.00th=[ 204], 00:11:39.781 | 99.00th=[ 255], 99.50th=[ 273], 99.90th=[ 322], 99.95th=[ 363], 00:11:39.781 | 99.99th=[ 441] 00:11:39.781 bw ( KiB/s): min=10072, max=10072, per=30.53%, avg=10072.00, stdev= 0.00, samples=1 00:11:39.781 iops : min= 2518, max= 2518, avg=2518.00, stdev= 0.00, samples=1 00:11:39.781 lat (usec) : 250=86.41%, 500=13.46%, 750=0.11% 00:11:39.781 lat (msec) : 50=0.02% 00:11:39.781 cpu : usr=1.70%, sys=5.10%, ctx=4548, majf=0, minf=1 00:11:39.781 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:39.781 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:39.781 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:39.781 issued rwts: total=2048,2499,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:39.781 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:39.781 job1: (groupid=0, jobs=1): err= 0: pid=236624: Mon Dec 16 12:33:05 2024 00:11:39.781 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:11:39.781 slat (nsec): min=5931, max=39016, avg=8565.61, stdev=2398.94 00:11:39.781 clat (usec): min=179, max=6554, avg=261.78, stdev=151.85 00:11:39.781 lat (usec): min=187, max=6563, avg=270.35, stdev=152.02 00:11:39.781 clat percentiles (usec): 00:11:39.781 | 1.00th=[ 188], 5.00th=[ 198], 10.00th=[ 206], 20.00th=[ 217], 00:11:39.781 | 30.00th=[ 227], 40.00th=[ 235], 50.00th=[ 241], 60.00th=[ 249], 00:11:39.781 | 70.00th=[ 258], 80.00th=[ 285], 90.00th=[ 363], 95.00th=[ 396], 00:11:39.781 | 99.00th=[ 420], 99.50th=[ 449], 99.90th=[ 701], 99.95th=[ 857], 00:11:39.781 | 99.99th=[ 6587] 00:11:39.781 write: IOPS=2463, BW=9854KiB/s (10.1MB/s)(9864KiB/1001msec); 0 zone resets 00:11:39.781 slat (nsec): min=8322, max=45781, avg=11378.15, stdev=2340.05 00:11:39.781 clat (usec): min=124, max=962, avg=164.39, stdev=31.33 00:11:39.781 lat (usec): min=136, max=973, avg=175.76, stdev=31.45 00:11:39.781 clat percentiles (usec): 00:11:39.781 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 147], 00:11:39.781 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 163], 00:11:39.781 | 70.00th=[ 169], 80.00th=[ 176], 90.00th=[ 188], 95.00th=[ 233], 00:11:39.781 | 99.00th=[ 247], 99.50th=[ 253], 99.90th=[ 515], 99.95th=[ 594], 00:11:39.781 | 99.99th=[ 963] 00:11:39.781 bw ( KiB/s): min=10136, max=10136, per=30.72%, avg=10136.00, stdev= 0.00, samples=1 
00:11:39.781 iops : min= 2534, max= 2534, avg=2534.00, stdev= 0.00, samples=1 00:11:39.781 lat (usec) : 250=82.63%, 500=17.15%, 750=0.16%, 1000=0.04% 00:11:39.781 lat (msec) : 10=0.02% 00:11:39.781 cpu : usr=2.00%, sys=5.20%, ctx=4515, majf=0, minf=1 00:11:39.781 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:39.781 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:39.781 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:39.781 issued rwts: total=2048,2466,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:39.781 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:39.781 job2: (groupid=0, jobs=1): err= 0: pid=236625: Mon Dec 16 12:33:05 2024 00:11:39.781 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:39.781 slat (nsec): min=6426, max=25544, avg=7747.07, stdev=1242.85 00:11:39.781 clat (usec): min=192, max=42336, avg=416.77, stdev=2341.30 00:11:39.781 lat (usec): min=200, max=42346, avg=424.52, stdev=2341.96 00:11:39.781 clat percentiles (usec): 00:11:39.781 | 1.00th=[ 200], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 223], 00:11:39.781 | 30.00th=[ 229], 40.00th=[ 237], 50.00th=[ 243], 60.00th=[ 251], 00:11:39.781 | 70.00th=[ 265], 80.00th=[ 285], 90.00th=[ 343], 95.00th=[ 400], 00:11:39.781 | 99.00th=[ 433], 99.50th=[13304], 99.90th=[42206], 99.95th=[42206], 00:11:39.781 | 99.99th=[42206] 00:11:39.781 write: IOPS=1952, BW=7808KiB/s (7996kB/s)(7816KiB/1001msec); 0 zone resets 00:11:39.781 slat (nsec): min=9468, max=45726, avg=10734.60, stdev=1620.91 00:11:39.781 clat (usec): min=121, max=318, avg=162.86, stdev=17.36 00:11:39.781 lat (usec): min=132, max=364, avg=173.60, stdev=17.57 00:11:39.781 clat percentiles (usec): 00:11:39.781 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 151], 00:11:39.781 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 163], 00:11:39.781 | 70.00th=[ 167], 80.00th=[ 174], 90.00th=[ 182], 95.00th=[ 190], 00:11:39.781 | 99.00th=[ 243], 99.50th=[ 245], 99.90th=[ 247], 99.95th=[ 318], 00:11:39.781 | 99.99th=[ 318] 00:11:39.781 bw ( KiB/s): min= 5040, max= 5040, per=15.27%, avg=5040.00, stdev= 0.00, samples=1 00:11:39.781 iops : min= 1260, max= 1260, avg=1260.00, stdev= 0.00, samples=1 00:11:39.781 lat (usec) : 250=82.21%, 500=17.51%, 750=0.06% 00:11:39.781 lat (msec) : 20=0.06%, 50=0.17% 00:11:39.781 cpu : usr=1.30%, sys=3.90%, ctx=3491, majf=0, minf=1 00:11:39.781 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:39.781 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:39.781 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:39.781 issued rwts: total=1536,1954,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:39.781 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:39.781 job3: (groupid=0, jobs=1): err= 0: pid=236626: Mon Dec 16 12:33:05 2024 00:11:39.781 read: IOPS=1308, BW=5233KiB/s (5359kB/s)(5364KiB/1025msec) 00:11:39.781 slat (nsec): min=6578, max=36521, avg=8292.55, stdev=2635.61 00:11:39.781 clat (usec): min=174, max=42015, avg=546.12, stdev=3278.01 00:11:39.781 lat (usec): min=181, max=42039, avg=554.41, stdev=3278.17 00:11:39.781 clat percentiles (usec): 00:11:39.781 | 1.00th=[ 188], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 210], 00:11:39.781 | 30.00th=[ 217], 40.00th=[ 225], 50.00th=[ 235], 60.00th=[ 245], 00:11:39.781 | 70.00th=[ 255], 80.00th=[ 297], 90.00th=[ 412], 95.00th=[ 461], 00:11:39.781 | 99.00th=[ 523], 99.50th=[40633], 
99.90th=[41681], 99.95th=[42206], 00:11:39.781 | 99.99th=[42206] 00:11:39.781 write: IOPS=1498, BW=5994KiB/s (6138kB/s)(6144KiB/1025msec); 0 zone resets 00:11:39.781 slat (nsec): min=8552, max=45306, avg=10898.85, stdev=2494.39 00:11:39.781 clat (usec): min=128, max=371, avg=166.82, stdev=18.37 00:11:39.781 lat (usec): min=140, max=416, avg=177.72, stdev=19.17 00:11:39.781 clat percentiles (usec): 00:11:39.781 | 1.00th=[ 139], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 153], 00:11:39.781 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 167], 00:11:39.781 | 70.00th=[ 174], 80.00th=[ 178], 90.00th=[ 188], 95.00th=[ 198], 00:11:39.781 | 99.00th=[ 235], 99.50th=[ 253], 99.90th=[ 318], 99.95th=[ 371], 00:11:39.781 | 99.99th=[ 371] 00:11:39.781 bw ( KiB/s): min= 4096, max= 8192, per=18.62%, avg=6144.00, stdev=2896.31, samples=2 00:11:39.781 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:11:39.781 lat (usec) : 250=83.59%, 500=15.47%, 750=0.56% 00:11:39.781 lat (msec) : 10=0.03%, 20=0.03%, 50=0.31% 00:11:39.781 cpu : usr=1.37%, sys=2.83%, ctx=2878, majf=0, minf=1 00:11:39.781 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:39.781 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:39.781 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:39.781 issued rwts: total=1341,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:39.781 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:39.781 00:11:39.781 Run status group 0 (all jobs): 00:11:39.781 READ: bw=26.6MiB/s (27.9MB/s), 5233KiB/s-8184KiB/s (5359kB/s-8380kB/s), io=27.2MiB (28.6MB), run=1001-1025msec 00:11:39.781 WRITE: bw=32.2MiB/s (33.8MB/s), 5994KiB/s-9986KiB/s (6138kB/s-10.2MB/s), io=33.0MiB (34.6MB), run=1001-1025msec 00:11:39.781 00:11:39.781 Disk stats (read/write): 00:11:39.781 nvme0n1: ios=1562/2048, merge=0/0, ticks=1363/327, in_queue=1690, util=96.19% 00:11:39.781 nvme0n2: ios=1631/2048, merge=0/0, ticks=1388/328, in_queue=1716, util=96.08% 00:11:39.781 nvme0n3: ios=1080/1486, merge=0/0, ticks=1854/236, in_queue=2090, util=95.94% 00:11:39.781 nvme0n4: ios=1056/1086, merge=0/0, ticks=1137/170, in_queue=1307, util=95.74% 00:11:39.781 12:33:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:39.781 [global] 00:11:39.781 thread=1 00:11:39.781 invalidate=1 00:11:39.781 rw=write 00:11:39.781 time_based=1 00:11:39.781 runtime=1 00:11:39.781 ioengine=libaio 00:11:39.781 direct=1 00:11:39.781 bs=4096 00:11:39.781 iodepth=128 00:11:39.781 norandommap=0 00:11:39.781 numjobs=1 00:11:39.781 00:11:39.781 verify_dump=1 00:11:39.781 verify_backlog=512 00:11:39.781 verify_state_save=0 00:11:39.781 do_verify=1 00:11:39.781 verify=crc32c-intel 00:11:39.781 [job0] 00:11:39.781 filename=/dev/nvme0n1 00:11:39.781 [job1] 00:11:39.781 filename=/dev/nvme0n2 00:11:39.781 [job2] 00:11:39.781 filename=/dev/nvme0n3 00:11:39.781 [job3] 00:11:39.781 filename=/dev/nvme0n4 00:11:39.781 Could not set queue depth (nvme0n1) 00:11:39.781 Could not set queue depth (nvme0n2) 00:11:39.781 Could not set queue depth (nvme0n3) 00:11:39.781 Could not set queue depth (nvme0n4) 00:11:40.039 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:40.039 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:40.039 job2: 
(g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:40.039 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:40.039 fio-3.35 00:11:40.039 Starting 4 threads 00:11:41.411 00:11:41.411 job0: (groupid=0, jobs=1): err= 0: pid=236998: Mon Dec 16 12:33:07 2024 00:11:41.411 read: IOPS=5806, BW=22.7MiB/s (23.8MB/s)(23.7MiB/1046msec) 00:11:41.411 slat (nsec): min=1413, max=9460.0k, avg=89704.06, stdev=650857.76 00:11:41.411 clat (usec): min=3641, max=56398, avg=11783.38, stdev=6080.67 00:11:41.411 lat (usec): min=3648, max=56736, avg=11873.08, stdev=6100.57 00:11:41.411 clat percentiles (usec): 00:11:41.411 | 1.00th=[ 4621], 5.00th=[ 8094], 10.00th=[ 9503], 20.00th=[ 9765], 00:11:41.411 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10159], 60.00th=[10552], 00:11:41.411 | 70.00th=[11076], 80.00th=[12649], 90.00th=[15795], 95.00th=[17433], 00:11:41.411 | 99.00th=[51643], 99.50th=[52167], 99.90th=[52167], 99.95th=[52167], 00:11:41.411 | 99.99th=[56361] 00:11:41.411 write: IOPS=5873, BW=22.9MiB/s (24.1MB/s)(24.0MiB/1046msec); 0 zone resets 00:11:41.411 slat (usec): min=2, max=8229, avg=69.71, stdev=295.70 00:11:41.411 clat (usec): min=1499, max=36820, avg=9949.99, stdev=3845.63 00:11:41.411 lat (usec): min=1512, max=36825, avg=10019.70, stdev=3873.26 00:11:41.411 clat percentiles (usec): 00:11:41.411 | 1.00th=[ 2868], 5.00th=[ 4555], 10.00th=[ 6325], 20.00th=[ 8848], 00:11:41.411 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10159], 00:11:41.411 | 70.00th=[10290], 80.00th=[10552], 90.00th=[10814], 95.00th=[12125], 00:11:41.411 | 99.00th=[31589], 99.50th=[35914], 99.90th=[36963], 99.95th=[36963], 00:11:41.411 | 99.99th=[36963] 00:11:41.411 bw ( KiB/s): min=24208, max=24894, per=34.57%, avg=24551.00, stdev=485.08, samples=2 00:11:41.411 iops : min= 6052, max= 6223, avg=6137.50, stdev=120.92, samples=2 00:11:41.411 lat (msec) : 2=0.02%, 4=2.04%, 10=40.42%, 20=55.25%, 50=1.77% 00:11:41.411 lat (msec) : 100=0.51% 00:11:41.411 cpu : usr=3.54%, sys=5.65%, ctx=800, majf=0, minf=1 00:11:41.411 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:11:41.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:41.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:41.411 issued rwts: total=6074,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:41.411 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:41.411 job1: (groupid=0, jobs=1): err= 0: pid=237001: Mon Dec 16 12:33:07 2024 00:11:41.411 read: IOPS=2590, BW=10.1MiB/s (10.6MB/s)(10.2MiB/1012msec) 00:11:41.411 slat (nsec): min=1494, max=20497k, avg=154762.28, stdev=1131802.11 00:11:41.411 clat (usec): min=5863, max=43510, avg=18623.26, stdev=6787.85 00:11:41.411 lat (usec): min=5869, max=53526, avg=18778.03, stdev=6888.40 00:11:41.411 clat percentiles (usec): 00:11:41.411 | 1.00th=[ 8717], 5.00th=[ 9110], 10.00th=[ 9634], 20.00th=[11994], 00:11:41.411 | 30.00th=[12780], 40.00th=[17695], 50.00th=[20055], 60.00th=[20841], 00:11:41.411 | 70.00th=[21103], 80.00th=[23987], 90.00th=[28181], 95.00th=[31065], 00:11:41.411 | 99.00th=[35914], 99.50th=[36963], 99.90th=[38536], 99.95th=[39584], 00:11:41.411 | 99.99th=[43254] 00:11:41.411 write: IOPS=3035, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1012msec); 0 zone resets 00:11:41.411 slat (usec): min=2, max=16048, avg=187.98, stdev=934.89 00:11:41.411 clat (usec): min=1642, max=102966, avg=25898.37, stdev=15822.47 
00:11:41.411 lat (usec): min=1658, max=102976, avg=26086.34, stdev=15909.47 00:11:41.411 clat percentiles (msec): 00:11:41.411 | 1.00th=[ 6], 5.00th=[ 9], 10.00th=[ 15], 20.00th=[ 21], 00:11:41.411 | 30.00th=[ 21], 40.00th=[ 21], 50.00th=[ 22], 60.00th=[ 22], 00:11:41.411 | 70.00th=[ 23], 80.00th=[ 28], 90.00th=[ 46], 95.00th=[ 61], 00:11:41.411 | 99.00th=[ 95], 99.50th=[ 100], 99.90th=[ 102], 99.95th=[ 104], 00:11:41.411 | 99.99th=[ 104] 00:11:41.411 bw ( KiB/s): min=11768, max=12263, per=16.92%, avg=12015.50, stdev=350.02, samples=2 00:11:41.411 iops : min= 2942, max= 3065, avg=3003.50, stdev=86.97, samples=2 00:11:41.411 lat (msec) : 2=0.04%, 4=0.11%, 10=8.59%, 20=24.64%, 50=61.94% 00:11:41.411 lat (msec) : 100=4.46%, 250=0.23% 00:11:41.411 cpu : usr=2.47%, sys=3.07%, ctx=365, majf=0, minf=1 00:11:41.411 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:11:41.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:41.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:41.411 issued rwts: total=2622,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:41.411 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:41.411 job2: (groupid=0, jobs=1): err= 0: pid=237002: Mon Dec 16 12:33:07 2024 00:11:41.411 read: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec) 00:11:41.411 slat (nsec): min=1193, max=20114k, avg=142091.57, stdev=1066863.09 00:11:41.411 clat (usec): min=5410, max=70045, avg=17064.81, stdev=9160.98 00:11:41.411 lat (usec): min=5421, max=70048, avg=17206.90, stdev=9252.57 00:11:41.411 clat percentiles (usec): 00:11:41.411 | 1.00th=[ 6587], 5.00th=[10028], 10.00th=[10683], 20.00th=[11207], 00:11:41.411 | 30.00th=[11863], 40.00th=[12256], 50.00th=[13304], 60.00th=[14746], 00:11:41.411 | 70.00th=[19268], 80.00th=[21365], 90.00th=[30278], 95.00th=[37487], 00:11:41.411 | 99.00th=[55837], 99.50th=[67634], 99.90th=[69731], 99.95th=[69731], 00:11:41.411 | 99.99th=[69731] 00:11:41.411 write: IOPS=4053, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec); 0 zone resets 00:11:41.411 slat (usec): min=2, max=24311, avg=103.96, stdev=737.30 00:11:41.411 clat (usec): min=247, max=70044, avg=16307.26, stdev=8766.14 00:11:41.411 lat (usec): min=676, max=70048, avg=16411.22, stdev=8823.10 00:11:41.411 clat percentiles (usec): 00:11:41.411 | 1.00th=[ 2343], 5.00th=[ 5276], 10.00th=[ 5735], 20.00th=[ 9765], 00:11:41.412 | 30.00th=[11076], 40.00th=[11731], 50.00th=[18482], 60.00th=[20579], 00:11:41.412 | 70.00th=[20841], 80.00th=[21365], 90.00th=[21890], 95.00th=[26870], 00:11:41.412 | 99.00th=[58983], 99.50th=[60031], 99.90th=[60031], 99.95th=[60031], 00:11:41.412 | 99.99th=[69731] 00:11:41.412 bw ( KiB/s): min=12263, max=19448, per=22.32%, avg=15855.50, stdev=5080.56, samples=2 00:11:41.412 iops : min= 3065, max= 4862, avg=3963.50, stdev=1270.67, samples=2 00:11:41.412 lat (usec) : 250=0.01%, 750=0.07%, 1000=0.04% 00:11:41.412 lat (msec) : 2=0.26%, 4=0.91%, 10=12.39%, 20=50.38%, 50=34.71% 00:11:41.412 lat (msec) : 100=1.24% 00:11:41.412 cpu : usr=3.37%, sys=5.05%, ctx=385, majf=0, minf=1 00:11:41.412 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:41.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:41.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:41.412 issued rwts: total=3584,4094,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:41.412 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:41.412 job3: (groupid=0, 
jobs=1): err= 0: pid=237003: Mon Dec 16 12:33:07 2024 00:11:41.412 read: IOPS=5074, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1009msec) 00:11:41.412 slat (nsec): min=1341, max=11081k, avg=96136.62, stdev=699388.20 00:11:41.412 clat (usec): min=4093, max=21832, avg=12184.74, stdev=2892.34 00:11:41.412 lat (usec): min=4099, max=21923, avg=12280.88, stdev=2943.76 00:11:41.412 clat percentiles (usec): 00:11:41.412 | 1.00th=[ 4752], 5.00th=[ 8225], 10.00th=[ 9765], 20.00th=[10552], 00:11:41.412 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11338], 60.00th=[11731], 00:11:41.412 | 70.00th=[12780], 80.00th=[13960], 90.00th=[16450], 95.00th=[18482], 00:11:41.412 | 99.00th=[20579], 99.50th=[21365], 99.90th=[21627], 99.95th=[21890], 00:11:41.412 | 99.99th=[21890] 00:11:41.412 write: IOPS=5216, BW=20.4MiB/s (21.4MB/s)(20.6MiB/1009msec); 0 zone resets 00:11:41.412 slat (usec): min=2, max=11160, avg=88.53, stdev=488.39 00:11:41.412 clat (usec): min=1496, max=63460, avg=12424.44, stdev=8640.50 00:11:41.412 lat (usec): min=1519, max=63469, avg=12512.96, stdev=8703.97 00:11:41.412 clat percentiles (usec): 00:11:41.412 | 1.00th=[ 3523], 5.00th=[ 5145], 10.00th=[ 6915], 20.00th=[ 8848], 00:11:41.412 | 30.00th=[10290], 40.00th=[10683], 50.00th=[11207], 60.00th=[11338], 00:11:41.412 | 70.00th=[11469], 80.00th=[11600], 90.00th=[15401], 95.00th=[30278], 00:11:41.412 | 99.00th=[55313], 99.50th=[61080], 99.90th=[63177], 99.95th=[63701], 00:11:41.412 | 99.99th=[63701] 00:11:41.412 bw ( KiB/s): min=16512, max=24526, per=28.89%, avg=20519.00, stdev=5666.75, samples=2 00:11:41.412 iops : min= 4128, max= 6131, avg=5129.50, stdev=1416.33, samples=2 00:11:41.412 lat (msec) : 2=0.02%, 4=1.16%, 10=19.02%, 20=74.86%, 50=3.79% 00:11:41.412 lat (msec) : 100=1.15% 00:11:41.412 cpu : usr=3.37%, sys=5.95%, ctx=624, majf=0, minf=1 00:11:41.412 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:41.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:41.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:41.412 issued rwts: total=5120,5263,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:41.412 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:41.412 00:11:41.412 Run status group 0 (all jobs): 00:11:41.412 READ: bw=65.0MiB/s (68.1MB/s), 10.1MiB/s-22.7MiB/s (10.6MB/s-23.8MB/s), io=68.0MiB (71.3MB), run=1009-1046msec 00:11:41.412 WRITE: bw=69.4MiB/s (72.7MB/s), 11.9MiB/s-22.9MiB/s (12.4MB/s-24.1MB/s), io=72.6MiB (76.1MB), run=1009-1046msec 00:11:41.412 00:11:41.412 Disk stats (read/write): 00:11:41.412 nvme0n1: ios=5170/5367, merge=0/0, ticks=54095/49729, in_queue=103824, util=85.87% 00:11:41.412 nvme0n2: ios=2098/2519, merge=0/0, ticks=40183/63596, in_queue=103779, util=89.82% 00:11:41.412 nvme0n3: ios=2645/3072, merge=0/0, ticks=49132/55317, in_queue=104449, util=92.86% 00:11:41.412 nvme0n4: ios=4636/4847, merge=0/0, ticks=54305/48597, in_queue=102902, util=94.07% 00:11:41.412 12:33:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:41.412 [global] 00:11:41.412 thread=1 00:11:41.412 invalidate=1 00:11:41.412 rw=randwrite 00:11:41.412 time_based=1 00:11:41.412 runtime=1 00:11:41.412 ioengine=libaio 00:11:41.412 direct=1 00:11:41.412 bs=4096 00:11:41.412 iodepth=128 00:11:41.412 norandommap=0 00:11:41.412 numjobs=1 00:11:41.412 00:11:41.412 verify_dump=1 00:11:41.412 verify_backlog=512 00:11:41.412 
verify_state_save=0 00:11:41.412 do_verify=1 00:11:41.412 verify=crc32c-intel 00:11:41.412 [job0] 00:11:41.412 filename=/dev/nvme0n1 00:11:41.412 [job1] 00:11:41.412 filename=/dev/nvme0n2 00:11:41.412 [job2] 00:11:41.412 filename=/dev/nvme0n3 00:11:41.412 [job3] 00:11:41.412 filename=/dev/nvme0n4 00:11:41.412 Could not set queue depth (nvme0n1) 00:11:41.412 Could not set queue depth (nvme0n2) 00:11:41.412 Could not set queue depth (nvme0n3) 00:11:41.412 Could not set queue depth (nvme0n4) 00:11:41.412 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:41.412 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:41.412 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:41.412 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:41.412 fio-3.35 00:11:41.412 Starting 4 threads 00:11:42.785 00:11:42.785 job0: (groupid=0, jobs=1): err= 0: pid=237692: Mon Dec 16 12:33:08 2024 00:11:42.785 read: IOPS=2079, BW=8318KiB/s (8517kB/s)(8376KiB/1007msec) 00:11:42.785 slat (nsec): min=1732, max=28027k, avg=190576.40, stdev=1322846.93 00:11:42.785 clat (usec): min=2897, max=77056, avg=25082.93, stdev=10693.24 00:11:42.785 lat (usec): min=10693, max=84740, avg=25273.51, stdev=10806.71 00:11:42.785 clat percentiles (usec): 00:11:42.785 | 1.00th=[10945], 5.00th=[13435], 10.00th=[13960], 20.00th=[17433], 00:11:42.785 | 30.00th=[19530], 40.00th=[20317], 50.00th=[21365], 60.00th=[25822], 00:11:42.785 | 70.00th=[26608], 80.00th=[32113], 90.00th=[41681], 95.00th=[44827], 00:11:42.785 | 99.00th=[64226], 99.50th=[67634], 99.90th=[77071], 99.95th=[77071], 00:11:42.785 | 99.99th=[77071] 00:11:42.785 write: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec); 0 zone resets 00:11:42.785 slat (usec): min=2, max=14736, avg=228.80, stdev=1097.03 00:11:42.785 clat (usec): min=11154, max=76089, avg=29110.12, stdev=14286.78 00:11:42.785 lat (usec): min=11164, max=76101, avg=29338.93, stdev=14377.95 00:11:42.785 clat percentiles (usec): 00:11:42.785 | 1.00th=[14746], 5.00th=[17957], 10.00th=[19268], 20.00th=[20055], 00:11:42.785 | 30.00th=[20841], 40.00th=[21103], 50.00th=[22938], 60.00th=[25297], 00:11:42.785 | 70.00th=[28967], 80.00th=[34866], 90.00th=[49021], 95.00th=[67634], 00:11:42.785 | 99.00th=[73925], 99.50th=[74974], 99.90th=[76022], 99.95th=[76022], 00:11:42.785 | 99.99th=[76022] 00:11:42.785 bw ( KiB/s): min= 9656, max=10168, per=14.66%, avg=9912.00, stdev=362.04, samples=2 00:11:42.785 iops : min= 2414, max= 2542, avg=2478.00, stdev=90.51, samples=2 00:11:42.785 lat (msec) : 4=0.02%, 20=26.62%, 50=66.50%, 100=6.85% 00:11:42.785 cpu : usr=2.58%, sys=3.08%, ctx=265, majf=0, minf=1 00:11:42.785 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:11:42.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:42.785 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:42.785 issued rwts: total=2094,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:42.785 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:42.785 job1: (groupid=0, jobs=1): err= 0: pid=237708: Mon Dec 16 12:33:08 2024 00:11:42.785 read: IOPS=5069, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1010msec) 00:11:42.785 slat (nsec): min=1246, max=23745k, avg=87949.58, stdev=698306.31 00:11:42.785 clat (usec): min=1761, max=47666, 
avg=12020.57, stdev=4778.93 00:11:42.785 lat (usec): min=1771, max=47670, avg=12108.52, stdev=4833.30 00:11:42.785 clat percentiles (usec): 00:11:42.785 | 1.00th=[ 5473], 5.00th=[ 7308], 10.00th=[ 8356], 20.00th=[ 9634], 00:11:42.785 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10552], 60.00th=[11600], 00:11:42.785 | 70.00th=[12256], 80.00th=[14353], 90.00th=[16057], 95.00th=[20055], 00:11:42.785 | 99.00th=[34341], 99.50th=[39584], 99.90th=[42206], 99.95th=[42206], 00:11:42.785 | 99.99th=[47449] 00:11:42.785 write: IOPS=5246, BW=20.5MiB/s (21.5MB/s)(20.7MiB/1010msec); 0 zone resets 00:11:42.785 slat (nsec): min=1886, max=9663.7k, avg=78235.25, stdev=460255.53 00:11:42.785 clat (usec): min=407, max=47277, avg=12505.93, stdev=9445.38 00:11:42.785 lat (usec): min=942, max=47284, avg=12584.16, stdev=9500.53 00:11:42.785 clat percentiles (usec): 00:11:42.785 | 1.00th=[ 1975], 5.00th=[ 3294], 10.00th=[ 5014], 20.00th=[ 7242], 00:11:42.785 | 30.00th=[ 8356], 40.00th=[ 9503], 50.00th=[10028], 60.00th=[10159], 00:11:42.785 | 70.00th=[10290], 80.00th=[12387], 90.00th=[27132], 95.00th=[38011], 00:11:42.785 | 99.00th=[43779], 99.50th=[44827], 99.90th=[47449], 99.95th=[47449], 00:11:42.785 | 99.99th=[47449] 00:11:42.785 bw ( KiB/s): min=18984, max=22384, per=30.59%, avg=20684.00, stdev=2404.16, samples=2 00:11:42.785 iops : min= 4746, max= 5596, avg=5171.00, stdev=601.04, samples=2 00:11:42.785 lat (usec) : 500=0.01%, 1000=0.01% 00:11:42.785 lat (msec) : 2=0.65%, 4=2.40%, 10=38.48%, 20=47.98%, 50=10.47% 00:11:42.785 cpu : usr=3.96%, sys=6.44%, ctx=506, majf=0, minf=1 00:11:42.785 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:42.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:42.785 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:42.785 issued rwts: total=5120,5299,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:42.785 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:42.785 job2: (groupid=0, jobs=1): err= 0: pid=237713: Mon Dec 16 12:33:08 2024 00:11:42.785 read: IOPS=6213, BW=24.3MiB/s (25.5MB/s)(24.4MiB/1007msec) 00:11:42.785 slat (nsec): min=1296, max=10167k, avg=85345.65, stdev=623268.85 00:11:42.785 clat (usec): min=2258, max=21304, avg=10602.54, stdev=2779.24 00:11:42.785 lat (usec): min=3560, max=21315, avg=10687.89, stdev=2821.87 00:11:42.785 clat percentiles (usec): 00:11:42.785 | 1.00th=[ 4555], 5.00th=[ 6587], 10.00th=[ 8160], 20.00th=[ 8717], 00:11:42.785 | 30.00th=[ 8979], 40.00th=[ 9503], 50.00th=[10028], 60.00th=[10814], 00:11:42.785 | 70.00th=[11338], 80.00th=[12125], 90.00th=[14615], 95.00th=[16712], 00:11:42.785 | 99.00th=[19006], 99.50th=[20055], 99.90th=[21103], 99.95th=[21103], 00:11:42.785 | 99.99th=[21365] 00:11:42.785 write: IOPS=6609, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1007msec); 0 zone resets 00:11:42.785 slat (usec): min=2, max=9359, avg=63.75, stdev=381.96 00:11:42.785 clat (usec): min=1660, max=31013, avg=9211.32, stdev=2877.98 00:11:42.785 lat (usec): min=1673, max=31016, avg=9275.08, stdev=2908.44 00:11:42.785 clat percentiles (usec): 00:11:42.785 | 1.00th=[ 2999], 5.00th=[ 4359], 10.00th=[ 6063], 20.00th=[ 7635], 00:11:42.785 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9241], 60.00th=[ 9372], 00:11:42.785 | 70.00th=[10421], 80.00th=[11207], 90.00th=[11600], 95.00th=[11863], 00:11:42.785 | 99.00th=[19792], 99.50th=[28705], 99.90th=[30540], 99.95th=[31065], 00:11:42.785 | 99.99th=[31065] 00:11:42.785 bw ( KiB/s): min=24576, max=28368, per=39.15%, avg=26472.00, 
stdev=2681.35, samples=2 00:11:42.785 iops : min= 6144, max= 7092, avg=6618.00, stdev=670.34, samples=2 00:11:42.785 lat (msec) : 2=0.11%, 4=2.08%, 10=56.69%, 20=40.39%, 50=0.74% 00:11:42.785 cpu : usr=4.87%, sys=6.86%, ctx=668, majf=0, minf=2 00:11:42.785 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:11:42.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:42.785 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:42.785 issued rwts: total=6257,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:42.785 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:42.785 job3: (groupid=0, jobs=1): err= 0: pid=237714: Mon Dec 16 12:33:08 2024 00:11:42.785 read: IOPS=2534, BW=9.90MiB/s (10.4MB/s)(9.97MiB/1007msec) 00:11:42.785 slat (nsec): min=1425, max=17117k, avg=185214.34, stdev=1170858.61 00:11:42.785 clat (usec): min=3696, max=80008, avg=20563.23, stdev=9201.62 00:11:42.785 lat (usec): min=8089, max=80016, avg=20748.44, stdev=9345.35 00:11:42.785 clat percentiles (usec): 00:11:42.785 | 1.00th=[10814], 5.00th=[12518], 10.00th=[12911], 20.00th=[14877], 00:11:42.785 | 30.00th=[15795], 40.00th=[17433], 50.00th=[18220], 60.00th=[20317], 00:11:42.785 | 70.00th=[20841], 80.00th=[22676], 90.00th=[33424], 95.00th=[36963], 00:11:42.785 | 99.00th=[64226], 99.50th=[74974], 99.90th=[80217], 99.95th=[80217], 00:11:42.785 | 99.99th=[80217] 00:11:42.785 write: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec); 0 zone resets 00:11:42.785 slat (usec): min=2, max=22402, avg=200.13, stdev=1014.40 00:11:42.785 clat (usec): min=5462, max=94166, avg=29308.71, stdev=17147.16 00:11:42.785 lat (usec): min=5478, max=94174, avg=29508.83, stdev=17235.98 00:11:42.785 clat percentiles (usec): 00:11:42.785 | 1.00th=[10421], 5.00th=[11863], 10.00th=[15401], 20.00th=[19530], 00:11:42.785 | 30.00th=[20317], 40.00th=[21103], 50.00th=[21890], 60.00th=[24511], 00:11:42.785 | 70.00th=[28967], 80.00th=[35914], 90.00th=[62129], 95.00th=[68682], 00:11:42.785 | 99.00th=[87557], 99.50th=[93848], 99.90th=[93848], 99.95th=[93848], 00:11:42.785 | 99.99th=[93848] 00:11:42.785 bw ( KiB/s): min= 7624, max=12856, per=15.14%, avg=10240.00, stdev=3699.58, samples=2 00:11:42.785 iops : min= 1906, max= 3214, avg=2560.00, stdev=924.90, samples=2 00:11:42.785 lat (msec) : 4=0.02%, 10=0.37%, 20=40.71%, 50=50.94%, 100=7.96% 00:11:42.785 cpu : usr=1.49%, sys=4.37%, ctx=273, majf=0, minf=2 00:11:42.785 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:11:42.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:42.785 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:42.785 issued rwts: total=2552,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:42.785 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:42.785 00:11:42.785 Run status group 0 (all jobs): 00:11:42.785 READ: bw=62.0MiB/s (65.0MB/s), 8318KiB/s-24.3MiB/s (8517kB/s-25.5MB/s), io=62.6MiB (65.6MB), run=1007-1010msec 00:11:42.785 WRITE: bw=66.0MiB/s (69.2MB/s), 9.93MiB/s-25.8MiB/s (10.4MB/s-27.1MB/s), io=66.7MiB (69.9MB), run=1007-1010msec 00:11:42.785 00:11:42.785 Disk stats (read/write): 00:11:42.785 nvme0n1: ios=1876/2048, merge=0/0, ticks=21818/30470, in_queue=52288, util=98.00% 00:11:42.785 nvme0n2: ios=4250/4608, merge=0/0, ticks=47807/54392, in_queue=102199, util=97.25% 00:11:42.785 nvme0n3: ios=5173/5511, merge=0/0, ticks=53324/49812, in_queue=103136, util=89.81% 00:11:42.785 nvme0n4: ios=2054/2495, 
merge=0/0, ticks=18841/31869, in_queue=50710, util=89.70% 00:11:42.785 12:33:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:42.785 12:33:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=237984 00:11:42.785 12:33:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:42.785 12:33:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:42.785 [global] 00:11:42.785 thread=1 00:11:42.785 invalidate=1 00:11:42.785 rw=read 00:11:42.785 time_based=1 00:11:42.785 runtime=10 00:11:42.785 ioengine=libaio 00:11:42.785 direct=1 00:11:42.785 bs=4096 00:11:42.785 iodepth=1 00:11:42.785 norandommap=1 00:11:42.785 numjobs=1 00:11:42.785 00:11:42.785 [job0] 00:11:42.785 filename=/dev/nvme0n1 00:11:42.785 [job1] 00:11:42.785 filename=/dev/nvme0n2 00:11:42.785 [job2] 00:11:42.786 filename=/dev/nvme0n3 00:11:42.786 [job3] 00:11:42.786 filename=/dev/nvme0n4 00:11:42.786 Could not set queue depth (nvme0n1) 00:11:42.786 Could not set queue depth (nvme0n2) 00:11:42.786 Could not set queue depth (nvme0n3) 00:11:42.786 Could not set queue depth (nvme0n4) 00:11:43.043 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:43.043 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:43.043 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:43.043 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:43.043 fio-3.35 00:11:43.043 Starting 4 threads 00:11:46.321 12:33:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:46.321 12:33:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:46.321 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=11112448, buflen=4096 00:11:46.321 fio: pid=238130, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:46.321 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=43520000, buflen=4096 00:11:46.321 fio: pid=238129, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:46.321 12:33:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:46.321 12:33:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:46.321 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=6000640, buflen=4096 00:11:46.321 fio: pid=238127, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:46.578 12:33:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:46.578 12:33:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:46.578 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=57593856, 
buflen=4096 00:11:46.578 fio: pid=238128, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:46.578 12:33:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:46.578 12:33:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:46.578 00:11:46.579 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=238127: Mon Dec 16 12:33:12 2024 00:11:46.579 read: IOPS=463, BW=1852KiB/s (1896kB/s)(5860KiB/3165msec) 00:11:46.579 slat (usec): min=6, max=29788, avg=45.98, stdev=909.06 00:11:46.579 clat (usec): min=168, max=42020, avg=2095.02, stdev=8533.65 00:11:46.579 lat (usec): min=175, max=70926, avg=2141.03, stdev=8673.62 00:11:46.579 clat percentiles (usec): 00:11:46.579 | 1.00th=[ 176], 5.00th=[ 198], 10.00th=[ 217], 20.00th=[ 227], 00:11:46.579 | 30.00th=[ 231], 40.00th=[ 235], 50.00th=[ 239], 60.00th=[ 245], 00:11:46.579 | 70.00th=[ 255], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 343], 00:11:46.579 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:46.579 | 99.99th=[42206] 00:11:46.579 bw ( KiB/s): min= 96, max= 7737, per=4.33%, avg=1482.83, stdev=3075.95, samples=6 00:11:46.579 iops : min= 24, max= 1934, avg=370.67, stdev=768.89, samples=6 00:11:46.579 lat (usec) : 250=64.80%, 500=30.56%, 750=0.07% 00:11:46.579 lat (msec) : 50=4.50% 00:11:46.579 cpu : usr=0.16%, sys=0.44%, ctx=1472, majf=0, minf=1 00:11:46.579 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:46.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.579 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.579 issued rwts: total=1466,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:46.579 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:46.579 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=238128: Mon Dec 16 12:33:12 2024 00:11:46.579 read: IOPS=4165, BW=16.3MiB/s (17.1MB/s)(54.9MiB/3376msec) 00:11:46.579 slat (usec): min=5, max=20810, avg=12.79, stdev=290.33 00:11:46.579 clat (usec): min=158, max=9915, avg=224.63, stdev=99.55 00:11:46.579 lat (usec): min=165, max=21126, avg=237.41, stdev=309.61 00:11:46.579 clat percentiles (usec): 00:11:46.579 | 1.00th=[ 174], 5.00th=[ 184], 10.00th=[ 190], 20.00th=[ 198], 00:11:46.579 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 215], 60.00th=[ 223], 00:11:46.579 | 70.00th=[ 235], 80.00th=[ 253], 90.00th=[ 269], 95.00th=[ 281], 00:11:46.579 | 99.00th=[ 302], 99.50th=[ 314], 99.90th=[ 433], 99.95th=[ 734], 00:11:46.579 | 99.99th=[ 3949] 00:11:46.579 bw ( KiB/s): min=14272, max=18488, per=49.21%, avg=16830.17, stdev=1694.12, samples=6 00:11:46.579 iops : min= 3568, max= 4622, avg=4207.50, stdev=423.56, samples=6 00:11:46.579 lat (usec) : 250=78.48%, 500=21.45%, 750=0.01%, 1000=0.01% 00:11:46.579 lat (msec) : 2=0.01%, 4=0.02%, 10=0.01% 00:11:46.579 cpu : usr=0.86%, sys=3.94%, ctx=14068, majf=0, minf=2 00:11:46.579 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:46.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.579 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.579 issued rwts: total=14062,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:11:46.579 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:46.579 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=238129: Mon Dec 16 12:33:12 2024 00:11:46.579 read: IOPS=3641, BW=14.2MiB/s (14.9MB/s)(41.5MiB/2918msec) 00:11:46.579 slat (usec): min=6, max=15412, avg= 9.93, stdev=187.27 00:11:46.579 clat (usec): min=175, max=40955, avg=262.66, stdev=763.47 00:11:46.579 lat (usec): min=183, max=40978, avg=272.59, stdev=786.57 00:11:46.579 clat percentiles (usec): 00:11:46.579 | 1.00th=[ 194], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 215], 00:11:46.579 | 30.00th=[ 227], 40.00th=[ 253], 50.00th=[ 258], 60.00th=[ 262], 00:11:46.579 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 277], 95.00th=[ 281], 00:11:46.579 | 99.00th=[ 293], 99.50th=[ 310], 99.90th=[ 486], 99.95th=[ 693], 00:11:46.579 | 99.99th=[40633] 00:11:46.579 bw ( KiB/s): min=10808, max=15480, per=42.47%, avg=14526.40, stdev=2078.84, samples=5 00:11:46.579 iops : min= 2702, max= 3870, avg=3631.60, stdev=519.71, samples=5 00:11:46.579 lat (usec) : 250=37.54%, 500=62.37%, 750=0.04%, 1000=0.01% 00:11:46.579 lat (msec) : 50=0.04% 00:11:46.579 cpu : usr=1.06%, sys=3.50%, ctx=10628, majf=0, minf=2 00:11:46.579 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:46.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.579 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.579 issued rwts: total=10626,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:46.579 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:46.579 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=238130: Mon Dec 16 12:33:12 2024 00:11:46.579 read: IOPS=989, BW=3955KiB/s (4050kB/s)(10.6MiB/2744msec) 00:11:46.579 slat (nsec): min=6738, max=36047, avg=8133.18, stdev=2408.56 00:11:46.579 clat (usec): min=198, max=42246, avg=993.94, stdev=5381.77 00:11:46.579 lat (usec): min=207, max=42255, avg=1002.07, stdev=5382.78 00:11:46.579 clat percentiles (usec): 00:11:46.579 | 1.00th=[ 229], 5.00th=[ 243], 10.00th=[ 249], 20.00th=[ 255], 00:11:46.579 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 269], 00:11:46.579 | 70.00th=[ 277], 80.00th=[ 281], 90.00th=[ 289], 95.00th=[ 302], 00:11:46.579 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:11:46.579 | 99.99th=[42206] 00:11:46.579 bw ( KiB/s): min= 224, max=14144, per=12.63%, avg=4320.00, stdev=5828.39, samples=5 00:11:46.579 iops : min= 56, max= 3536, avg=1080.00, stdev=1457.10, samples=5 00:11:46.579 lat (usec) : 250=12.64%, 500=85.22%, 750=0.18%, 1000=0.04% 00:11:46.579 lat (msec) : 4=0.07%, 10=0.04%, 50=1.77% 00:11:46.579 cpu : usr=0.22%, sys=1.09%, ctx=2715, majf=0, minf=2 00:11:46.579 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:46.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.579 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.579 issued rwts: total=2714,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:46.579 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:46.579 00:11:46.579 Run status group 0 (all jobs): 00:11:46.579 READ: bw=33.4MiB/s (35.0MB/s), 1852KiB/s-16.3MiB/s (1896kB/s-17.1MB/s), io=113MiB (118MB), run=2744-3376msec 00:11:46.579 00:11:46.579 Disk stats (read/write): 00:11:46.579 nvme0n1: ios=1336/0, merge=0/0, ticks=3501/0, in_queue=3501, 
util=97.63% 00:11:46.579 nvme0n2: ios=14033/0, merge=0/0, ticks=3085/0, in_queue=3085, util=94.05% 00:11:46.579 nvme0n3: ios=10352/0, merge=0/0, ticks=2668/0, in_queue=2668, util=95.57% 00:11:46.579 nvme0n4: ios=2750/0, merge=0/0, ticks=3236/0, in_queue=3236, util=99.48% 00:11:46.837 12:33:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:46.837 12:33:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:47.094 12:33:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:47.094 12:33:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:47.351 12:33:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:47.351 12:33:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:47.351 12:33:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:47.351 12:33:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:47.608 12:33:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:47.608 12:33:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 237984 00:11:47.608 12:33:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:47.608 12:33:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:47.866 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.866 12:33:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:47.866 12:33:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:11:47.866 12:33:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:47.866 12:33:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:47.866 12:33:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:47.866 12:33:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:47.866 12:33:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:11:47.866 12:33:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:47.866 12:33:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:47.866 nvmf hotplug test: fio failed as expected 00:11:47.866 12:33:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:48.124 
12:33:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:48.124 12:33:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:48.124 12:33:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:48.124 12:33:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:48.124 12:33:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:48.124 12:33:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:48.124 12:33:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:48.124 12:33:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:48.125 12:33:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:48.125 12:33:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:48.125 12:33:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:48.125 rmmod nvme_tcp 00:11:48.125 rmmod nvme_fabrics 00:11:48.125 rmmod nvme_keyring 00:11:48.125 12:33:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:48.125 12:33:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:48.125 12:33:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:48.125 12:33:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@513 -- # '[' -n 234681 ']' 00:11:48.125 12:33:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # killprocess 234681 00:11:48.125 12:33:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 234681 ']' 00:11:48.125 12:33:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 234681 00:11:48.125 12:33:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:11:48.125 12:33:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:48.125 12:33:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 234681 00:11:48.125 12:33:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:48.125 12:33:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:48.125 12:33:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 234681' 00:11:48.125 killing process with pid 234681 00:11:48.125 12:33:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 234681 00:11:48.125 12:33:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 234681 00:11:48.384 12:33:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:48.384 12:33:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:48.384 12:33:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:48.384 12:33:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:48.384 12:33:14 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-save 00:11:48.384 12:33:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:48.384 12:33:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-restore 00:11:48.384 12:33:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:48.384 12:33:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:48.384 12:33:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.384 12:33:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:48.384 12:33:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.292 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:50.292 00:11:50.292 real 0m26.935s 00:11:50.292 user 1m46.798s 00:11:50.292 sys 0m8.797s 00:11:50.292 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:50.292 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.292 ************************************ 00:11:50.292 END TEST nvmf_fio_target 00:11:50.292 ************************************ 00:11:50.292 12:33:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:50.292 12:33:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:50.292 12:33:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:50.292 12:33:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:50.552 ************************************ 00:11:50.552 START TEST nvmf_bdevio 00:11:50.552 ************************************ 00:11:50.552 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:50.552 * Looking for test storage... 
00:11:50.552 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:50.552 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:50.552 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:11:50.552 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:50.552 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:50.552 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:50.552 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:50.552 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:50.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.553 --rc genhtml_branch_coverage=1 00:11:50.553 --rc genhtml_function_coverage=1 00:11:50.553 --rc genhtml_legend=1 00:11:50.553 --rc geninfo_all_blocks=1 00:11:50.553 --rc geninfo_unexecuted_blocks=1 00:11:50.553 00:11:50.553 ' 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:50.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.553 --rc genhtml_branch_coverage=1 00:11:50.553 --rc genhtml_function_coverage=1 00:11:50.553 --rc genhtml_legend=1 00:11:50.553 --rc geninfo_all_blocks=1 00:11:50.553 --rc geninfo_unexecuted_blocks=1 00:11:50.553 00:11:50.553 ' 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:50.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.553 --rc genhtml_branch_coverage=1 00:11:50.553 --rc genhtml_function_coverage=1 00:11:50.553 --rc genhtml_legend=1 00:11:50.553 --rc geninfo_all_blocks=1 00:11:50.553 --rc geninfo_unexecuted_blocks=1 00:11:50.553 00:11:50.553 ' 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:50.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.553 --rc genhtml_branch_coverage=1 00:11:50.553 --rc genhtml_function_coverage=1 00:11:50.553 --rc genhtml_legend=1 00:11:50.553 --rc geninfo_all_blocks=1 00:11:50.553 --rc geninfo_unexecuted_blocks=1 00:11:50.553 00:11:50.553 ' 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:50.553 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:11:50.553 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:50.554 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:57.127 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:57.127 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:57.127 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:57.127 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:57.127 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:57.127 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:57.127 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:57.127 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:57.127 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:57.127 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:57.127 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:57.127 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:57.127 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:57.127 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:57.127 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:57.127 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:57.128 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:57.128 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:57.128 Found net devices under 0000:af:00.0: cvl_0_0 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:57.128 Found net devices under 0000:af:00.1: cvl_0_1 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # is_hw=yes 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:57.128 12:33:22 
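The discovery loop just traced reduces to a sysfs walk: each PCI function lists its bound network interfaces under /sys/bus/pci/devices/<BDF>/net/. A minimal standalone sketch using the two E810 functions from this run (illustrative only, not the suite's gather_supported_nvmf_pci_devs):

#!/usr/bin/env bash
# Sketch: list the net devices behind the two E810 functions found above.
for pci in 0000:af:00.0 0000:af:00.1; do
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$dev" ] || continue      # glob did not match: no netdev bound to this function
        echo "Found net devices under $pci: ${dev##*/}"
    done
done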
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:57.128 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:57.128 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.380 ms 00:11:57.128 00:11:57.128 --- 10.0.0.2 ping statistics --- 00:11:57.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.128 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:57.128 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:57.128 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:11:57.128 00:11:57.128 --- 10.0.0.1 ping statistics --- 00:11:57.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.128 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # return 0 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # nvmfpid=242551 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # waitforlisten 242551 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 242551 ']' 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:57.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:57.128 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:57.129 [2024-12-16 12:33:22.692349] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
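Condensing the nvmf_tcp_init trace above: the target-side port is moved into a private network namespace so initiator and target traffic actually crosses the link, the NVMe/TCP port is opened with a tagged iptables rule, and reachability is proven in both directions. The equivalent by hand, as root (interface names and addresses from this run):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Rule carries a comment tag so teardown can strip it with a grep over iptables-save.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator
modprobe nvme-tcp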
00:11:57.129 [2024-12-16 12:33:22.692402] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:57.129 [2024-12-16 12:33:22.766446] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:57.129 [2024-12-16 12:33:22.805701] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:57.129 [2024-12-16 12:33:22.805740] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:57.129 [2024-12-16 12:33:22.805747] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:57.129 [2024-12-16 12:33:22.805753] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:57.129 [2024-12-16 12:33:22.805758] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:57.129 [2024-12-16 12:33:22.805874] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:11:57.129 [2024-12-16 12:33:22.805982] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:11:57.129 [2024-12-16 12:33:22.806016] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:57.129 [2024-12-16 12:33:22.806017] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:11:57.129 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:57.129 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:11:57.129 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:57.129 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:57.129 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:57.129 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:57.129 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:57.129 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.129 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:57.129 [2024-12-16 12:33:22.956967] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:57.129 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.129 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:57.129 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.129 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:57.129 Malloc0 00:11:57.129 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.129 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:57.129 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.129 12:33:22 
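The rpc_cmd invocations map one-to-one onto SPDK's rpc.py client; /var/tmp/spdk.sock is a filesystem UNIX socket, so it stays reachable from the root namespace even though nvmf_tgt runs inside cvl_0_0_ns_spdk. A sketch of the full provisioning sequence (the namespace attach and listener appear just below in the trace; flags copied verbatim from it):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192     # transport options exactly as traced (-u: in-capsule data size)
$RPC bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM-backed bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420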
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:57.129 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.129 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:57.129 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.129 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:57.129 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.129 12:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:57.129 12:33:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.129 12:33:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:57.129 [2024-12-16 12:33:23.004194] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:57.129 12:33:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.129 12:33:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:57.129 12:33:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:57.129 12:33:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # config=() 00:11:57.129 12:33:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # local subsystem config 00:11:57.129 12:33:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:11:57.129 12:33:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:11:57.129 { 00:11:57.129 "params": { 00:11:57.129 "name": "Nvme$subsystem", 00:11:57.129 "trtype": "$TEST_TRANSPORT", 00:11:57.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:57.129 "adrfam": "ipv4", 00:11:57.129 "trsvcid": "$NVMF_PORT", 00:11:57.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:57.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:57.129 "hdgst": ${hdgst:-false}, 00:11:57.129 "ddgst": ${ddgst:-false} 00:11:57.129 }, 00:11:57.129 "method": "bdev_nvme_attach_controller" 00:11:57.129 } 00:11:57.129 EOF 00:11:57.129 )") 00:11:57.129 12:33:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # cat 00:11:57.129 12:33:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # jq . 00:11:57.129 12:33:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@581 -- # IFS=, 00:11:57.129 12:33:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:11:57.129 "params": { 00:11:57.129 "name": "Nvme1", 00:11:57.129 "trtype": "tcp", 00:11:57.129 "traddr": "10.0.0.2", 00:11:57.129 "adrfam": "ipv4", 00:11:57.129 "trsvcid": "4420", 00:11:57.129 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:57.129 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:57.129 "hdgst": false, 00:11:57.129 "ddgst": false 00:11:57.129 }, 00:11:57.129 "method": "bdev_nvme_attach_controller" 00:11:57.129 }' 00:11:57.129 [2024-12-16 12:33:23.056677] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
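gen_nvmf_target_json, whose xtrace appears above, assembles one bdev_nvme_attach_controller fragment per subsystem from a parameterized heredoc and validates the joined result with jq. A trimmed illustration of the same pattern (single subsystem, variables inlined; the helper name here is hypothetical):

gen_attach_json() {
    local n=$1 traddr=$2
    # Unquoted EOF so $n and $traddr expand inside the template.
    cat <<EOF
{ "params": { "name": "Nvme$n", "trtype": "tcp", "traddr": "$traddr",
  "adrfam": "ipv4", "trsvcid": "4420",
  "subnqn": "nqn.2016-06.io.spdk:cnode$n",
  "hostnqn": "nqn.2016-06.io.spdk:host$n",
  "hdgst": false, "ddgst": false },
  "method": "bdev_nvme_attach_controller" }
EOF
}
gen_attach_json 1 10.0.0.2 | jq .    # pretty-print and validate, as the traced helper does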
00:11:57.129 [2024-12-16 12:33:23.056717] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid242576 ] 00:11:57.129 [2024-12-16 12:33:23.123727] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:57.129 [2024-12-16 12:33:23.164353] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:57.129 [2024-12-16 12:33:23.164463] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.129 [2024-12-16 12:33:23.164464] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:57.695 I/O targets: 00:11:57.695 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:57.695 00:11:57.695 00:11:57.695 CUnit - A unit testing framework for C - Version 2.1-3 00:11:57.695 http://cunit.sourceforge.net/ 00:11:57.695 00:11:57.695 00:11:57.695 Suite: bdevio tests on: Nvme1n1 00:11:57.695 Test: blockdev write read block ...passed 00:11:57.695 Test: blockdev write zeroes read block ...passed 00:11:57.695 Test: blockdev write zeroes read no split ...passed 00:11:57.695 Test: blockdev write zeroes read split ...passed 00:11:57.695 Test: blockdev write zeroes read split partial ...passed 00:11:57.695 Test: blockdev reset ...[2024-12-16 12:33:23.627069] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:57.695 [2024-12-16 12:33:23.627134] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf6c90 (9): Bad file descriptor 00:11:57.695 [2024-12-16 12:33:23.682353] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
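On the reset test just traced: the 'Failed to flush tqpair ... (9): Bad file descriptor' line is noise from the deliberately dropped connection, not a failure; the controller reset completes, as the notice above shows. To replay the run outside the harness, bdevio takes the printed attach fragment wrapped in SPDK's app-config envelope (a sketch with values copied from this trace; the envelope is the standard subsystems/bdev/config wrapper the harness generates):

cat > /tmp/bdevio_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json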
00:11:57.695 passed 00:11:57.695 Test: blockdev write read 8 blocks ...passed 00:11:57.695 Test: blockdev write read size > 128k ...passed 00:11:57.695 Test: blockdev write read invalid size ...passed 00:11:57.952 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:57.952 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:57.952 Test: blockdev write read max offset ...passed 00:11:57.953 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:57.953 Test: blockdev writev readv 8 blocks ...passed 00:11:57.953 Test: blockdev writev readv 30 x 1block ...passed 00:11:57.953 Test: blockdev writev readv block ...passed 00:11:57.953 Test: blockdev writev readv size > 128k ...passed 00:11:57.953 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:57.953 Test: blockdev comparev and writev ...[2024-12-16 12:33:23.937766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:57.953 [2024-12-16 12:33:23.937807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:57.953 [2024-12-16 12:33:23.937821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:57.953 [2024-12-16 12:33:23.937828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:57.953 [2024-12-16 12:33:23.938073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:57.953 [2024-12-16 12:33:23.938083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:57.953 [2024-12-16 12:33:23.938094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:57.953 [2024-12-16 12:33:23.938101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:57.953 [2024-12-16 12:33:23.938332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:57.953 [2024-12-16 12:33:23.938344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:57.953 [2024-12-16 12:33:23.938357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:57.953 [2024-12-16 12:33:23.938364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:57.953 [2024-12-16 12:33:23.938590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:57.953 [2024-12-16 12:33:23.938601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:57.953 [2024-12-16 12:33:23.938612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:57.953 [2024-12-16 12:33:23.938619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:57.953 passed 00:11:58.212 Test: blockdev nvme passthru rw ...passed 00:11:58.212 Test: blockdev nvme passthru vendor specific ...[2024-12-16 12:33:24.020486] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:58.212 [2024-12-16 12:33:24.020510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:58.212 [2024-12-16 12:33:24.020611] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:58.212 [2024-12-16 12:33:24.020622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:58.212 [2024-12-16 12:33:24.020727] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:58.212 [2024-12-16 12:33:24.020737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:58.212 [2024-12-16 12:33:24.020837] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:58.212 [2024-12-16 12:33:24.020853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:58.212 passed 00:11:58.212 Test: blockdev nvme admin passthru ...passed 00:11:58.212 Test: blockdev copy ...passed 00:11:58.212 00:11:58.212 Run Summary: Type Total Ran Passed Failed Inactive 00:11:58.213 suites 1 1 n/a 0 0 00:11:58.213 tests 23 23 23 0 0 00:11:58.213 asserts 152 152 152 0 n/a 00:11:58.213 00:11:58.213 Elapsed time = 1.217 seconds 00:11:58.213 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:58.213 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.213 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:58.213 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.213 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:58.213 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:58.213 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:58.213 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:58.213 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:58.213 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:58.213 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:58.213 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:58.213 rmmod nvme_tcp 00:11:58.213 rmmod nvme_fabrics 00:11:58.472 rmmod nvme_keyring 00:11:58.472 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:58.472 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:58.472 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
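Teardown mirrors setup. Condensed from the trace above and the lines that follow (pid, interface, and namespace names from this run):

kill 242551                                            # nvmfpid for this run
modprobe -r nvme-tcp nvme-fabrics nvme-keyring 2>/dev/null
iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only the SPDK-tagged rule
ip -4 addr flush cvl_0_1
ip netns delete cvl_0_0_ns_spdk                        # physical cvl_0_0 falls back to the root ns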
00:11:58.472 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@513 -- # '[' -n 242551 ']' 00:11:58.472 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # killprocess 242551 00:11:58.472 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 242551 ']' 00:11:58.472 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 242551 00:11:58.472 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:11:58.472 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:58.472 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 242551 00:11:58.472 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:11:58.472 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:11:58.472 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 242551' 00:11:58.472 killing process with pid 242551 00:11:58.472 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 242551 00:11:58.472 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 242551 00:11:58.732 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:58.732 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:58.732 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:58.732 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:58.732 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-save 00:11:58.732 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:58.732 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-restore 00:11:58.732 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:58.732 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:58.732 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:58.732 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:58.732 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:00.639 12:33:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:00.639 00:12:00.639 real 0m10.241s 00:12:00.639 user 0m11.123s 00:12:00.639 sys 0m4.948s 00:12:00.639 12:33:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:00.639 12:33:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:00.639 ************************************ 00:12:00.639 END TEST nvmf_bdevio 00:12:00.639 ************************************ 00:12:00.639 12:33:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:12:00.639 00:12:00.639 real 4m35.380s 00:12:00.639 user 10m22.847s 00:12:00.639 sys 1m34.887s 00:12:00.639 
12:33:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:00.639 12:33:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:00.639 ************************************ 00:12:00.639 END TEST nvmf_target_core 00:12:00.639 ************************************ 00:12:00.899 12:33:26 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:00.899 12:33:26 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:00.899 12:33:26 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:00.899 12:33:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:00.899 ************************************ 00:12:00.899 START TEST nvmf_target_extra 00:12:00.899 ************************************ 00:12:00.899 12:33:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:00.899 * Looking for test storage... 00:12:00.899 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:12:00.899 12:33:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:00.899 12:33:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lcov --version 00:12:00.899 12:33:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:00.899 12:33:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:00.899 12:33:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:00.899 12:33:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:00.899 12:33:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:00.899 12:33:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:12:00.899 12:33:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:12:00.899 12:33:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:12:00.899 12:33:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:12:00.899 12:33:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:12:00.899 12:33:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:12:00.899 12:33:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:12:00.899 12:33:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:00.899 12:33:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:12:00.899 12:33:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:12:00.899 12:33:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:00.899 12:33:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:00.899 12:33:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:12:00.899 12:33:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:12:00.899 12:33:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:00.899 12:33:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:12:00.899 12:33:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:12:00.899 12:33:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:12:00.899 12:33:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:12:00.899 12:33:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:00.899 12:33:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:12:00.899 12:33:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:12:00.899 12:33:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:00.900 12:33:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:00.900 12:33:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:12:00.900 12:33:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:00.900 12:33:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:00.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.900 --rc genhtml_branch_coverage=1 00:12:00.900 --rc genhtml_function_coverage=1 00:12:00.900 --rc genhtml_legend=1 00:12:00.900 --rc geninfo_all_blocks=1 00:12:00.900 --rc geninfo_unexecuted_blocks=1 00:12:00.900 00:12:00.900 ' 00:12:00.900 12:33:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:00.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.900 --rc genhtml_branch_coverage=1 00:12:00.900 --rc genhtml_function_coverage=1 00:12:00.900 --rc genhtml_legend=1 00:12:00.900 --rc geninfo_all_blocks=1 00:12:00.900 --rc geninfo_unexecuted_blocks=1 00:12:00.900 00:12:00.900 ' 00:12:00.900 12:33:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:00.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.900 --rc genhtml_branch_coverage=1 00:12:00.900 --rc genhtml_function_coverage=1 00:12:00.900 --rc genhtml_legend=1 00:12:00.900 --rc geninfo_all_blocks=1 00:12:00.900 --rc geninfo_unexecuted_blocks=1 00:12:00.900 00:12:00.900 ' 00:12:00.900 12:33:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:00.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.900 --rc genhtml_branch_coverage=1 00:12:00.900 --rc genhtml_function_coverage=1 00:12:00.900 --rc genhtml_legend=1 00:12:00.900 --rc geninfo_all_blocks=1 00:12:00.900 --rc geninfo_unexecuted_blocks=1 00:12:00.900 00:12:00.900 ' 00:12:00.900 12:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:00.900 12:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:12:00.900 12:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:00.900 12:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:00.900 12:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
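The long xtrace above is scripts/common.sh comparing lcov's version against 2, field by field after splitting on dots. A compact standalone equivalent of that comparison (illustrative, not the suite's cmp_versions):

# Return success when $1 is strictly older than $2 (numeric, dot-separated versions).
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)      # unquoted on purpose: split fields on '.'
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        ((${a[i]:-0} < ${b[i]:-0})) && return 0
        ((${a[i]:-0} > ${b[i]:-0})) && return 1
    done
    return 1                    # versions are equal
}
version_lt 1.15 2 && echo "lcov 1.15 predates 2"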
00:12:00.900 12:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:00.900 12:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:00.900 12:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:00.900 12:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:00.900 12:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:00.900 12:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:00.900 12:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:00.900 12:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:12:00.900 12:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:12:00.900 12:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:00.900 12:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:00.900 12:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:00.900 12:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:00.900 12:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:00.900 12:33:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:12:00.900 12:33:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:00.900 12:33:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:00.900 12:33:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:00.900 12:33:26 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.900 12:33:26 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.900 12:33:26 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.900 12:33:26 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:12:00.900 12:33:26 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.900 12:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:12:00.900 12:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:00.900 12:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:00.900 12:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:00.900 12:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:00.900 12:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:00.900 12:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:00.900 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:00.900 12:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:00.900 12:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:00.900 12:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:00.900 12:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:00.900 12:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:12:00.900 12:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:12:00.900 12:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:00.900 12:33:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:00.900 12:33:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:00.900 12:33:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:01.161 ************************************ 00:12:01.161 START TEST nvmf_example 00:12:01.161 ************************************ 00:12:01.161 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:01.161 * Looking for test storage... 
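Note how every sourcing of paths/export.sh re-prepends the same tool directories, which is why the PATH strings in this trace keep growing. An idempotent alternative for comparison (a sketch; this is not what the packaged export.sh does):

# Prepend a directory to PATH only if it is not already present.
path_prepend() {
    case ":$PATH:" in
        *":$1:"*) ;;                 # already on PATH, do nothing
        *) PATH="$1:$PATH" ;;
    esac
}
path_prepend /opt/protoc/21.7/bin
path_prepend /opt/go/1.21.1/bin
path_prepend /opt/golangci/1.54.2/bin
export PATH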
00:12:01.161 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lcov --version 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:01.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.161 --rc genhtml_branch_coverage=1 00:12:01.161 --rc genhtml_function_coverage=1 00:12:01.161 --rc genhtml_legend=1 00:12:01.161 --rc geninfo_all_blocks=1 00:12:01.161 --rc geninfo_unexecuted_blocks=1 00:12:01.161 00:12:01.161 ' 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:01.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.161 --rc genhtml_branch_coverage=1 00:12:01.161 --rc genhtml_function_coverage=1 00:12:01.161 --rc genhtml_legend=1 00:12:01.161 --rc geninfo_all_blocks=1 00:12:01.161 --rc geninfo_unexecuted_blocks=1 00:12:01.161 00:12:01.161 ' 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:01.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.161 --rc genhtml_branch_coverage=1 00:12:01.161 --rc genhtml_function_coverage=1 00:12:01.161 --rc genhtml_legend=1 00:12:01.161 --rc geninfo_all_blocks=1 00:12:01.161 --rc geninfo_unexecuted_blocks=1 00:12:01.161 00:12:01.161 ' 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:01.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.161 --rc genhtml_branch_coverage=1 00:12:01.161 --rc genhtml_function_coverage=1 00:12:01.161 --rc genhtml_legend=1 00:12:01.161 --rc geninfo_all_blocks=1 00:12:01.161 --rc geninfo_unexecuted_blocks=1 00:12:01.161 00:12:01.161 ' 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:12:01.161 12:33:27 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.161 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:12:01.162 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.162 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:12:01.162 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:01.162 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:01.162 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:01.162 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:01.162 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:01.162 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:01.162 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:01.162 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:01.162 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:01.162 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:01.162 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:12:01.162 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:12:01.162 12:33:27 
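The "[: : integer expression expected" message captured above is bash's test builtin rejecting an empty string in an arithmetic comparison: the trace shows an unset variable expanding into '[' '' -eq 1 ']'. A minimal sketch of the failing pattern and a guarded alternative (variable name illustrative, not the one used by common.sh):

# Empty string fed to an arithmetic test reproduces the message above.
flag=""
[ "$flag" -eq 1 ] && echo enabled    # -> [: : integer expression expected

# Guarded form: default to 0 so the comparison always sees an integer.
if [ "${flag:-0}" -eq 1 ]; then
    echo enabled
fi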
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:12:01.162 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:12:01.162 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:12:01.162 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:12:01.162 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:12:01.162 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:12:01.162 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:01.162 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:01.162 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:12:01.162 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:12:01.162 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:01.162 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # prepare_net_devs 00:12:01.162 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@434 -- # local -g is_hw=no 00:12:01.162 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # remove_spdk_ns 00:12:01.162 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.162 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:01.162 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.162 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:12:01.162 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:12:01.162 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:12:01.162 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:07.735 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:07.735 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:12:07.735 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:07.735 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:07.735 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:07.735 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:07.735 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:07.735 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:12:07.735 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:07.735 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:12:07.735 12:33:32 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:12:07.735 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:12:07.735 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:12:07.735 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:12:07.735 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:12:07.735 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:07.735 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:07.735 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:07.735 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:07.735 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:07.735 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:07.736 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@364 -- # for pci in 
"${pci_devs[@]}" 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:07.736 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:07.736 Found net devices under 0000:af:00.0: cvl_0_0 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:07.736 Found net devices under 0000:af:00.1: cvl_0_1 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 
-- # is_hw=yes 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:07.736 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:07.736 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:07.736 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:07.736 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:07.736 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:07.736 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:07.736 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:12:07.736 00:12:07.736 --- 10.0.0.2 ping statistics --- 00:12:07.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.736 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:12:07.736 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:07.736 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:07.736 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:12:07.736 00:12:07.736 --- 10.0.0.1 ping statistics --- 00:12:07.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.736 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:12:07.736 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:07.736 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # return 0 00:12:07.736 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:12:07.736 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:07.736 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:12:07.736 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:12:07.736 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:07.736 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:12:07.736 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:12:07.736 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:12:07.736 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:12:07.736 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:07.736 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:07.736 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:12:07.736 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:12:07.736 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=246384 00:12:07.736 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:07.736 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:12:07.736 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 246384 00:12:07.736 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 246384 ']' 00:12:07.736 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.736 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:07.736 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_example 
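The nvmf_tcp_init sequence above isolates one port of the NIC in a network namespace so target and initiator can exercise real hardware on a single host: flush addresses, create the namespace, move the target-side interface into it, assign 10.0.0.1/10.0.0.2, bring the links up, open TCP/4420 in iptables, and ping in both directions. A condensed sketch of that plumbing, using the interface and namespace names from the trace:

# Target interface lives in its own namespace; initiator stays in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2    # verify initiator -> target reachability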
-- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:07.736 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:07.736 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:07.994 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:07.994 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:12:07.994 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:12:07.994 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:07.994 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:07.994 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:07.994 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.994 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:08.252 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.252 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:12:08.252 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.252 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:08.252 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.252 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:12:08.252 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:08.252 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.252 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:08.252 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.252 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:12:08.252 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:08.252 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.252 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:08.252 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.252 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:08.252 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.252 12:33:34 
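Once the example target is listening on /var/tmp/spdk.sock, the rpc_cmd calls above build the export path: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, a subsystem, a namespace, and a TCP listener on 10.0.0.2:4420. rpc_cmd is autotest's wrapper that effectively forwards to scripts/rpc.py, so the same setup can be issued directly (socket path left at its default):

# Equivalent RPCs, copied from the trace above.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512                        # -> Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420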
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:08.252 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.252 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:12:08.252 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:20.484 Initializing NVMe Controllers 00:12:20.484 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:20.484 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:20.484 Initialization complete. Launching workers. 00:12:20.484 ======================================================== 00:12:20.484 Latency(us) 00:12:20.484 Device Information : IOPS MiB/s Average min max 00:12:20.484 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18375.90 71.78 3483.78 689.65 15710.72 00:12:20.484 ======================================================== 00:12:20.484 Total : 18375.90 71.78 3483.78 689.65 15710.72 00:12:20.484 00:12:20.484 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:12:20.484 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:12:20.485 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@512 -- # nvmfcleanup 00:12:20.485 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:12:20.485 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:20.485 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:12:20.485 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:20.485 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:20.485 rmmod nvme_tcp 00:12:20.485 rmmod nvme_fabrics 00:12:20.485 rmmod nvme_keyring 00:12:20.485 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:20.485 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:12:20.485 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:12:20.485 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@513 -- # '[' -n 246384 ']' 00:12:20.485 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # killprocess 246384 00:12:20.485 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 246384 ']' 00:12:20.485 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 246384 00:12:20.485 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:12:20.485 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:20.485 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 246384 00:12:20.485 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # 
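The load phase above drives the target with spdk_nvme_perf from the root namespace for ten seconds before the latency summary is printed. A standalone sketch of that initiator-side invocation, as taken from the trace (-M sets the read share of the mixed workload, here 30% reads / 70% writes):

# 64 queue depth, 4 KiB I/O, random mixed read/write, 10 s run.
./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'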
process_name=nvmf 00:12:20.485 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:12:20.485 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 246384' 00:12:20.485 killing process with pid 246384 00:12:20.485 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 246384 00:12:20.485 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 246384 00:12:20.485 nvmf threads initialize successfully 00:12:20.485 bdev subsystem init successfully 00:12:20.485 created a nvmf target service 00:12:20.485 create targets's poll groups done 00:12:20.485 all subsystems of target started 00:12:20.485 nvmf target is running 00:12:20.485 all subsystems of target stopped 00:12:20.485 destroy targets's poll groups done 00:12:20.485 destroyed the nvmf target service 00:12:20.485 bdev subsystem finish successfully 00:12:20.485 nvmf threads destroy successfully 00:12:20.485 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:12:20.485 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:12:20.485 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:12:20.485 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:12:20.485 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@787 -- # iptables-save 00:12:20.485 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:12:20.485 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@787 -- # iptables-restore 00:12:20.485 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:20.485 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:20.485 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.485 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:20.485 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.744 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:20.744 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:12:20.744 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:20.744 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:21.004 00:12:21.005 real 0m19.828s 00:12:21.005 user 0m46.427s 00:12:21.005 sys 0m6.003s 00:12:21.005 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:21.005 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:21.005 ************************************ 00:12:21.005 END TEST nvmf_example 00:12:21.005 ************************************ 00:12:21.005 12:33:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:21.005 12:33:46 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:21.005 12:33:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:21.005 12:33:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:21.005 ************************************ 00:12:21.005 START TEST nvmf_filesystem 00:12:21.005 ************************************ 00:12:21.005 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:21.005 * Looking for test storage... 00:12:21.005 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:21.005 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:21.005 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:12:21.005 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:21.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.005 --rc genhtml_branch_coverage=1 00:12:21.005 --rc genhtml_function_coverage=1 00:12:21.005 --rc genhtml_legend=1 00:12:21.005 --rc geninfo_all_blocks=1 00:12:21.005 --rc geninfo_unexecuted_blocks=1 00:12:21.005 00:12:21.005 ' 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:21.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.005 --rc genhtml_branch_coverage=1 00:12:21.005 --rc genhtml_function_coverage=1 00:12:21.005 --rc genhtml_legend=1 00:12:21.005 --rc geninfo_all_blocks=1 00:12:21.005 --rc geninfo_unexecuted_blocks=1 00:12:21.005 00:12:21.005 ' 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:21.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.005 --rc genhtml_branch_coverage=1 00:12:21.005 --rc genhtml_function_coverage=1 00:12:21.005 --rc genhtml_legend=1 00:12:21.005 --rc geninfo_all_blocks=1 00:12:21.005 --rc geninfo_unexecuted_blocks=1 00:12:21.005 00:12:21.005 ' 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:21.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.005 --rc genhtml_branch_coverage=1 00:12:21.005 --rc genhtml_function_coverage=1 00:12:21.005 --rc genhtml_legend=1 00:12:21.005 --rc geninfo_all_blocks=1 00:12:21.005 --rc geninfo_unexecuted_blocks=1 00:12:21.005 00:12:21.005 ' 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:12:21.005 12:33:47 
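The trace above is scripts/common.sh deciding whether the installed lcov is older than 2: each version string is split into numeric fields and compared left to right (lt 1.15 2 -> true, so the branch-coverage options get enabled). A minimal sketch of the same dotted-version comparison, simplified to '.' separators only:

# Returns success (0) if $1 < $2, comparing numeric fields left to right.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
    done
    return 1    # versions are equal
}
version_lt 1.15 2 && echo "lcov is older than 2"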
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:12:21.005 12:33:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:12:21.005 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:12:21.006 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:12:21.006 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:12:21.006 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:12:21.006 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:12:21.006 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:12:21.006 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:12:21.006 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:12:21.006 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:12:21.006 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:12:21.006 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:12:21.006 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:12:21.006 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:21.006 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:12:21.006 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:12:21.006 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:12:21.006 12:33:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:12:21.006 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:21.006 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:12:21.006 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:21.006 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:12:21.006 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:12:21.006 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:12:21.006 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:12:21.006 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:12:21.006 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:12:21.006 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:12:21.006 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:12:21.006 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:12:21.006 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:21.006 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:12:21.006 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:12:21.006 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:12:21.006 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:12:21.006 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:12:21.006 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:12:21.006 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:12:21.006 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:12:21.006 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:12:21.006 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:12:21.006 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:21.006 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_FC=n 00:12:21.006 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:12:21.006 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:12:21.269 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:12:21.269 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:12:21.269 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:12:21.269 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:12:21.269 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:12:21.269 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:12:21.269 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:12:21.269 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:12:21.269 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:21.269 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:12:21.269 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:12:21.269 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_URING=n 00:12:21.269 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:21.269 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:21.269 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:21.269 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:21.269 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:21.269 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:21.269 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:12:21.269 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:21.269 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:21.269 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:21.269 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:21.269 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:21.269 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:21.269 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:21.269 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:12:21.269 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:21.269 #define SPDK_CONFIG_H 00:12:21.269 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:21.269 #define SPDK_CONFIG_APPS 1 00:12:21.269 #define SPDK_CONFIG_ARCH native 00:12:21.269 #undef SPDK_CONFIG_ASAN 00:12:21.269 #undef SPDK_CONFIG_AVAHI 00:12:21.269 #undef SPDK_CONFIG_CET 00:12:21.269 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:21.269 #define SPDK_CONFIG_COVERAGE 1 00:12:21.269 #define SPDK_CONFIG_CROSS_PREFIX 00:12:21.269 #undef SPDK_CONFIG_CRYPTO 00:12:21.269 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:21.269 #undef SPDK_CONFIG_CUSTOMOCF 00:12:21.269 #undef SPDK_CONFIG_DAOS 00:12:21.269 #define SPDK_CONFIG_DAOS_DIR 00:12:21.269 #define SPDK_CONFIG_DEBUG 1 00:12:21.269 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:21.269 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:12:21.269 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:12:21.269 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:12:21.270 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:21.270 #undef SPDK_CONFIG_DPDK_UADK 00:12:21.270 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:21.270 #define SPDK_CONFIG_EXAMPLES 1 00:12:21.270 #undef SPDK_CONFIG_FC 00:12:21.270 #define SPDK_CONFIG_FC_PATH 00:12:21.270 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:21.270 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:21.270 #define SPDK_CONFIG_FSDEV 1 00:12:21.270 #undef SPDK_CONFIG_FUSE 00:12:21.270 #undef SPDK_CONFIG_FUZZER 00:12:21.270 #define SPDK_CONFIG_FUZZER_LIB 00:12:21.270 #undef SPDK_CONFIG_GOLANG 00:12:21.270 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:21.270 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:21.270 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:21.270 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:21.270 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:21.270 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:21.270 #undef SPDK_CONFIG_HAVE_LZ4 00:12:21.270 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:21.270 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:21.270 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:21.270 #define SPDK_CONFIG_IDXD 1 00:12:21.270 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:21.270 #undef SPDK_CONFIG_IPSEC_MB 00:12:21.270 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:21.270 #define SPDK_CONFIG_ISAL 1 00:12:21.270 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:21.270 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:21.270 #define SPDK_CONFIG_LIBDIR 00:12:21.270 #undef SPDK_CONFIG_LTO 00:12:21.270 #define SPDK_CONFIG_MAX_LCORES 128 00:12:21.270 #define SPDK_CONFIG_NVME_CUSE 1 00:12:21.270 #undef SPDK_CONFIG_OCF 00:12:21.270 #define SPDK_CONFIG_OCF_PATH 00:12:21.270 #define SPDK_CONFIG_OPENSSL_PATH 00:12:21.270 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:21.270 #define SPDK_CONFIG_PGO_DIR 00:12:21.270 #undef SPDK_CONFIG_PGO_USE 00:12:21.270 #define SPDK_CONFIG_PREFIX /usr/local 00:12:21.270 #undef SPDK_CONFIG_RAID5F 00:12:21.270 #undef SPDK_CONFIG_RBD 00:12:21.270 #define SPDK_CONFIG_RDMA 1 00:12:21.270 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:21.270 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:21.270 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:21.270 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:21.270 #define SPDK_CONFIG_SHARED 1 00:12:21.270 #undef SPDK_CONFIG_SMA 00:12:21.270 
#define SPDK_CONFIG_TESTS 1 00:12:21.270 #undef SPDK_CONFIG_TSAN 00:12:21.270 #define SPDK_CONFIG_UBLK 1 00:12:21.270 #define SPDK_CONFIG_UBSAN 1 00:12:21.270 #undef SPDK_CONFIG_UNIT_TESTS 00:12:21.270 #undef SPDK_CONFIG_URING 00:12:21.270 #define SPDK_CONFIG_URING_PATH 00:12:21.270 #undef SPDK_CONFIG_URING_ZNS 00:12:21.270 #undef SPDK_CONFIG_USDT 00:12:21.270 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:21.270 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:21.270 #define SPDK_CONFIG_VFIO_USER 1 00:12:21.270 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:21.270 #define SPDK_CONFIG_VHOST 1 00:12:21.270 #define SPDK_CONFIG_VIRTIO 1 00:12:21.270 #undef SPDK_CONFIG_VTUNE 00:12:21.270 #define SPDK_CONFIG_VTUNE_DIR 00:12:21.270 #define SPDK_CONFIG_WERROR 1 00:12:21.270 #define SPDK_CONFIG_WPDK_DIR 00:12:21.270 #undef SPDK_CONFIG_XNVME 00:12:21.270 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:21.270 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:21.270 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:21.270 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:21.270 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:21.270 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:21.270 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:21.270 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.270 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.270 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
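The long header dump above is applications.sh reading include/spdk/config.h and pattern-matching the whole file against "#define SPDK_CONFIG_DEBUG" to decide whether debug-app shortcuts apply. The same flag probe in isolation, assuming the header path shown in the trace:

# True when the tree was configured as a debug build.
config_h=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h
if [[ -e "$config_h" && $(< "$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
    echo "debug build"
fi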
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.270 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:21.270 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.270 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:21.270 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:21.270 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:21.270 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:21.270 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:12:21.270 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:21.270 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:12:21.270 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:12:21.270 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:12:21.270 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:12:21.270 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:12:21.270 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:21.270 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:21.270 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:21.270 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:21.270 12:33:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:21.270 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:21.270 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:12:21.270 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:21.270 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:21.270 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:21.270 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:21.270 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:12:21.270 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:12:21.270 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:12:21.270 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:12:21.270 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:12:21.270 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:12:21.270 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:21.270 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:12:21.270 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:21.270 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:12:21.270 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:21.270 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:12:21.270 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:21.270 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:12:21.270 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:21.270 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:12:21.270 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:21.270 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:12:21.270 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:21.270 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:12:21.270 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:21.270 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:12:21.270 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:12:21.270 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:21.271 12:33:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 
00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : v23.11 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:21.271 12:33:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:21.271 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:21.272 12:33:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # 
AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j96 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 248782 ]] 00:12:21.272 12:33:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 248782 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.o96e7h 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.o96e7h/tests/target /tmp/spdk.o96e7h 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:12:21.272 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/autotest_common.sh@373 -- # avails["$mount"]=722997248 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=4561432576 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=88330018816 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=95552421888 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=7222403072 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=47766179840 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=47776210944 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10031104 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=19087466496 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=19110486016 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23019520 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=47776026624 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=47776210944 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=184320 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=9555226624 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=9555238912 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:12:21.273 * Looking for test storage... 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=88330018816 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=9436995584 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:21.273 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:21.273 12:33:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1668 -- # set -o errtrace 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1672 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1673 -- # true 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1675 -- # xtrace_fd 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:21.273 12:33:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:21.273 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:21.274 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:21.274 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:21.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.274 --rc genhtml_branch_coverage=1 00:12:21.274 --rc genhtml_function_coverage=1 00:12:21.274 --rc genhtml_legend=1 00:12:21.274 --rc geninfo_all_blocks=1 00:12:21.274 --rc geninfo_unexecuted_blocks=1 00:12:21.274 00:12:21.274 ' 00:12:21.274 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:21.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.274 --rc genhtml_branch_coverage=1 00:12:21.274 --rc genhtml_function_coverage=1 00:12:21.274 --rc genhtml_legend=1 00:12:21.274 --rc geninfo_all_blocks=1 00:12:21.274 --rc geninfo_unexecuted_blocks=1 00:12:21.274 00:12:21.274 ' 00:12:21.274 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:21.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.274 --rc genhtml_branch_coverage=1 00:12:21.274 --rc genhtml_function_coverage=1 00:12:21.274 --rc genhtml_legend=1 00:12:21.274 --rc geninfo_all_blocks=1 00:12:21.274 --rc geninfo_unexecuted_blocks=1 00:12:21.274 00:12:21.274 ' 00:12:21.274 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:21.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.274 --rc genhtml_branch_coverage=1 00:12:21.274 --rc 
genhtml_function_coverage=1 00:12:21.274 --rc genhtml_legend=1 00:12:21.274 --rc geninfo_all_blocks=1 00:12:21.274 --rc geninfo_unexecuted_blocks=1 00:12:21.274 00:12:21.274 ' 00:12:21.274 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:21.274 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:12:21.274 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:21.274 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:21.274 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:21.274 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:21.274 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:21.274 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:21.274 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:21.274 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:21.274 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:21.274 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:21.274 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:12:21.274 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:12:21.274 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:21.274 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:21.274 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:21.274 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:21.274 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:21.274 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:21.274 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:21.274 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:21.274 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:21.274 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.274 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.274 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.274 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:21.274 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.274 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:12:21.274 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:21.274 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:21.274 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:21.274 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:12:21.274 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:21.274 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:21.274 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:21.274 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:21.274 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:21.274 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:21.274 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:12:21.274 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:21.274 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:12:21.274 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:12:21.274 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:21.274 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # prepare_net_devs 00:12:21.274 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@434 -- # local -g is_hw=no 00:12:21.274 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # remove_spdk_ns 00:12:21.274 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:21.274 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:21.274 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:21.534 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:12:21.534 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:12:21.534 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:12:21.534 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:12:28.112 
12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:28.112 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 
00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:28.112 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:28.112 Found net devices under 0000:af:00.0: cvl_0_0 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:28.112 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:28.113 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:28.113 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:28.113 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:28.113 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:28.113 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:28.113 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:28.113 Found net devices under 0000:af:00.1: cvl_0_1 00:12:28.113 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 
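The two "Found net devices under ..." records above come from a sysfs walk: for each PCI function that matched a supported NIC ID, the harness lists the kernel netdevs registered under that device and keeps them if the link is up. A minimal sketch of that lookup, using the PCI addresses from this run (the real logic, including the up/down check, lives in test/nvmf/common.sh):

for pci in 0000:af:00.0 0000:af:00.1; do
    # every entry under .../net/ is a kernel netdev bound to this PCI function
    for net_path in "/sys/bus/pci/devices/$pci/net/"*; do
        dev=${net_path##*/}    # strip the sysfs path, e.g. cvl_0_0
        echo "Found net devices under $pci: $dev"
    done
done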
00:12:28.113 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:12:28.113 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # is_hw=yes 00:12:28.113 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:12:28.113 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:12:28.113 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:12:28.113 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:28.113 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:28.113 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:28.113 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:28.113 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:28.113 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:28.113 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:28.113 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:28.113 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:28.113 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:28.113 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:28.113 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:28.113 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:28.113 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:28.113 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:28.113 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:28.113 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:28.113 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:28.113 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:28.113 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:28.113 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:28.113 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:28.113 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:12:28.113 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:28.113 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.498 ms 00:12:28.113 00:12:28.113 --- 10.0.0.2 ping statistics --- 00:12:28.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.113 rtt min/avg/max/mdev = 0.498/0.498/0.498/0.000 ms 00:12:28.113 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:28.113 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:28.113 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:12:28.113 00:12:28.113 --- 10.0.0.1 ping statistics --- 00:12:28.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.113 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:12:28.113 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:28.113 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # return 0 00:12:28.113 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:12:28.113 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:28.113 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:12:28.113 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:12:28.113 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:28.113 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:12:28.113 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:12:28.113 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:12:28.113 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:28.113 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:28.113 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:28.113 ************************************ 00:12:28.113 START TEST nvmf_filesystem_no_in_capsule 00:12:28.113 ************************************ 00:12:28.113 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:12:28.113 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:12:28.113 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:28.113 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:28.113 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:28.113 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:28.113 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # nvmfpid=251966 00:12:28.113 12:33:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # waitforlisten 251966 00:12:28.113 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:28.113 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 251966 ']' 00:12:28.113 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:28.113 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:28.113 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:28.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:28.113 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:28.113 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:28.113 [2024-12-16 12:33:53.326009] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:12:28.113 [2024-12-16 12:33:53.326057] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:28.113 [2024-12-16 12:33:53.397517] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:28.113 [2024-12-16 12:33:53.438784] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:28.113 [2024-12-16 12:33:53.438823] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:28.113 [2024-12-16 12:33:53.438830] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:28.113 [2024-12-16 12:33:53.438836] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:28.113 [2024-12-16 12:33:53.438842] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
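At this point the TCP test bed is up and the target has been launched: one port of the E810 pair (cvl_0_0) was moved into a private network namespace to play the target, the other (cvl_0_1) stayed in the root namespace as the initiator, reachability was verified with one ping in each direction, and nvmf_tgt was started inside the namespace. A condensed sketch of the bring-up logged above (interface names, IPs, and flags are the ones from this run):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP in
ping -c 1 10.0.0.2                                                   # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target ns -> initiator
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF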
00:12:28.113 [2024-12-16 12:33:53.438902] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:28.113 [2024-12-16 12:33:53.439008] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:28.113 [2024-12-16 12:33:53.439133] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.113 [2024-12-16 12:33:53.439133] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:28.113 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:28.113 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:12:28.113 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:28.113 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:28.113 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:28.113 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:28.113 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:28.113 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:28.113 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.113 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:28.113 [2024-12-16 12:33:53.575642] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:28.113 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.113 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:28.113 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.113 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:28.113 Malloc1 00:12:28.113 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.114 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:28.114 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.114 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:28.114 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.114 12:33:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:28.114 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.114 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:28.114 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.114 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:28.114 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.114 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:28.114 [2024-12-16 12:33:53.727239] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:28.114 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.114 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:28.114 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:12:28.114 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:12:28.114 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:12:28.114 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:12:28.114 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:28.114 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.114 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:28.114 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.114 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:12:28.114 { 00:12:28.114 "name": "Malloc1", 00:12:28.114 "aliases": [ 00:12:28.114 "d55261df-3999-49d8-b968-032ebc7e2897" 00:12:28.114 ], 00:12:28.114 "product_name": "Malloc disk", 00:12:28.114 "block_size": 512, 00:12:28.114 "num_blocks": 1048576, 00:12:28.114 "uuid": "d55261df-3999-49d8-b968-032ebc7e2897", 00:12:28.114 "assigned_rate_limits": { 00:12:28.114 "rw_ios_per_sec": 0, 00:12:28.114 "rw_mbytes_per_sec": 0, 00:12:28.114 "r_mbytes_per_sec": 0, 00:12:28.114 "w_mbytes_per_sec": 0 00:12:28.114 }, 00:12:28.114 "claimed": true, 00:12:28.114 "claim_type": "exclusive_write", 00:12:28.114 "zoned": false, 00:12:28.114 "supported_io_types": { 00:12:28.114 "read": 
true, 00:12:28.114 "write": true, 00:12:28.114 "unmap": true, 00:12:28.114 "flush": true, 00:12:28.114 "reset": true, 00:12:28.114 "nvme_admin": false, 00:12:28.114 "nvme_io": false, 00:12:28.114 "nvme_io_md": false, 00:12:28.114 "write_zeroes": true, 00:12:28.114 "zcopy": true, 00:12:28.114 "get_zone_info": false, 00:12:28.114 "zone_management": false, 00:12:28.114 "zone_append": false, 00:12:28.114 "compare": false, 00:12:28.114 "compare_and_write": false, 00:12:28.114 "abort": true, 00:12:28.114 "seek_hole": false, 00:12:28.114 "seek_data": false, 00:12:28.114 "copy": true, 00:12:28.114 "nvme_iov_md": false 00:12:28.114 }, 00:12:28.114 "memory_domains": [ 00:12:28.114 { 00:12:28.114 "dma_device_id": "system", 00:12:28.114 "dma_device_type": 1 00:12:28.114 }, 00:12:28.114 { 00:12:28.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.114 "dma_device_type": 2 00:12:28.114 } 00:12:28.114 ], 00:12:28.114 "driver_specific": {} 00:12:28.114 } 00:12:28.114 ]' 00:12:28.114 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:12:28.114 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:12:28.114 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:12:28.114 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:12:28.114 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:12:28.114 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:12:28.114 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:28.114 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:29.046 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:29.046 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:12:29.046 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:29.046 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:29.046 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:12:31.567 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:31.567 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:31.567 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:12:31.567 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:31.567 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:31.567 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:12:31.567 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:31.567 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:31.567 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:31.567 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:31.567 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:31.567 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:31.567 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:31.567 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:31.567 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:31.567 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:31.567 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:31.567 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:31.825 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:32.756 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:32.757 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:32.757 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:32.757 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:32.757 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:32.757 ************************************ 00:12:32.757 START TEST filesystem_ext4 00:12:32.757 ************************************ 00:12:32.757 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
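The records above provision the target over JSON-RPC and attach the initiator: a TCP transport, a 512 MiB malloc bdev, a subsystem exposing that bdev as a namespace, and a listener on 10.0.0.2:4420; the kernel initiator then connects, the harness confirms the bdev and block-device sizes agree (bdev_get_bdevs piped through jq), and /dev/nvme0n1 is given a single GPT partition. A sketch of the same sequence with scripts/rpc.py (the rpc_cmd wrapper targets the default /var/tmp/spdk.sock; the --hostnqn/--hostid flags on nvme connect are trimmed here):

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1          # 512 MB bdev, 512 B blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .num_blocks'    # size check, as above
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe

Each filesystem_<fstype> test that follows then formats the partition (mkfs.ext4 -F, or mkfs.btrfs/mkfs.xfs -f), mounts it at /mnt/device, writes and removes a file with a sync in between, and unmounts while checking the target process is still alive (kill -0 $nvmfpid).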
00:12:32.757 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:32.757 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:32.757 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:32.757 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:12:32.757 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:32.757 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:12:32.757 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:12:32.757 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:12:32.757 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:12:32.757 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:32.757 mke2fs 1.47.0 (5-Feb-2023) 00:12:33.014 Discarding device blocks: 0/522240 done 00:12:33.014 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:33.014 Filesystem UUID: f4ed5ad2-92a3-4784-abf4-6433fe8b4336 00:12:33.014 Superblock backups stored on blocks: 00:12:33.014 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:33.014 00:12:33.014 Allocating group tables: 0/64 done 00:12:33.014 Writing inode tables: 0/64 done 00:12:33.580 Creating journal (8192 blocks): done 00:12:34.093 Writing superblocks and filesystem accounting information: 0/6450/64 done 00:12:34.093 00:12:34.093 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:12:34.093 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:39.351 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:39.351 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:39.351 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:39.351 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:39.351 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:39.351 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:39.351 
12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 251966 00:12:39.351 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:39.351 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:39.609 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:39.609 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:39.609 00:12:39.609 real 0m6.694s 00:12:39.609 user 0m0.021s 00:12:39.609 sys 0m0.120s 00:12:39.609 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:39.609 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:39.609 ************************************ 00:12:39.609 END TEST filesystem_ext4 00:12:39.609 ************************************ 00:12:39.609 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:39.609 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:39.609 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:39.609 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:39.609 ************************************ 00:12:39.609 START TEST filesystem_btrfs 00:12:39.609 ************************************ 00:12:39.609 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:39.609 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:39.609 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:39.609 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:39.609 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:12:39.609 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:39.609 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:12:39.609 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:12:39.609 12:34:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:12:39.609 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:12:39.609 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:39.609 btrfs-progs v6.8.1 00:12:39.609 See https://btrfs.readthedocs.io for more information. 00:12:39.609 00:12:39.609 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:39.609 NOTE: several default settings have changed in version 5.15, please make sure 00:12:39.609 this does not affect your deployments: 00:12:39.609 - DUP for metadata (-m dup) 00:12:39.609 - enabled no-holes (-O no-holes) 00:12:39.609 - enabled free-space-tree (-R free-space-tree) 00:12:39.609 00:12:39.609 Label: (null) 00:12:39.609 UUID: 9420c231-3bb5-48b9-98c8-27821b3bf0f1 00:12:39.609 Node size: 16384 00:12:39.609 Sector size: 4096 (CPU page size: 4096) 00:12:39.609 Filesystem size: 510.00MiB 00:12:39.609 Block group profiles: 00:12:39.609 Data: single 8.00MiB 00:12:39.609 Metadata: DUP 32.00MiB 00:12:39.609 System: DUP 8.00MiB 00:12:39.609 SSD detected: yes 00:12:39.609 Zoned device: no 00:12:39.609 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:39.609 Checksum: crc32c 00:12:39.609 Number of devices: 1 00:12:39.609 Devices: 00:12:39.609 ID SIZE PATH 00:12:39.609 1 510.00MiB /dev/nvme0n1p1 00:12:39.609 00:12:39.609 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:12:39.609 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:40.175 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:40.175 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:40.175 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:40.175 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:40.175 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:40.175 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:40.175 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 251966 00:12:40.175 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:40.175 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:40.175 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:40.175 
12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:40.175 00:12:40.175 real 0m0.533s 00:12:40.175 user 0m0.029s 00:12:40.175 sys 0m0.151s 00:12:40.175 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:40.175 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:40.175 ************************************ 00:12:40.175 END TEST filesystem_btrfs 00:12:40.175 ************************************ 00:12:40.175 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:40.175 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:40.175 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:40.175 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:40.175 ************************************ 00:12:40.175 START TEST filesystem_xfs 00:12:40.175 ************************************ 00:12:40.175 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:12:40.175 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:40.175 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:40.175 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:40.175 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:12:40.175 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:40.175 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:12:40.175 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:12:40.175 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:12:40.175 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:12:40.175 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:40.175 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:40.175 = sectsz=512 attr=2, projid32bit=1 00:12:40.175 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:40.175 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:40.175 data 
= bsize=4096 blocks=130560, imaxpct=25 00:12:40.175 = sunit=0 swidth=0 blks 00:12:40.175 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:40.175 log =internal log bsize=4096 blocks=16384, version=2 00:12:40.175 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:40.175 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:41.107 Discarding blocks...Done. 00:12:41.107 12:34:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:12:41.107 12:34:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:44.384 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:44.384 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:44.384 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:44.384 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:44.384 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:44.384 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:44.384 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 251966 00:12:44.384 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:44.384 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:44.384 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:44.384 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:44.384 00:12:44.384 real 0m4.146s 00:12:44.384 user 0m0.031s 00:12:44.384 sys 0m0.112s 00:12:44.384 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:44.384 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:44.384 ************************************ 00:12:44.384 END TEST filesystem_xfs 00:12:44.384 ************************************ 00:12:44.384 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:44.384 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:44.384 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:44.384 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.384 12:34:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:44.384 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:12:44.384 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:44.384 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:44.384 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:44.384 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:44.384 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:12:44.384 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:44.384 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.384 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:44.384 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.384 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:44.384 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 251966 00:12:44.384 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 251966 ']' 00:12:44.384 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 251966 00:12:44.384 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:12:44.385 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:44.385 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 251966 00:12:44.642 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:44.642 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:44.642 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 251966' 00:12:44.642 killing process with pid 251966 00:12:44.642 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 251966 00:12:44.642 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@974 -- # wait 251966 00:12:44.901 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:44.901 00:12:44.901 real 0m17.535s 00:12:44.901 user 1m8.896s 00:12:44.901 sys 0m1.492s 00:12:44.901 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:44.901 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:44.901 ************************************ 00:12:44.901 END TEST nvmf_filesystem_no_in_capsule 00:12:44.901 ************************************ 00:12:44.901 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:44.901 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:44.901 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:44.901 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:44.901 ************************************ 00:12:44.901 START TEST nvmf_filesystem_in_capsule 00:12:44.901 ************************************ 00:12:44.901 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:12:44.901 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:44.901 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:44.901 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:44.901 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:44.901 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:44.901 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # nvmfpid=255895 00:12:44.901 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # waitforlisten 255895 00:12:44.901 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:44.901 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 255895 ']' 00:12:44.901 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.901 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:44.901 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
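The no-in-capsule pass is complete (pid 251966 torn down via nvme disconnect, nvmf_delete_subsystem, and killprocess) and a second target (pid 255895) is coming up for the in-capsule variant. As the records below show, the test body is identical; the only functional difference is the transport's in-capsule data size, which lets commands carry up to 4096 bytes of data inline in the command capsule instead of having the target fetch it separately. Sketch of the one changed call (in scripts/rpc.py, -c is --in-capsule-data-size and -u is --io-unit-size; -o is carried over from the harness's NVMF_TRANSPORT_OPTS):

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0       # first pass (no in-capsule data)
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096    # this pass (4 KiB in-capsule)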
00:12:44.901 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:44.901 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:44.901 [2024-12-16 12:34:10.908301] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:12:44.901 [2024-12-16 12:34:10.908348] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:45.159 [2024-12-16 12:34:10.979996] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:45.159 [2024-12-16 12:34:11.016507] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:45.159 [2024-12-16 12:34:11.016547] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:45.159 [2024-12-16 12:34:11.016554] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:45.159 [2024-12-16 12:34:11.016564] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:45.159 [2024-12-16 12:34:11.016585] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:45.159 [2024-12-16 12:34:11.016645] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:45.159 [2024-12-16 12:34:11.016750] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:45.159 [2024-12-16 12:34:11.016858] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.159 [2024-12-16 12:34:11.016860] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:45.159 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:45.159 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:12:45.159 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:45.159 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:45.159 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:45.159 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:45.159 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:45.159 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:45.159 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.159 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:45.159 [2024-12-16 12:34:11.162995] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:45.159 12:34:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.159 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:45.159 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.159 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:45.417 Malloc1 00:12:45.418 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.418 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:45.418 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.418 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:45.418 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.418 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:45.418 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.418 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:45.418 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.418 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:45.418 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.418 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:45.418 [2024-12-16 12:34:11.309288] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:45.418 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.418 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:45.418 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:12:45.418 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:12:45.418 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:12:45.418 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:12:45.418 12:34:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:45.418 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.418 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:45.418 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.418 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:12:45.418 { 00:12:45.418 "name": "Malloc1", 00:12:45.418 "aliases": [ 00:12:45.418 "867a8494-37bc-4c71-bb76-c2fca2ed1424" 00:12:45.418 ], 00:12:45.418 "product_name": "Malloc disk", 00:12:45.418 "block_size": 512, 00:12:45.418 "num_blocks": 1048576, 00:12:45.418 "uuid": "867a8494-37bc-4c71-bb76-c2fca2ed1424", 00:12:45.418 "assigned_rate_limits": { 00:12:45.418 "rw_ios_per_sec": 0, 00:12:45.418 "rw_mbytes_per_sec": 0, 00:12:45.418 "r_mbytes_per_sec": 0, 00:12:45.418 "w_mbytes_per_sec": 0 00:12:45.418 }, 00:12:45.418 "claimed": true, 00:12:45.418 "claim_type": "exclusive_write", 00:12:45.418 "zoned": false, 00:12:45.418 "supported_io_types": { 00:12:45.418 "read": true, 00:12:45.418 "write": true, 00:12:45.418 "unmap": true, 00:12:45.418 "flush": true, 00:12:45.418 "reset": true, 00:12:45.418 "nvme_admin": false, 00:12:45.418 "nvme_io": false, 00:12:45.418 "nvme_io_md": false, 00:12:45.418 "write_zeroes": true, 00:12:45.418 "zcopy": true, 00:12:45.418 "get_zone_info": false, 00:12:45.418 "zone_management": false, 00:12:45.418 "zone_append": false, 00:12:45.418 "compare": false, 00:12:45.418 "compare_and_write": false, 00:12:45.418 "abort": true, 00:12:45.418 "seek_hole": false, 00:12:45.418 "seek_data": false, 00:12:45.418 "copy": true, 00:12:45.418 "nvme_iov_md": false 00:12:45.418 }, 00:12:45.418 "memory_domains": [ 00:12:45.418 { 00:12:45.418 "dma_device_id": "system", 00:12:45.418 "dma_device_type": 1 00:12:45.418 }, 00:12:45.418 { 00:12:45.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.418 "dma_device_type": 2 00:12:45.418 } 00:12:45.418 ], 00:12:45.418 "driver_specific": {} 00:12:45.418 } 00:12:45.418 ]' 00:12:45.418 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:12:45.418 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:12:45.418 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:12:45.418 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:12:45.418 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:12:45.418 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:12:45.418 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:45.418 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:46.790 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:46.790 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:12:46.790 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:46.790 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:46.790 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:12:48.686 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:48.686 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:48.686 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:48.686 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:48.686 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:48.686 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:12:48.686 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:48.686 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:48.686 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:48.686 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:48.686 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:48.686 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:48.686 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:48.686 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:48.686 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:48.686 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:48.686 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:48.943 12:34:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:49.878 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:50.810 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:50.810 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:50.810 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:50.810 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:50.810 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:50.810 ************************************ 00:12:50.810 START TEST filesystem_in_capsule_ext4 00:12:50.810 ************************************ 00:12:50.810 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:50.810 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:50.810 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:50.810 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:50.810 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:12:50.810 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:50.810 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:12:50.810 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:12:50.810 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:12:50.810 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:12:50.810 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:50.810 mke2fs 1.47.0 (5-Feb-2023) 00:12:50.810 Discarding device blocks: 0/522240 done 00:12:50.810 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:50.810 Filesystem UUID: 07efa3fb-07f2-488f-9d0f-25d4c62d65c1 00:12:50.810 Superblock backups stored on blocks: 00:12:50.810 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:50.810 00:12:50.810 Allocating group tables: 0/64 done 00:12:50.810 Writing inode tables: 
0/64 done 00:12:51.067 Creating journal (8192 blocks): done 00:12:52.563 Writing superblocks and filesystem accounting information: 0/64 done 00:12:52.563 00:12:52.563 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:12:52.563 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:57.821 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:57.821 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:57.821 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:57.821 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:57.821 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:57.821 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:57.821 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 255895 00:12:57.821 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:57.821 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:57.821 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:57.821 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:57.821 00:12:57.821 real 0m7.137s 00:12:57.821 user 0m0.022s 00:12:57.821 sys 0m0.077s 00:12:57.821 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:57.821 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:57.821 ************************************ 00:12:57.821 END TEST filesystem_in_capsule_ext4 00:12:57.821 ************************************ 00:12:58.079 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:58.079 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:58.079 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:58.079 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:58.079 
************************************ 00:12:58.079 START TEST filesystem_in_capsule_btrfs 00:12:58.079 ************************************ 00:12:58.079 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:58.079 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:58.079 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:58.079 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:58.079 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:12:58.079 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:58.079 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:12:58.079 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:12:58.079 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:12:58.079 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:12:58.079 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:58.079 btrfs-progs v6.8.1 00:12:58.079 See https://btrfs.readthedocs.io for more information. 00:12:58.079 00:12:58.079 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
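
The make_filesystem helper traced above only varies the force flag per mkfs flavor: -F when fstype is ext4, -f otherwise (btrfs here, xfs in the next test). A sketch under those assumptions; the retry loop is inferred from the "local i=0" counter in the trace and may not match the real helper exactly:

# Sketch of make_filesystem as seen in the xtrace; retry logic assumed.
make_filesystem() {
    local fstype=$1 dev_name=$2 i=0 force
    if [[ "$fstype" == ext4 ]]; then
        force=-F            # mkfs.ext4 forces with -F
    else
        force=-f            # mkfs.btrfs / mkfs.xfs force with -f
    fi
    until mkfs."$fstype" $force "$dev_name"; do
        (( ++i >= 3 )) && return 1   # give up after a few attempts
        sleep 1
    done
    return 0
}
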
00:12:58.079 NOTE: several default settings have changed in version 5.15, please make sure 00:12:58.079 this does not affect your deployments: 00:12:58.079 - DUP for metadata (-m dup) 00:12:58.079 - enabled no-holes (-O no-holes) 00:12:58.079 - enabled free-space-tree (-R free-space-tree) 00:12:58.079 00:12:58.079 Label: (null) 00:12:58.079 UUID: b27dbca8-0992-4b49-a9dc-0af1496564f6 00:12:58.079 Node size: 16384 00:12:58.079 Sector size: 4096 (CPU page size: 4096) 00:12:58.079 Filesystem size: 510.00MiB 00:12:58.079 Block group profiles: 00:12:58.079 Data: single 8.00MiB 00:12:58.079 Metadata: DUP 32.00MiB 00:12:58.079 System: DUP 8.00MiB 00:12:58.079 SSD detected: yes 00:12:58.079 Zoned device: no 00:12:58.079 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:58.079 Checksum: crc32c 00:12:58.079 Number of devices: 1 00:12:58.079 Devices: 00:12:58.079 ID SIZE PATH 00:12:58.079 1 510.00MiB /dev/nvme0n1p1 00:12:58.079 00:12:58.079 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:12:58.079 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:58.337 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:58.337 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:58.337 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:58.337 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:58.337 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:58.337 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:58.337 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 255895 00:12:58.337 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:58.337 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:58.337 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:58.337 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:58.337 00:12:58.337 real 0m0.409s 00:12:58.337 user 0m0.017s 00:12:58.337 sys 0m0.118s 00:12:58.337 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:58.337 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:12:58.337 ************************************ 00:12:58.337 END TEST filesystem_in_capsule_btrfs 00:12:58.337 ************************************ 00:12:58.337 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:58.337 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:58.337 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:58.337 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:58.337 ************************************ 00:12:58.337 START TEST filesystem_in_capsule_xfs 00:12:58.337 ************************************ 00:12:58.337 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:12:58.337 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:58.337 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:58.337 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:58.337 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:12:58.337 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:58.337 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:12:58.337 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:12:58.337 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:12:58.337 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:12:58.337 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:58.595 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:58.595 = sectsz=512 attr=2, projid32bit=1 00:12:58.595 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:58.595 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:58.595 data = bsize=4096 blocks=130560, imaxpct=25 00:12:58.595 = sunit=0 swidth=0 blks 00:12:58.595 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:58.595 log =internal log bsize=4096 blocks=16384, version=2 00:12:58.595 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:58.595 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:59.525 Discarding blocks...Done. 
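
With mkfs.xfs done, the harness repeats the same smoke test it already ran for ext4 and btrfs: mount the partition, create and delete a file with syncs in between, unmount, and confirm the target process and block devices survived. Reconstructed from the target/filesystem.sh steps traced above (device and mountpoint paths taken from the log; variable names illustrative, loop/retry details omitted):

# Per-filesystem verification cycle, as traced for ext4/btrfs/xfs.
dev=/dev/nvme0n1p1
mnt=/mnt/device
mount "$dev" "$mnt"
touch "$mnt/aaa"          # write a file over NVMe/TCP to the malloc bdev
sync
rm "$mnt/aaa"             # delete it again
sync
umount "$mnt"
kill -0 "$nvmfpid"        # nvmf target must still be alive after the I/O
lsblk -l -o NAME | grep -q -w nvme0n1     # device still present
lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still present
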
00:12:59.525 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:12:59.525 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:02.049 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:02.049 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:13:02.049 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:02.049 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:13:02.049 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:13:02.049 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:02.049 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 255895 00:13:02.049 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:02.049 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:02.049 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:02.049 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:02.049 00:13:02.049 real 0m3.516s 00:13:02.049 user 0m0.020s 00:13:02.049 sys 0m0.077s 00:13:02.049 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:02.049 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:02.049 ************************************ 00:13:02.049 END TEST filesystem_in_capsule_xfs 00:13:02.049 ************************************ 00:13:02.049 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:02.307 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:02.307 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:02.307 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:02.307 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:02.307 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # local i=0 00:13:02.307 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:02.307 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:02.307 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:02.307 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:02.307 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:13:02.307 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:02.307 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.307 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:02.307 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.308 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:02.308 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 255895 00:13:02.308 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 255895 ']' 00:13:02.308 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 255895 00:13:02.308 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:13:02.308 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:02.308 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 255895 00:13:02.565 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:02.565 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:02.565 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 255895' 00:13:02.565 killing process with pid 255895 00:13:02.565 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 255895 00:13:02.565 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 255895 00:13:02.824 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:02.824 00:13:02.824 real 0m17.902s 00:13:02.824 user 1m10.362s 00:13:02.824 sys 0m1.401s 00:13:02.824 12:34:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:02.824 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:02.824 ************************************ 00:13:02.824 END TEST nvmf_filesystem_in_capsule 00:13:02.824 ************************************ 00:13:02.824 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:13:02.824 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@512 -- # nvmfcleanup 00:13:02.824 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:13:02.824 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:02.824 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:13:02.824 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:02.824 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:02.824 rmmod nvme_tcp 00:13:02.824 rmmod nvme_fabrics 00:13:02.824 rmmod nvme_keyring 00:13:02.824 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:02.824 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:13:02.824 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:13:02.824 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:13:02.824 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:13:02.824 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:13:02.824 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:13:02.824 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:13:02.824 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@787 -- # iptables-save 00:13:02.824 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:13:02.824 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@787 -- # iptables-restore 00:13:02.824 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:02.824 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:02.825 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:02.825 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:02.825 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:05.366 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:05.366 00:13:05.366 real 0m44.076s 00:13:05.366 user 2m21.323s 00:13:05.366 sys 0m7.474s 00:13:05.366 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:05.366 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:05.366 
************************************ 00:13:05.366 END TEST nvmf_filesystem 00:13:05.366 ************************************ 00:13:05.366 12:34:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:05.366 12:34:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:05.366 12:34:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:05.366 12:34:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:05.366 ************************************ 00:13:05.366 START TEST nvmf_target_discovery 00:13:05.366 ************************************ 00:13:05.366 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:05.366 * Looking for test storage... 00:13:05.366 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:05.366 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:05.366 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:13:05.366 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:05.366 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:05.366 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:05.366 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:05.366 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:05.366 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:13:05.366 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:13:05.366 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:13:05.366 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:13:05.366 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:13:05.366 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:13:05.366 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:13:05.366 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:05.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:05.367 --rc genhtml_branch_coverage=1 00:13:05.367 --rc genhtml_function_coverage=1 00:13:05.367 --rc genhtml_legend=1 00:13:05.367 --rc geninfo_all_blocks=1 00:13:05.367 --rc geninfo_unexecuted_blocks=1 00:13:05.367 00:13:05.367 ' 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:05.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:05.367 --rc genhtml_branch_coverage=1 00:13:05.367 --rc genhtml_function_coverage=1 00:13:05.367 --rc genhtml_legend=1 00:13:05.367 --rc geninfo_all_blocks=1 00:13:05.367 --rc geninfo_unexecuted_blocks=1 00:13:05.367 00:13:05.367 ' 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:05.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:05.367 --rc genhtml_branch_coverage=1 00:13:05.367 --rc genhtml_function_coverage=1 00:13:05.367 --rc genhtml_legend=1 00:13:05.367 --rc geninfo_all_blocks=1 00:13:05.367 --rc geninfo_unexecuted_blocks=1 00:13:05.367 00:13:05.367 ' 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:05.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:05.367 --rc genhtml_branch_coverage=1 00:13:05.367 --rc genhtml_function_coverage=1 00:13:05.367 --rc genhtml_legend=1 00:13:05.367 --rc geninfo_all_blocks=1 00:13:05.367 --rc geninfo_unexecuted_blocks=1 00:13:05.367 00:13:05.367 ' 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:05.367 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # prepare_net_devs 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@434 -- # local -g is_hw=no 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # remove_spdk_ns 00:13:05.367 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:05.368 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:05.368 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:05.368 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:13:05.368 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:13:05.368 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:13:05.368 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:11.947 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:11.947 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:13:11.947 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:11.947 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:11.947 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:11.947 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:11.947 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:11.947 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:13:11.947 12:34:36 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:11.947 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:13:11.947 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:13:11.947 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:13:11.947 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:13:11.947 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:13:11.947 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:13:11.947 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:11.947 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:11.947 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:11.947 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:11.947 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:11.947 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:11.947 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:11.947 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:11.947 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:11.947 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:11.947 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:11.947 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:13:11.947 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:13:11.947 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:13:11.947 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:13:11.947 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:13:11.947 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:13:11.947 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:13:11.947 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:11.947 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:11.947 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:13:11.947 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@370 -- # [[ ice 
== unbound ]] 00:13:11.947 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:11.947 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:11.947 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:13:11.947 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:13:11.947 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:11.947 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:11.947 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:13:11.947 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:13:11.947 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:11.947 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:11.947 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:13:11.947 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:13:11.947 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:13:11.947 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:13:11.948 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:13:11.948 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:11.948 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:13:11.948 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:11.948 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:13:11.948 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:13:11.948 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:11.948 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:11.948 Found net devices under 0000:af:00.0: cvl_0_0 00:13:11.948 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:13:11.948 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:13:11.948 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:11.948 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:13:11.948 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:11.948 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:13:11.948 12:34:36 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:13:11.948 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:11.948 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:11.948 Found net devices under 0000:af:00.1: cvl_0_1 00:13:11.948 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:13:11.948 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:13:11.948 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # is_hw=yes 00:13:11.948 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:13:11.948 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:13:11.948 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:13:11.948 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:11.948 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:11.948 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:11.948 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:11.948 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:11.948 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:11.948 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:11.948 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:11.948 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:11.948 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:11.948 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:11.948 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:11.948 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:11.948 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:11.948 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:11.948 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:11.948 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:11.948 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:11.948 12:34:36 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:11.948 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:11.948 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:11.948 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:11.948 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:11.948 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:11.948 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.292 ms 00:13:11.948 00:13:11.948 --- 10.0.0.2 ping statistics --- 00:13:11.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:11.948 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:13:11.948 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:11.948 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:11.948 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:13:11.948 00:13:11.948 --- 10.0.0.1 ping statistics --- 00:13:11.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:11.948 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:13:11.948 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:11.948 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # return 0 00:13:11.948 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:13:11.948 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:11.948 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:13:11.948 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:13:11.948 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:11.948 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:13:11.948 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:13:11.948 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:13:11.948 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:13:11.948 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:11.948 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:11.948 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # nvmfpid=262419 00:13:11.948 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # waitforlisten 262419 00:13:11.948 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:11.948 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 262419 ']' 00:13:11.948 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.948 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:11.948 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:11.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:11.948 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:11.948 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:11.948 [2024-12-16 12:34:37.200998] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:11.948 [2024-12-16 12:34:37.201044] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:11.948 [2024-12-16 12:34:37.273357] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:11.948 [2024-12-16 12:34:37.312512] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:11.948 [2024-12-16 12:34:37.312553] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:11.948 [2024-12-16 12:34:37.312560] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:11.948 [2024-12-16 12:34:37.312566] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:11.948 [2024-12-16 12:34:37.312571] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
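
The block above is nvmf_tcp_init wiring the two E810 ports into a point-to-point NVMe/TCP topology: the target-side port (cvl_0_0) is moved into a private network namespace, both sides get a 10.0.0.0/24 address, the firewall is opened for port 4420, and a ping in each direction proves connectivity before nvmf_tgt is launched inside the namespace. A condensed sketch of that sequence, with the interface names, addresses, and flags taken from the log output rather than from the script itself:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # admit NVMe/TCP on the initiator port
ping -c 1 10.0.0.2                                               # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # namespace -> initiator
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
# -m 0xF pins 4 reactors (cores 0-3, per the notices below); -e 0xFFFF enables all tracepoint groups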
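
Once the target is forked, the harness blocks in waitforlisten until the process answers on its RPC socket; the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line above is that loop. A minimal equivalent, as a sketch rather than the actual helper, assuming the default socket path and the PID shown in the log:

while ! scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
    # bail out early if nvmf_tgt (pid 262419 in this run) died before listening
    kill -0 262419 2>/dev/null || { echo 'nvmf_tgt exited before listening' >&2; break; }
    sleep 0.1
done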
00:13:11.948 [2024-12-16 12:34:37.312651] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:11.948 [2024-12-16 12:34:37.312788] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:13:11.948 [2024-12-16 12:34:37.312895] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.948 [2024-12-16 12:34:37.312897] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:13:11.948 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:11.948 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:13:11.948 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:13:11.948 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:11.948 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:11.948 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:11.948 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:11.948 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.948 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:11.948 [2024-12-16 12:34:37.465013] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:11.948 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.948 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:13:11.948 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:11.948 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:11.949 Null1 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:11.949 12:34:37 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:11.949 [2024-12-16 12:34:37.522511] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:11.949 Null2 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:13:11.949 Null3 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:11.949 Null4 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.949 12:34:37 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:13:11.949 00:13:11.949 Discovery Log Number of Records 6, Generation counter 6 00:13:11.949 =====Discovery Log Entry 0====== 00:13:11.949 trtype: tcp 00:13:11.949 adrfam: ipv4 00:13:11.949 subtype: current discovery subsystem 00:13:11.949 treq: not required 00:13:11.949 portid: 0 00:13:11.949 trsvcid: 4420 00:13:11.949 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:11.949 traddr: 10.0.0.2 00:13:11.949 eflags: explicit discovery connections, duplicate discovery information 00:13:11.949 sectype: none 00:13:11.949 =====Discovery Log Entry 1====== 00:13:11.949 trtype: tcp 00:13:11.949 adrfam: ipv4 00:13:11.949 subtype: nvme subsystem 00:13:11.949 treq: not required 00:13:11.949 portid: 0 00:13:11.949 trsvcid: 4420 00:13:11.949 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:11.949 traddr: 10.0.0.2 00:13:11.949 eflags: none 00:13:11.949 sectype: none 00:13:11.949 =====Discovery Log Entry 2====== 00:13:11.949 trtype: tcp 00:13:11.949 adrfam: ipv4 00:13:11.949 subtype: nvme subsystem 00:13:11.949 treq: not required 00:13:11.949 portid: 0 00:13:11.949 trsvcid: 4420 00:13:11.949 subnqn: nqn.2016-06.io.spdk:cnode2 00:13:11.949 traddr: 10.0.0.2 00:13:11.949 eflags: none 00:13:11.949 sectype: none 00:13:11.949 =====Discovery Log Entry 3====== 00:13:11.949 trtype: tcp 00:13:11.949 adrfam: ipv4 00:13:11.949 subtype: nvme subsystem 00:13:11.949 treq: not required 00:13:11.949 portid: 0 00:13:11.949 trsvcid: 4420 00:13:11.949 subnqn: nqn.2016-06.io.spdk:cnode3 00:13:11.949 traddr: 10.0.0.2 00:13:11.949 eflags: none 00:13:11.949 sectype: none 00:13:11.949 =====Discovery Log Entry 4====== 00:13:11.949 trtype: tcp 00:13:11.949 adrfam: ipv4 00:13:11.949 subtype: nvme subsystem 
00:13:11.949 treq: not required 00:13:11.949 portid: 0 00:13:11.949 trsvcid: 4420 00:13:11.949 subnqn: nqn.2016-06.io.spdk:cnode4 00:13:11.949 traddr: 10.0.0.2 00:13:11.949 eflags: none 00:13:11.949 sectype: none 00:13:11.949 =====Discovery Log Entry 5====== 00:13:11.949 trtype: tcp 00:13:11.949 adrfam: ipv4 00:13:11.949 subtype: discovery subsystem referral 00:13:11.949 treq: not required 00:13:11.949 portid: 0 00:13:11.949 trsvcid: 4430 00:13:11.949 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:11.949 traddr: 10.0.0.2 00:13:11.949 eflags: none 00:13:11.949 sectype: none 00:13:11.949 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:13:11.950 Perform nvmf subsystem discovery via RPC 00:13:11.950 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:13:11.950 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.950 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:11.950 [ 00:13:11.950 { 00:13:11.950 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:11.950 "subtype": "Discovery", 00:13:11.950 "listen_addresses": [ 00:13:11.950 { 00:13:11.950 "trtype": "TCP", 00:13:11.950 "adrfam": "IPv4", 00:13:11.950 "traddr": "10.0.0.2", 00:13:11.950 "trsvcid": "4420" 00:13:11.950 } 00:13:11.950 ], 00:13:11.950 "allow_any_host": true, 00:13:11.950 "hosts": [] 00:13:11.950 }, 00:13:11.950 { 00:13:11.950 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:11.950 "subtype": "NVMe", 00:13:11.950 "listen_addresses": [ 00:13:11.950 { 00:13:11.950 "trtype": "TCP", 00:13:11.950 "adrfam": "IPv4", 00:13:11.950 "traddr": "10.0.0.2", 00:13:11.950 "trsvcid": "4420" 00:13:11.950 } 00:13:11.950 ], 00:13:11.950 "allow_any_host": true, 00:13:11.950 "hosts": [], 00:13:11.950 "serial_number": "SPDK00000000000001", 00:13:11.950 "model_number": "SPDK bdev Controller", 00:13:11.950 "max_namespaces": 32, 00:13:11.950 "min_cntlid": 1, 00:13:11.950 "max_cntlid": 65519, 00:13:11.950 "namespaces": [ 00:13:11.950 { 00:13:11.950 "nsid": 1, 00:13:11.950 "bdev_name": "Null1", 00:13:11.950 "name": "Null1", 00:13:11.950 "nguid": "09AD98EA169546C7A013D4F211B680C2", 00:13:11.950 "uuid": "09ad98ea-1695-46c7-a013-d4f211b680c2" 00:13:11.950 } 00:13:11.950 ] 00:13:11.950 }, 00:13:11.950 { 00:13:11.950 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:11.950 "subtype": "NVMe", 00:13:11.950 "listen_addresses": [ 00:13:11.950 { 00:13:11.950 "trtype": "TCP", 00:13:11.950 "adrfam": "IPv4", 00:13:11.950 "traddr": "10.0.0.2", 00:13:11.950 "trsvcid": "4420" 00:13:11.950 } 00:13:11.950 ], 00:13:11.950 "allow_any_host": true, 00:13:11.950 "hosts": [], 00:13:11.950 "serial_number": "SPDK00000000000002", 00:13:11.950 "model_number": "SPDK bdev Controller", 00:13:11.950 "max_namespaces": 32, 00:13:11.950 "min_cntlid": 1, 00:13:11.950 "max_cntlid": 65519, 00:13:11.950 "namespaces": [ 00:13:11.950 { 00:13:11.950 "nsid": 1, 00:13:11.950 "bdev_name": "Null2", 00:13:11.950 "name": "Null2", 00:13:11.950 "nguid": "8E222DED0341472BAB1465666857C3BD", 00:13:11.950 "uuid": "8e222ded-0341-472b-ab14-65666857c3bd" 00:13:11.950 } 00:13:11.950 ] 00:13:11.950 }, 00:13:11.950 { 00:13:11.950 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:13:11.950 "subtype": "NVMe", 00:13:11.950 "listen_addresses": [ 00:13:11.950 { 00:13:11.950 "trtype": "TCP", 00:13:11.950 "adrfam": "IPv4", 00:13:11.950 "traddr": "10.0.0.2", 
00:13:11.950 "trsvcid": "4420" 00:13:11.950 } 00:13:11.950 ], 00:13:11.950 "allow_any_host": true, 00:13:11.950 "hosts": [], 00:13:11.950 "serial_number": "SPDK00000000000003", 00:13:11.950 "model_number": "SPDK bdev Controller", 00:13:11.950 "max_namespaces": 32, 00:13:11.950 "min_cntlid": 1, 00:13:11.950 "max_cntlid": 65519, 00:13:11.950 "namespaces": [ 00:13:11.950 { 00:13:11.950 "nsid": 1, 00:13:11.950 "bdev_name": "Null3", 00:13:11.950 "name": "Null3", 00:13:11.950 "nguid": "11809D5DBD2C4CD6B02C4F3951A97B64", 00:13:11.950 "uuid": "11809d5d-bd2c-4cd6-b02c-4f3951a97b64" 00:13:11.950 } 00:13:11.950 ] 00:13:11.950 }, 00:13:11.950 { 00:13:11.950 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:13:11.950 "subtype": "NVMe", 00:13:11.950 "listen_addresses": [ 00:13:11.950 { 00:13:11.950 "trtype": "TCP", 00:13:11.950 "adrfam": "IPv4", 00:13:11.950 "traddr": "10.0.0.2", 00:13:11.950 "trsvcid": "4420" 00:13:11.950 } 00:13:11.950 ], 00:13:11.950 "allow_any_host": true, 00:13:11.950 "hosts": [], 00:13:11.950 "serial_number": "SPDK00000000000004", 00:13:11.950 "model_number": "SPDK bdev Controller", 00:13:11.950 "max_namespaces": 32, 00:13:11.950 "min_cntlid": 1, 00:13:11.950 "max_cntlid": 65519, 00:13:11.950 "namespaces": [ 00:13:11.950 { 00:13:11.950 "nsid": 1, 00:13:11.950 "bdev_name": "Null4", 00:13:11.950 "name": "Null4", 00:13:11.950 "nguid": "2D3C672C4A0C4CB8BEE1E6A6F2C9DEE3", 00:13:11.950 "uuid": "2d3c672c-4a0c-4cb8-bee1-e6a6f2c9dee3" 00:13:11.950 } 00:13:11.950 ] 00:13:11.950 } 00:13:11.950 ] 00:13:11.950 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.950 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:13:11.950 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:11.950 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:11.950 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.950 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:11.950 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.950 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:13:11.950 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.950 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:11.950 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.950 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:11.950 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:13:11.950 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.950 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:11.950 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.950 12:34:37 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:13:11.950 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.950 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:11.950 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.950 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:11.950 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:13:11.950 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.950 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:11.950 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.950 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:13:11.950 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.950 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:11.950 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.950 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:11.950 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:13:11.950 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.950 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:11.950 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.950 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:13:11.950 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.950 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:11.950 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.950 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:13:11.950 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.950 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:11.950 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.950 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:13:11.950 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:13:11.950 12:34:37 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.950 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:11.950 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.210 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:13:12.210 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:13:12.210 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:13:12.210 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:13:12.210 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # nvmfcleanup 00:13:12.210 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:13:12.210 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:12.210 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:13:12.210 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:12.210 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:12.210 rmmod nvme_tcp 00:13:12.210 rmmod nvme_fabrics 00:13:12.210 rmmod nvme_keyring 00:13:12.210 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:12.210 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:13:12.210 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:13:12.210 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@513 -- # '[' -n 262419 ']' 00:13:12.210 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # killprocess 262419 00:13:12.210 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 262419 ']' 00:13:12.210 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 262419 00:13:12.210 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:13:12.210 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:12.210 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 262419 00:13:12.210 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:12.210 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:12.210 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 262419' 00:13:12.210 killing process with pid 262419 00:13:12.210 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 262419 00:13:12.210 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 262419 00:13:12.470 12:34:38 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:13:12.470 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:13:12.470 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:13:12.470 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:13:12.470 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@787 -- # iptables-save 00:13:12.470 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:13:12.470 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@787 -- # iptables-restore 00:13:12.470 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:12.470 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:12.470 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:12.470 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:12.470 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.379 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:14.379 00:13:14.379 real 0m9.389s 00:13:14.379 user 0m5.774s 00:13:14.379 sys 0m4.800s 00:13:14.379 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:14.379 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:14.379 ************************************ 00:13:14.379 END TEST nvmf_target_discovery 00:13:14.379 ************************************ 00:13:14.379 12:34:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:14.379 12:34:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:14.379 12:34:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:14.379 12:34:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:14.379 ************************************ 00:13:14.379 START TEST nvmf_referrals 00:13:14.379 ************************************ 00:13:14.379 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:14.640 * Looking for test storage... 
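
nvmf_target_discovery, which just finished above, drove the target entirely over RPC: one TCP transport, then four null bdevs each wrapped in a subsystem with a namespace and a 4420 listener, plus a discovery listener and a referral to port 4430, verified with nvme discover. Condensed into a sketch using scripts/rpc.py (the test's rpc_cmd wrapper resolves to the same script), with sizes, NQNs, and addresses copied from the log:

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192           # transport options as used by the test
for i in 1 2 3 4; do
    scripts/rpc.py bdev_null_create Null$i 102400 512            # null bdev: 102400 MiB, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
nvme discover -t tcp -a 10.0.0.2 -s 4420   # 6 records: discovery + cnode1-4 + the 4430 referral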
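
The teardown mirrors the setup: each subsystem is deleted before its backing null bdev, the 4430 referral is removed, bdev_get_bdevs must come back empty, and nvmftestfini unloads the kernel modules and unwinds the namespace. Roughly, per the commands visible in the log (the namespace removal is an assumption, since the _remove_spdk_ns helper body is not shown):

for i in 1 2 3 4; do
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
    scripts/rpc.py bdev_null_delete Null$i
done
scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
test -z "$(scripts/rpc.py bdev_get_bdevs | jq -r '.[].name')"    # nothing may survive the cleanup
modprobe -v -r nvme-tcp                     # also drags out nvme_fabrics/nvme_keyring, per the rmmod lines
iptables-save | grep -v SPDK_NVMF | iptables-restore             # drop only the rules the test tagged
ip netns delete cvl_0_0_ns_spdk             # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1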
00:13:14.640 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:14.640 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:14.640 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lcov --version 00:13:14.640 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:14.640 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:14.640 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:14.640 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:14.640 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:14.640 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:13:14.640 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:13:14.640 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:13:14.640 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:13:14.640 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:13:14.640 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:13:14.640 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:13:14.640 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:14.640 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:13:14.640 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:13:14.640 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:14.640 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:14.640 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:13:14.640 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:13:14.640 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:14.640 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:13:14.640 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:13:14.640 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:13:14.640 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:13:14.640 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:14.640 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:13:14.640 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:13:14.640 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:14.640 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:14.640 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:13:14.640 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:14.640 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:14.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.640 --rc genhtml_branch_coverage=1 00:13:14.640 --rc genhtml_function_coverage=1 00:13:14.640 --rc genhtml_legend=1 00:13:14.640 --rc geninfo_all_blocks=1 00:13:14.640 --rc geninfo_unexecuted_blocks=1 00:13:14.640 00:13:14.640 ' 00:13:14.640 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:14.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.640 --rc genhtml_branch_coverage=1 00:13:14.640 --rc genhtml_function_coverage=1 00:13:14.640 --rc genhtml_legend=1 00:13:14.640 --rc geninfo_all_blocks=1 00:13:14.640 --rc geninfo_unexecuted_blocks=1 00:13:14.640 00:13:14.640 ' 00:13:14.640 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:14.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.640 --rc genhtml_branch_coverage=1 00:13:14.640 --rc genhtml_function_coverage=1 00:13:14.640 --rc genhtml_legend=1 00:13:14.640 --rc geninfo_all_blocks=1 00:13:14.640 --rc geninfo_unexecuted_blocks=1 00:13:14.640 00:13:14.640 ' 00:13:14.640 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:14.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.640 --rc genhtml_branch_coverage=1 00:13:14.640 --rc genhtml_function_coverage=1 00:13:14.640 --rc genhtml_legend=1 00:13:14.640 --rc geninfo_all_blocks=1 00:13:14.640 --rc geninfo_unexecuted_blocks=1 00:13:14.640 00:13:14.640 ' 00:13:14.640 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:14.640 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:13:14.640 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:14.640 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:14.640 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:14.640 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:14.640 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:14.640 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:14.640 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:14.640 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:14.640 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:14.640 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:14.640 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:13:14.640 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:13:14.640 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:14.640 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:14.640 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:14.641 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:14.641 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:14.641 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:13:14.641 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:14.641 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:14.641 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:14.641 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.641 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.641 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.641 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:13:14.641 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.641 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:13:14.641 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:14.641 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:14.641 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:14.641 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:14.641 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:14.641 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:14.641 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:14.641 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:14.641 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:14.641 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:14.641 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:13:14.641 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:13:14.641 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:13:14.641 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:13:14.641 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:13:14.641 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:13:14.641 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:13:14.641 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:13:14.641 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:14.641 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # prepare_net_devs 00:13:14.641 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@434 -- # local -g is_hw=no 00:13:14.641 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # remove_spdk_ns 00:13:14.641 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:14.641 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:14.641 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.641 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:13:14.641 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:13:14.641 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:13:14.641 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:21.217 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:21.217 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:13:21.217 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:21.217 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:21.217 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:21.217 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:21.217 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:21.217 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:13:21.217 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:21.217 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:13:21.217 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:13:21.217 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:13:21.217 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:13:21.217 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:13:21.217 12:34:46 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:13:21.217 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:21.217 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:21.217 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:21.217 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:21.217 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:21.217 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:21.217 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:21.217 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:21.217 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:21.217 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:21.217 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:21.217 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:13:21.217 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:13:21.217 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:13:21.217 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:13:21.217 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:13:21.217 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:13:21.217 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:13:21.217 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:21.217 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:21.217 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:13:21.217 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:13:21.217 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:21.218 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:13:21.218 12:34:46 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ up == up ]] 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:21.218 Found net devices under 0000:af:00.0: cvl_0_0 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ up == up ]] 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:21.218 Found net devices under 0000:af:00.1: cvl_0_1 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # is_hw=yes 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:13:21.218 12:34:46 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:21.218 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:21.218 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.352 ms 00:13:21.218 00:13:21.218 --- 10.0.0.2 ping statistics --- 00:13:21.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:21.218 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:21.218 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:21.218 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:13:21.218 00:13:21.218 --- 10.0.0.1 ping statistics --- 00:13:21.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:21.218 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # return 0 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # nvmfpid=265978 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # waitforlisten 265978 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 265978 ']' 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:21.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
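The nvmf_tcp_init plumbing traced above is easier to follow as plain commands. A minimal sketch, assuming the two E810 ports have already been renamed cvl_0_0/cvl_0_1 by the harness (every command below appears verbatim in the trace; the harness additionally tags its iptables rule with an SPDK_NVMF comment so teardown can strip it later):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into its own namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns -> namespaced target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # and back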
00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:21.218 [2024-12-16 12:34:46.762268] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:21.218 [2024-12-16 12:34:46.762320] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:21.218 [2024-12-16 12:34:46.837961] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:21.218 [2024-12-16 12:34:46.879532] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:21.218 [2024-12-16 12:34:46.879569] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:21.218 [2024-12-16 12:34:46.879576] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:21.218 [2024-12-16 12:34:46.879583] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:21.218 [2024-12-16 12:34:46.879590] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:21.218 [2024-12-16 12:34:46.879649] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:21.218 [2024-12-16 12:34:46.879757] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:13:21.218 [2024-12-16 12:34:46.879845] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.218 [2024-12-16 12:34:46.879847] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:13:21.218 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:13:21.219 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:21.219 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:21.219 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:21.219 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:21.219 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.219 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:21.219 [2024-12-16 12:34:47.041161] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:21.219 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.219 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:13:21.219 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.219 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
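Condensed, the target startup and RPC sequence the referral test drives next (rpc_cmd is the suite's wrapper around SPDK's rpc.py talking to /var/tmp/spdk.sock; all flags are taken from the trace, and the expected referral count comes from the jq check traced below):

    # Launch nvmf_tgt inside the target namespace, as nvmfappstart does above
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
    rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
    rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
    rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430
    rpc_cmd nvmf_discovery_get_referrals | jq length     # expect 3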
00:13:21.219 [2024-12-16 12:34:47.054541] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:13:21.219 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.219 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:13:21.219 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.219 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:21.219 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.219 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:13:21.219 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.219 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:21.219 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.219 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:13:21.219 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.219 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:21.219 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.219 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:21.219 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:13:21.219 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.219 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:21.219 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.219 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:13:21.219 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:13:21.219 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:21.219 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:21.219 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:21.219 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:21.219 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.219 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:21.219 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.219 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:21.219 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:21.219 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:13:21.219 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:21.219 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:21.219 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:21.219 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:21.219 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:21.476 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:21.477 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:21.477 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:13:21.477 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.477 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:21.477 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.477 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:13:21.477 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.477 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:21.477 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.477 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:13:21.477 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.477 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:21.477 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.477 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:21.477 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:13:21.477 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.477 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:21.477 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.477 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:13:21.477 12:34:47 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:13:21.477 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:21.477 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:21.477 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:21.477 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:21.477 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:21.734 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:21.734 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:13:21.734 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:13:21.734 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.734 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:21.734 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.734 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:21.734 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.734 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:21.734 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.734 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:13:21.734 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:21.734 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:21.734 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:21.734 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.734 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:21.735 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:21.735 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.735 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:13:21.735 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:21.735 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:13:21.735 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:13:21.735 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:21.735 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:21.735 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:21.735 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:21.992 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:13:21.992 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:21.992 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:13:21.992 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:13:21.992 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:21.992 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:21.992 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:22.250 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:13:22.250 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:13:22.250 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:13:22.250 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:22.250 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:22.250 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:22.250 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:22.250 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:22.250 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.250 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:22.507 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.507 12:34:48 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:13:22.507 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:22.507 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:22.508 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:22.508 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.508 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:22.508 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:22.508 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.508 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:13:22.508 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:22.508 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:13:22.508 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:22.508 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:22.508 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:22.508 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:22.508 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:22.508 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:13:22.508 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:22.508 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:13:22.508 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:13:22.508 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:22.508 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:22.508 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:22.765 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:13:22.765 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:13:22.765 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:13:22.765 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:13:22.765 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:22.765 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:23.022 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:23.022 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:13:23.022 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.022 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:23.022 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.022 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:23.022 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:13:23.022 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.022 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:23.022 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.022 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:13:23.022 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:13:23.022 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:23.022 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:23.022 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:23.022 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:23.022 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:23.280 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:23.280 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:13:23.280 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:13:23.280 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:13:23.280 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # nvmfcleanup 00:13:23.280 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:13:23.280 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
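The host-side check repeated throughout the run is a single discover-and-filter pipeline. A sketch, assuming NVME_HOSTNQN/NVME_HOSTID hold the generated UUID shown in the trace:

    # Referral addresses as seen by the initiator: query the 8009 discovery
    # listener and keep every record except the current discovery subsystem
    nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -a 10.0.0.2 -s 8009 -o json |
      jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' |
      sort

    # For the -n variants, the test instead selects a record by subtype and
    # checks which NQN the referral advertises
    nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -a 10.0.0.2 -s 8009 -o json |
      jq '.records[] | select(.subtype == "discovery subsystem referral")' |
      jq -r .subnqn                     # expect nqn.2014-08.org.nvmexpress.discovery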
00:13:23.280 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:13:23.281 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:23.281 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:23.281 rmmod nvme_tcp 00:13:23.281 rmmod nvme_fabrics 00:13:23.281 rmmod nvme_keyring 00:13:23.281 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:23.281 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:13:23.281 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:13:23.281 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@513 -- # '[' -n 265978 ']' 00:13:23.281 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # killprocess 265978 00:13:23.281 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 265978 ']' 00:13:23.281 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 265978 00:13:23.281 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:13:23.281 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:23.281 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 265978 00:13:23.281 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:23.281 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:23.281 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 265978' 00:13:23.281 killing process with pid 265978 00:13:23.281 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 265978 00:13:23.281 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 265978 00:13:23.540 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:13:23.540 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:13:23.540 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:13:23.540 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:13:23.540 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@787 -- # iptables-save 00:13:23.540 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:13:23.540 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@787 -- # iptables-restore 00:13:23.540 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:23.540 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:23.540 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.540 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:23.540 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.082 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:26.082 00:13:26.082 real 0m11.143s 00:13:26.082 user 0m12.901s 00:13:26.082 sys 0m5.203s 00:13:26.082 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:26.082 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:26.082 ************************************ 00:13:26.082 END TEST nvmf_referrals 00:13:26.082 ************************************ 00:13:26.082 12:34:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:26.082 12:34:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:26.082 12:34:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:26.082 12:34:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:26.082 ************************************ 00:13:26.082 START TEST nvmf_connect_disconnect 00:13:26.082 ************************************ 00:13:26.082 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:26.082 * Looking for test storage... 00:13:26.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:26.082 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:26.082 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:13:26.082 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:26.082 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:26.082 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:26.082 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:26.082 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:26.082 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:13:26.082 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:13:26.082 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:13:26.082 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:13:26.082 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:13:26.082 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:13:26.082 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:13:26.082 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:26.082 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 
-- # case "$op" in 00:13:26.082 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:13:26.082 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:26.082 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:26.082 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:13:26.082 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:13:26.082 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:26.082 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:13:26.082 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:13:26.082 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:13:26.082 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:13:26.082 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:26.082 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:13:26.082 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:13:26.082 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:26.082 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:26.082 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:13:26.082 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:26.082 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:26.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.082 --rc genhtml_branch_coverage=1 00:13:26.082 --rc genhtml_function_coverage=1 00:13:26.082 --rc genhtml_legend=1 00:13:26.082 --rc geninfo_all_blocks=1 00:13:26.082 --rc geninfo_unexecuted_blocks=1 00:13:26.082 00:13:26.082 ' 00:13:26.082 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:26.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.082 --rc genhtml_branch_coverage=1 00:13:26.082 --rc genhtml_function_coverage=1 00:13:26.082 --rc genhtml_legend=1 00:13:26.082 --rc geninfo_all_blocks=1 00:13:26.082 --rc geninfo_unexecuted_blocks=1 00:13:26.082 00:13:26.082 ' 00:13:26.082 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:26.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.082 --rc genhtml_branch_coverage=1 00:13:26.082 --rc genhtml_function_coverage=1 00:13:26.083 --rc genhtml_legend=1 00:13:26.083 --rc geninfo_all_blocks=1 00:13:26.083 --rc geninfo_unexecuted_blocks=1 00:13:26.083 00:13:26.083 ' 00:13:26.083 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:26.083 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.083 --rc genhtml_branch_coverage=1 00:13:26.083 --rc genhtml_function_coverage=1 00:13:26.083 --rc genhtml_legend=1 00:13:26.083 --rc geninfo_all_blocks=1 00:13:26.083 --rc geninfo_unexecuted_blocks=1 00:13:26.083 00:13:26.083 ' 00:13:26.083 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:26.083 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:13:26.083 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:26.083 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:26.083 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:26.083 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:26.083 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:26.083 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:26.083 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:26.083 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:26.083 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:26.083 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:26.083 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:13:26.083 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:13:26.083 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:26.083 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:26.083 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:26.083 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:26.083 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:26.083 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:13:26.083 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:26.083 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:26.083 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:26.083 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.083 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.083 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.083 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:13:26.083 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.083 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:13:26.083 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:26.083 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:26.083 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:26.083 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:26.083 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:26.083 12:34:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:26.083 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:26.083 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:26.083 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:26.083 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:26.083 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:26.083 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:26.083 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:13:26.083 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:13:26.083 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:26.083 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # prepare_net_devs 00:13:26.083 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@434 -- # local -g is_hw=no 00:13:26.083 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # remove_spdk_ns 00:13:26.083 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.083 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:26.083 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.083 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:13:26.083 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:13:26.083 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:13:26.083 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:32.668 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:32.668 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:13:32.668 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:32.668 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:32.668 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:32.668 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:32.668 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:32.668 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:13:32.668 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:32.668 
12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:13:32.668 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:13:32.668 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:13:32.668 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:13:32.668 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:13:32.668 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:13:32.668 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:32.668 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:32.668 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:32.668 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:32.668 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:32.668 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:32.668 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:32.668 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:32.668 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:32.668 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:32.668 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:32.669 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:13:32.669 12:34:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:32.669 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ up == up ]] 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:32.669 Found net devices under 0000:af:00.0: cvl_0_0 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ up == up ]] 00:13:32.669 12:34:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:32.669 Found net devices under 0000:af:00.1: cvl_0_1 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # is_hw=yes 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 
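The commands traced above assemble the two-port loopback topology this suite uses on physical rigs: one E810 port (cvl_0_0) is moved into a private namespace and addressed as the target at 10.0.0.2/24, while its peer (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1/24, so NVMe/TCP traffic crosses real NIC hardware within a single host. Condensed from the trace (interface and namespace names are this rig's cvl_* aliases):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

The ipts rule just below opens TCP/4420 for traffic arriving on cvl_0_1, and the two pings confirm reachability in both directions before any NVMe traffic starts.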
00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:32.669 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:32.669 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.424 ms 00:13:32.669 00:13:32.669 --- 10.0.0.2 ping statistics --- 00:13:32.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.669 rtt min/avg/max/mdev = 0.424/0.424/0.424/0.000 ms 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:32.669 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:32.669 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:13:32.669 00:13:32.669 --- 10.0.0.1 ping statistics --- 00:13:32.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.669 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # return 0 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # nvmfpid=269997 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # waitforlisten 269997 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 269997 ']' 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:32.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:32.669 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:32.669 [2024-12-16 12:34:57.807206] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:32.670 [2024-12-16 12:34:57.807248] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:32.670 [2024-12-16 12:34:57.882086] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:32.670 [2024-12-16 12:34:57.921795] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:32.670 [2024-12-16 12:34:57.921836] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:32.670 [2024-12-16 12:34:57.921843] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:32.670 [2024-12-16 12:34:57.921849] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:32.670 [2024-12-16 12:34:57.921855] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
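Once the app is up, the target is provisioned over its RPC socket; rpc_cmd in the trace a little further below is the test suite's wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock. The same sequence as a standalone sketch, with arguments copied from the trace (bdev_malloc_create prints the name of the new bdev, Malloc0, which the script captures):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    scripts/rpc.py bdev_malloc_create 64 512
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420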
00:13:32.670 [2024-12-16 12:34:57.921935] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:32.670 [2024-12-16 12:34:57.922043] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.670 [2024-12-16 12:34:57.922045] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:13:32.670 [2024-12-16 12:34:57.921953] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:13:32.670 12:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:32.670 12:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:13:32.670 12:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:13:32.670 12:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:32.670 12:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:32.670 12:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:32.670 12:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:32.670 12:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.670 12:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:32.670 [2024-12-16 12:34:58.065675] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:32.670 12:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.670 12:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:13:32.670 12:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.670 12:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:32.670 12:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.670 12:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:13:32.670 12:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:32.670 12:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.670 12:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:32.670 12:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.670 12:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:32.670 12:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.670 12:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:32.670 12:34:58 
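connect_disconnect.sh then fixes num_iterations=100 and NVME_CONNECT='nvme connect -i 8' (both visible just below) and disables xtrace, so each iteration surfaces only as a single "disconnected 1 controller(s)" notice in the long stream that follows. The per-iteration shape, as a sketch: the connect arguments mirror the listener created above, -i 8 asks for 8 I/O queues, and the wait step is paraphrased (the real script polls with a helper until the namespace shows up):

    for i in $(seq 1 100); do
        nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        until nvme list | grep -q SPDKISFASTANDAWESOME; do sleep 0.1; done   # paraphrased wait
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    done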
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.670 12:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:32.670 12:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.670 12:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:32.670 [2024-12-16 12:34:58.116876] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:32.670 12:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.670 12:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:13:32.670 12:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:13:32.670 12:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:13:32.670 12:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:13:34.565 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.089 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:39.613 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.507 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.030 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:46.555 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:48.448 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.971 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.493 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.389 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:57.913 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:00.437 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.333 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:04.856 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.379 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:09.901 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:11.797 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.324 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:16.223 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.751 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.278 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.176 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.702 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.229 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.125 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:32.652 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:35.179 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.076 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.605 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:42.133 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.661 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.559 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.087 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:50.985 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.512 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.040 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:58.565 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:00.462 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:02.990 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.514 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:07.411 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:09.935 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:11.833 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.360 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:16.887 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:18.785 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.313 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:23.840 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:25.738 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:28.264 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:30.793 [2024-12-16 12:36:56.328111] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6f3d0 is same with the state(6) to be set 00:15:30.793 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:32.690 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:35.217 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:37.743 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:39.649 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:42.174 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:44.699 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:46.596 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:49.121 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:51.647 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:53.547 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:56.071 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:58.598 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:00.492 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:03.016 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:05.538 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:08.059 [2024-12-16 12:37:33.626091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6e3c0 is same with the state(6) to be set 00:16:08.059 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:09.950 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:12.470 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:14.993 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:16.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:19.410 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:21.303 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:23.826 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:26.349 [2024-12-16 
12:37:51.989095] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6e3c0 is same with the state(6) to be set 00:16:26.349 [2024-12-16 12:37:51.989132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6e3c0 is same with the state(6) to be set 00:16:26.349 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:28.872 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:30.766 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:33.288 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:35.182 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:37.706 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:40.232 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:42.757 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:44.654 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:47.179 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:49.701 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:51.598 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:54.123 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:56.649 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:58.546 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:01.071 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:03.595 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:05.492 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:08.017 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:10.542 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:12.440 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:14.967 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:16.863 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:19.390 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:21.921 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:24.448 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:24.448 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:17:24.448 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:17:24.448 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:24.448 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:17:24.448 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:24.448 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:17:24.448 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:24.448 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:24.448 rmmod nvme_tcp 00:17:24.448 rmmod nvme_fabrics 00:17:24.448 rmmod nvme_keyring 00:17:24.448 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:24.448 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:17:24.448 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@129 -- # return 0 00:17:24.448 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@513 -- # '[' -n 269997 ']' 00:17:24.448 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # killprocess 269997 00:17:24.448 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 269997 ']' 00:17:24.448 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 269997 00:17:24.448 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:17:24.448 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:24.448 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 269997 00:17:24.448 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:24.448 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:24.448 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 269997' 00:17:24.448 killing process with pid 269997 00:17:24.448 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 269997 00:17:24.448 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 269997 00:17:24.448 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:24.448 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:17:24.448 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:17:24.448 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:17:24.448 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@787 -- # iptables-save 00:17:24.448 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:17:24.448 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@787 -- # iptables-restore 00:17:24.448 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:24.448 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:24.448 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:24.448 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:24.448 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:26.356 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:26.356 00:17:26.356 real 4m0.800s 00:17:26.357 user 15m19.950s 00:17:26.357 sys 0m25.106s 00:17:26.357 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:26.357 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
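Teardown (nvmftestfini, traced above) unwinds in dependency order: unload nvme-tcp and nvme-fabrics on the host side, kill the target by pid and wait for it, then strip firewall state and addresses. Because every rule was inserted through the ipts wrapper with an '-m comment --comment SPDK_NVMF:...' tag, cleanup is a single filter pass rather than rule-by-rule bookkeeping. The idea, sketched (the netns delete is an assumption about what _remove_spdk_ns does; its body is not expanded in this trace):

    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only SPDK-tagged rules
    ip netns delete cvl_0_0_ns_spdk                        # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                               # as traced above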
common/autotest_common.sh@10 -- # set +x 00:17:26.357 ************************************ 00:17:26.357 END TEST nvmf_connect_disconnect 00:17:26.357 ************************************ 00:17:26.617 12:38:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:17:26.617 12:38:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:26.617 12:38:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:26.617 12:38:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:26.617 ************************************ 00:17:26.617 START TEST nvmf_multitarget 00:17:26.617 ************************************ 00:17:26.617 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:17:26.617 * Looking for test storage... 00:17:26.617 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:26.617 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:26.617 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lcov --version 00:17:26.617 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:26.617 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:26.617 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:26.617 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:26.617 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:26.617 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:17:26.617 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:17:26.617 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:17:26.617 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:17:26.617 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:17:26.617 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:17:26.617 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:17:26.617 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:26.617 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:17:26.617 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:17:26.617 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:26.617 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:26.617 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:17:26.617 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:17:26.617 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:26.617 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:17:26.617 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:17:26.617 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:17:26.617 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:17:26.617 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:26.617 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:17:26.617 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:17:26.617 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:26.617 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:26.617 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:17:26.617 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:26.617 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:26.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:26.617 --rc genhtml_branch_coverage=1 00:17:26.617 --rc genhtml_function_coverage=1 00:17:26.617 --rc genhtml_legend=1 00:17:26.618 --rc geninfo_all_blocks=1 00:17:26.618 --rc geninfo_unexecuted_blocks=1 00:17:26.618 00:17:26.618 ' 00:17:26.618 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:26.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:26.618 --rc genhtml_branch_coverage=1 00:17:26.618 --rc genhtml_function_coverage=1 00:17:26.618 --rc genhtml_legend=1 00:17:26.618 --rc geninfo_all_blocks=1 00:17:26.618 --rc geninfo_unexecuted_blocks=1 00:17:26.618 00:17:26.618 ' 00:17:26.618 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:26.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:26.618 --rc genhtml_branch_coverage=1 00:17:26.618 --rc genhtml_function_coverage=1 00:17:26.618 --rc genhtml_legend=1 00:17:26.618 --rc geninfo_all_blocks=1 00:17:26.618 --rc geninfo_unexecuted_blocks=1 00:17:26.618 00:17:26.618 ' 00:17:26.618 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:26.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:26.618 --rc genhtml_branch_coverage=1 00:17:26.618 --rc genhtml_function_coverage=1 00:17:26.618 --rc genhtml_legend=1 00:17:26.618 --rc geninfo_all_blocks=1 00:17:26.618 --rc geninfo_unexecuted_blocks=1 00:17:26.618 00:17:26.618 ' 00:17:26.618 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:26.618 12:38:52 
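The lcov probe above funnels into cmp_versions from scripts/common.sh, which splits version strings on '.', '-' and ':' and compares them field by field; 'lt 1.15 2' is asking whether the installed lcov predates 2.x. A simplified sketch of that comparison (dot-splitting only, not the exact implementation):

    version_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0   # earliest differing field decides
            (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    version_lt 1.15 2 && echo "lcov older than 2"   # true: 1 < 2 in the first field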
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:17:26.618 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:26.618 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:26.618 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:26.618 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:26.618 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:26.618 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:26.618 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:26.618 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:26.618 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:26.618 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:26.618 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:26.618 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:17:26.618 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:26.618 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:26.618 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:26.618 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:26.618 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:26.618 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:17:26.618 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:26.618 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:26.618 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:26.618 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.618 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.618 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.618 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:17:26.618 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.618 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:17:26.618 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:26.618 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:26.618 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:26.618 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:26.618 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:26.618 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:26.618 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:26.618 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:26.618 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:26.618 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:26.618 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:26.618 12:38:52 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:17:26.618 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:17:26.618 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:26.618 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:26.618 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@434 -- # local -g is_hw=no 00:17:26.618 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:26.618 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:26.618 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:26.618 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:26.618 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:17:26.618 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:17:26.618 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:17:26.618 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
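gather_supported_nvmf_pci_devs, traced here for the second time, never touches driver state; it classifies ports purely by PCI vendor:device ID (Intel 0x8086 with 0x1592/0x159b for E810 and 0x37d2 for X722, plus the 0x15b3 Mellanox ConnectX/BlueField parts listed above) and then maps each hit to its kernel net devices via sysfs, which is where the cvl_0_0/cvl_0_1 names come from. The sysfs side of that lookup, as a standalone sketch:

    intel=0x8086
    for pci in /sys/bus/pci/devices/*; do
        ven=$(<"$pci/vendor") dev=$(<"$pci/device")
        if [[ $ven == "$intel" && ( $dev == 0x1592 || $dev == 0x159b ) ]]; then
            # each matching port exposes its netdev name(s) under .../net/
            echo "E810 port ${pci##*/}: $(ls "$pci/net" 2>/dev/null)"
        fi
    done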
00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:33.190 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:33.190 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 
00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:33.190 Found net devices under 0000:af:00.0: cvl_0_0 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:33.190 Found net devices under 0000:af:00.1: cvl_0_1 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # is_hw=yes 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:33.190 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:33.191 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:33.191 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:33.191 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:33.191 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:33.191 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:33.191 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:33.191 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:33.191 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:33.191 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:33.191 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:33.191 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:33.191 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:33.191 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:33.191 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:33.191 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:33.191 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:33.191 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:33.191 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:33.191 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:33.191 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:33.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:33.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.442 ms 00:17:33.191 00:17:33.191 --- 10.0.0.2 ping statistics --- 00:17:33.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:33.191 rtt min/avg/max/mdev = 0.442/0.442/0.442/0.000 ms 00:17:33.191 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:33.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:33.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:17:33.191 00:17:33.191 --- 10.0.0.1 ping statistics --- 00:17:33.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:33.191 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:17:33.191 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:33.191 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # return 0 00:17:33.191 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:33.191 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:33.191 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:17:33.191 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:17:33.191 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:33.191 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:17:33.191 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:17:33.191 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:17:33.191 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:33.191 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:33.191 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:33.191 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # nvmfpid=312691 00:17:33.191 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:33.191 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # waitforlisten 312691 00:17:33.191 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 312691 ']' 00:17:33.191 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.191 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:33.191 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:33.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:33.191 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:33.191 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:33.191 [2024-12-16 12:38:58.626352] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:17:33.191 [2024-12-16 12:38:58.626395] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:33.191 [2024-12-16 12:38:58.697372] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:33.191 [2024-12-16 12:38:58.737659] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:33.191 [2024-12-16 12:38:58.737696] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:33.191 [2024-12-16 12:38:58.737704] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:33.191 [2024-12-16 12:38:58.737709] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:33.191 [2024-12-16 12:38:58.737714] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:33.191 [2024-12-16 12:38:58.741135] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:33.191 [2024-12-16 12:38:58.741169] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:33.191 [2024-12-16 12:38:58.741289] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.191 [2024-12-16 12:38:58.741290] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:17:33.191 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:33.191 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:17:33.191 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:33.191 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:33.191 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:33.191 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:33.191 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:33.191 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:33.191 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:17:33.191 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:17:33.191 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:17:33.191 "nvmf_tgt_1" 00:17:33.191 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:17:33.191 "nvmf_tgt_2" 00:17:33.191 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
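With the app up (four reactors, one per core of the 0xF mask), the multitarget test exercises the fact that a single nvmf_tgt process can host several independent targets: it creates nvmf_tgt_1 and nvmf_tgt_2 alongside the default target, then counts them via nvmf_get_targets piped through jq (traced next). A hedged sketch of that cycle, with the expected counts read off the jq checks in the trace; -s appears to set the per-target subsystem cap:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32        # second target instance
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32        # third
    [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]   # default + the two above
    $rpc nvmf_delete_target -n nvmf_tgt_1
    $rpc nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # back to the default only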
00:17:33.191 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:17:33.448 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:17:33.448 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:17:33.448 true 00:17:33.449 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:17:33.706 true 00:17:33.706 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:33.706 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:17:33.706 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:17:33.706 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:17:33.706 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:17:33.706 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:33.706 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:17:33.706 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:33.706 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:17:33.706 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:33.706 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:33.706 rmmod nvme_tcp 00:17:33.706 rmmod nvme_fabrics 00:17:33.706 rmmod nvme_keyring 00:17:33.706 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:33.706 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:17:33.706 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:17:33.706 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@513 -- # '[' -n 312691 ']' 00:17:33.706 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # killprocess 312691 00:17:33.706 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 312691 ']' 00:17:33.706 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 312691 00:17:33.706 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:17:33.706 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:33.706 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 312691 00:17:33.965 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:33.965 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:33.965 12:38:59 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 312691' 00:17:33.965 killing process with pid 312691 00:17:33.965 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 312691 00:17:33.965 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 312691 00:17:33.965 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:33.965 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:17:33.965 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:17:33.965 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:17:33.965 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@787 -- # iptables-save 00:17:33.965 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:17:33.965 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@787 -- # iptables-restore 00:17:33.965 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:33.965 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:33.965 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:33.965 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:33.965 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:36.501 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:36.501 00:17:36.501 real 0m9.566s 00:17:36.501 user 0m7.320s 00:17:36.501 sys 0m4.764s 00:17:36.501 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:36.501 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:36.501 ************************************ 00:17:36.501 END TEST nvmf_multitarget 00:17:36.501 ************************************ 00:17:36.501 12:39:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:36.501 12:39:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:36.501 12:39:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:36.501 12:39:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:36.501 ************************************ 00:17:36.501 START TEST nvmf_rpc 00:17:36.501 ************************************ 00:17:36.501 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:36.501 * Looking for test storage... 
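Before rpc.sh gets going, note how the multitarget run above wound itself down: killprocess took out the in-namespace nvmf_tgt, the iptr helper reloaded the firewall minus every rule tagged with the SPDK_NVMF comment, and _remove_spdk_ns dropped cvl_0_0_ns_spdk before the leftover initiator address was flushed. A hedged sketch of that cleanup — the ip netns delete step is an assumption about what _remove_spdk_ns boils down to, not a line from the trace:

    # Reload the firewall without the rules the test tagged via '-m comment'.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # Assumed body of _remove_spdk_ns: deleting the namespace returns cvl_0_0
    # to the default netns; the initiator-side address is flushed explicitly.
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1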
00:17:36.501 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:36.501 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:36.501 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:17:36.501 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:36.501 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:36.501 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:36.501 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:36.501 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:36.501 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:17:36.501 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:17:36.501 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:17:36.501 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:17:36.501 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:17:36.501 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:17:36.501 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:17:36.501 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:36.501 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:17:36.501 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:17:36.501 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:36.501 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:36.501 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:17:36.501 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:17:36.501 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:36.501 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:17:36.501 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:36.501 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:17:36.501 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:17:36.501 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:36.501 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:17:36.501 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:36.501 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:36.501 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:36.501 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:17:36.501 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:36.501 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:36.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.501 --rc genhtml_branch_coverage=1 00:17:36.501 --rc genhtml_function_coverage=1 00:17:36.501 --rc genhtml_legend=1 00:17:36.501 --rc geninfo_all_blocks=1 00:17:36.501 --rc geninfo_unexecuted_blocks=1 00:17:36.501 00:17:36.501 ' 00:17:36.501 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:36.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.501 --rc genhtml_branch_coverage=1 00:17:36.501 --rc genhtml_function_coverage=1 00:17:36.501 --rc genhtml_legend=1 00:17:36.501 --rc geninfo_all_blocks=1 00:17:36.501 --rc geninfo_unexecuted_blocks=1 00:17:36.501 00:17:36.501 ' 00:17:36.501 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:36.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.501 --rc genhtml_branch_coverage=1 00:17:36.501 --rc genhtml_function_coverage=1 00:17:36.501 --rc genhtml_legend=1 00:17:36.501 --rc geninfo_all_blocks=1 00:17:36.501 --rc geninfo_unexecuted_blocks=1 00:17:36.501 00:17:36.501 ' 00:17:36.501 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:36.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.501 --rc genhtml_branch_coverage=1 00:17:36.502 --rc genhtml_function_coverage=1 00:17:36.502 --rc genhtml_legend=1 00:17:36.502 --rc geninfo_all_blocks=1 00:17:36.502 --rc geninfo_unexecuted_blocks=1 00:17:36.502 00:17:36.502 ' 00:17:36.502 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:36.502 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:17:36.502 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
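The "lt 1.15 2" probe traced above is how the suite picks its lcov flag set: scripts/common.sh splits each version string on '.', '-' and ':' and compares the numeric fields left to right. A simplified, hedged reconstruction — the helper names match the trace, but the body is condensed and the authoritative version lives in scripts/common.sh:

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local IFS='.-:' op=$2
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]
    }
    lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # decided at field 0: 1 < 2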
00:17:36.502 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:36.502 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:36.502 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:36.502 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:36.502 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:36.502 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:36.502 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:36.502 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:36.502 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:36.502 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:36.502 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:17:36.502 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:36.502 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:36.502 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:36.502 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:36.502 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:36.502 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:17:36.502 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:36.502 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:36.502 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:36.502 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.502 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.502 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.502 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:17:36.502 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.502 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:17:36.502 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:36.502 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:36.502 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:36.502 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:36.502 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:36.502 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:36.502 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:36.502 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:36.502 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:36.502 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:36.502 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:17:36.502 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:17:36.502 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:17:36.502 12:39:02 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:36.502 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:36.502 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@434 -- # local -g is_hw=no 00:17:36.502 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:36.502 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:36.502 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:36.502 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:36.502 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:17:36.502 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:17:36.502 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:17:36.502 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:43.075 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:43.075 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:43.075 
12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:43.075 Found net devices under 0000:af:00.0: cvl_0_0 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:43.075 Found net devices under 0000:af:00.1: cvl_0_1 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # is_hw=yes 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:43.075 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:43.075 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:43.075 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:43.075 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:43.075 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:43.075 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:43.075 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:43.075 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:43.075 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:43.075 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:43.075 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.440 ms 00:17:43.075 00:17:43.075 --- 10.0.0.2 ping statistics --- 00:17:43.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.075 rtt min/avg/max/mdev = 0.440/0.440/0.440/0.000 ms 00:17:43.075 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:43.075 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:43.075 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:17:43.075 00:17:43.075 --- 10.0.0.1 ping statistics --- 00:17:43.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.075 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:17:43.075 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:43.075 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # return 0 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # nvmfpid=316924 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # waitforlisten 316924 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 316924 ']' 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:43.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.076 [2024-12-16 12:39:08.228429] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:17:43.076 [2024-12-16 12:39:08.228470] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:43.076 [2024-12-16 12:39:08.300515] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:43.076 [2024-12-16 12:39:08.341208] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:43.076 [2024-12-16 12:39:08.341246] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:43.076 [2024-12-16 12:39:08.341253] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:43.076 [2024-12-16 12:39:08.341258] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:43.076 [2024-12-16 12:39:08.341263] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:43.076 [2024-12-16 12:39:08.341308] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:43.076 [2024-12-16 12:39:08.341419] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:43.076 [2024-12-16 12:39:08.341523] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.076 [2024-12-16 12:39:08.341525] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:17:43.076 "tick_rate": 2100000000, 00:17:43.076 "poll_groups": [ 00:17:43.076 { 00:17:43.076 "name": "nvmf_tgt_poll_group_000", 00:17:43.076 "admin_qpairs": 0, 00:17:43.076 "io_qpairs": 0, 00:17:43.076 "current_admin_qpairs": 0, 00:17:43.076 "current_io_qpairs": 0, 00:17:43.076 "pending_bdev_io": 0, 00:17:43.076 "completed_nvme_io": 0, 00:17:43.076 "transports": [] 00:17:43.076 }, 00:17:43.076 { 00:17:43.076 "name": "nvmf_tgt_poll_group_001", 00:17:43.076 "admin_qpairs": 0, 00:17:43.076 "io_qpairs": 0, 00:17:43.076 "current_admin_qpairs": 0, 00:17:43.076 "current_io_qpairs": 0, 00:17:43.076 "pending_bdev_io": 0, 00:17:43.076 "completed_nvme_io": 0, 00:17:43.076 "transports": [] 00:17:43.076 }, 00:17:43.076 { 00:17:43.076 "name": "nvmf_tgt_poll_group_002", 00:17:43.076 "admin_qpairs": 0, 00:17:43.076 "io_qpairs": 0, 00:17:43.076 
"current_admin_qpairs": 0, 00:17:43.076 "current_io_qpairs": 0, 00:17:43.076 "pending_bdev_io": 0, 00:17:43.076 "completed_nvme_io": 0, 00:17:43.076 "transports": [] 00:17:43.076 }, 00:17:43.076 { 00:17:43.076 "name": "nvmf_tgt_poll_group_003", 00:17:43.076 "admin_qpairs": 0, 00:17:43.076 "io_qpairs": 0, 00:17:43.076 "current_admin_qpairs": 0, 00:17:43.076 "current_io_qpairs": 0, 00:17:43.076 "pending_bdev_io": 0, 00:17:43.076 "completed_nvme_io": 0, 00:17:43.076 "transports": [] 00:17:43.076 } 00:17:43.076 ] 00:17:43.076 }' 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.076 [2024-12-16 12:39:08.590332] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:17:43.076 "tick_rate": 2100000000, 00:17:43.076 "poll_groups": [ 00:17:43.076 { 00:17:43.076 "name": "nvmf_tgt_poll_group_000", 00:17:43.076 "admin_qpairs": 0, 00:17:43.076 "io_qpairs": 0, 00:17:43.076 "current_admin_qpairs": 0, 00:17:43.076 "current_io_qpairs": 0, 00:17:43.076 "pending_bdev_io": 0, 00:17:43.076 "completed_nvme_io": 0, 00:17:43.076 "transports": [ 00:17:43.076 { 00:17:43.076 "trtype": "TCP" 00:17:43.076 } 00:17:43.076 ] 00:17:43.076 }, 00:17:43.076 { 00:17:43.076 "name": "nvmf_tgt_poll_group_001", 00:17:43.076 "admin_qpairs": 0, 00:17:43.076 "io_qpairs": 0, 00:17:43.076 "current_admin_qpairs": 0, 00:17:43.076 "current_io_qpairs": 0, 00:17:43.076 "pending_bdev_io": 0, 00:17:43.076 "completed_nvme_io": 0, 00:17:43.076 "transports": [ 00:17:43.076 { 00:17:43.076 "trtype": "TCP" 00:17:43.076 } 00:17:43.076 ] 00:17:43.076 }, 00:17:43.076 { 00:17:43.076 "name": "nvmf_tgt_poll_group_002", 00:17:43.076 "admin_qpairs": 0, 00:17:43.076 "io_qpairs": 0, 00:17:43.076 "current_admin_qpairs": 0, 00:17:43.076 "current_io_qpairs": 0, 00:17:43.076 "pending_bdev_io": 0, 00:17:43.076 "completed_nvme_io": 0, 00:17:43.076 "transports": [ 00:17:43.076 { 00:17:43.076 "trtype": "TCP" 
00:17:43.076 } 00:17:43.076 ] 00:17:43.076 }, 00:17:43.076 { 00:17:43.076 "name": "nvmf_tgt_poll_group_003", 00:17:43.076 "admin_qpairs": 0, 00:17:43.076 "io_qpairs": 0, 00:17:43.076 "current_admin_qpairs": 0, 00:17:43.076 "current_io_qpairs": 0, 00:17:43.076 "pending_bdev_io": 0, 00:17:43.076 "completed_nvme_io": 0, 00:17:43.076 "transports": [ 00:17:43.076 { 00:17:43.076 "trtype": "TCP" 00:17:43.076 } 00:17:43.076 ] 00:17:43.076 } 00:17:43.076 ] 00:17:43.076 }' 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:17:43.076 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:43.077 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.077 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.077 Malloc1 00:17:43.077 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.077 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:43.077 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.077 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.077 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.077 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:43.077 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.077 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.077 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.077 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:17:43.077 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.077 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.077 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.077 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:43.077 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.077 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.077 [2024-12-16 12:39:08.757959] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:43.077 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.077 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:17:43.077 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:17:43.077 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:17:43.077 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:17:43.077 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:43.077 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:17:43.077 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:43.077 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:17:43.077 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:43.077 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:17:43.077 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:17:43.077 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:17:43.077 [2024-12-16 12:39:08.786409] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562' 00:17:43.077 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:43.077 could not add new controller: failed to write to nvme-fabrics device 00:17:43.077 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:17:43.077 12:39:08 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:43.077 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:43.077 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:43.077 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:43.077 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.077 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.077 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.077 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:44.009 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:17:44.009 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:44.009 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:44.009 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:44.009 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:45.905 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:45.905 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:45.905 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:45.905 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:45.905 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:45.905 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:45.905 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:46.163 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:46.163 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:46.163 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:46.163 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:46.163 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:46.163 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:46.163 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:46.163 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:46.163 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:46.163 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.163 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:46.163 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.163 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:46.163 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:17:46.163 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:46.163 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:17:46.163 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:46.163 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:17:46.163 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:46.163 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:17:46.163 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:46.163 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:17:46.163 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:17:46.163 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:46.163 [2024-12-16 12:39:12.092208] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562' 00:17:46.163 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:46.163 could not add new controller: failed to write to nvme-fabrics device 00:17:46.163 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:17:46.163 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:46.163 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:46.163 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:46.163 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:17:46.163 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.163 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:46.163 
12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.163 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:47.535 12:39:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:17:47.535 12:39:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:47.535 12:39:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:47.535 12:39:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:47.535 12:39:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:49.432 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:49.432 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:49.432 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:49.432 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:49.432 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:49.432 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:49.432 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:49.432 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:49.432 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:49.432 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:49.432 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:49.432 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:49.432 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:49.432 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:49.432 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:49.432 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:49.432 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.432 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:49.432 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.432 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:17:49.432 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:49.432 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:49.432 
12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.432 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:49.432 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.432 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:49.432 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.432 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:49.432 [2024-12-16 12:39:15.416605] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:49.432 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.432 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:49.432 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.432 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:49.432 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.432 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:49.432 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.432 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:49.432 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.432 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:50.803 12:39:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:50.803 12:39:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:50.803 12:39:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:50.803 12:39:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:50.803 12:39:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:52.699 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:52.699 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:52.699 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:52.699 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:52.699 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:52.699 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:52.699 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:52.699 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:52.699 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:52.699 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:52.699 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:52.700 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:52.700 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:52.700 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:52.700 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:52.700 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:52.700 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.700 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:52.957 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.957 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:52.957 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.957 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:52.957 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.957 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:52.957 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:52.957 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.957 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:52.957 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.957 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:52.957 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.957 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:52.957 [2024-12-16 12:39:18.787538] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:52.957 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.957 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:52.957 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.957 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:52.957 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.957 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:52.957 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.957 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:52.957 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.957 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:53.889 12:39:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:53.889 12:39:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:53.889 12:39:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:53.889 12:39:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:53.889 12:39:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:56.414 12:39:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:56.414 12:39:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:56.414 12:39:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:56.414 12:39:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:56.414 12:39:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:56.414 12:39:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:56.414 12:39:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:56.414 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:56.414 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:56.414 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:56.414 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:56.414 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:56.414 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:56.414 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:56.414 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:56.414 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:56.414 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.414 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:56.414 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.414 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:56.414 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.414 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:56.414 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.414 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:56.414 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:56.414 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.414 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:56.414 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.414 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:56.414 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.414 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:56.415 [2024-12-16 12:39:22.102251] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:56.415 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.415 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:56.415 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.415 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:56.415 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.415 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:56.415 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.415 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:56.415 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.415 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:57.347 12:39:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:57.347 12:39:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:57.347 12:39:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:57.347 12:39:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:57.347 12:39:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:59.242 
12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:59.242 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:59.242 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:59.500 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:59.500 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:59.500 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:59.500 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:59.500 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:59.500 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:59.500 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:59.500 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:59.500 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:59.500 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:59.500 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:59.500 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:59.500 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:59.500 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.500 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:59.500 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.500 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:59.500 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.500 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:59.500 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.500 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:59.500 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:59.500 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.500 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:59.500 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.500 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:59.500 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 
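The five-iteration loop traced here (target/rpc.sh@81-94) repeats one full subsystem lifecycle per pass. A minimal sketch of a single iteration, reconstructed from the xtrace above -- rpc_cmd, waitforserial and waitforserial_disconnect are test-harness helpers, NQNs, address, port and serial are exactly as logged, and loops=5 per the seq 1 5 call:

    for i in $(seq 1 $loops); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
            --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 \
            --hostid=801347e8-3fd0-e911-906e-0017a4403562
        waitforserial SPDKISFASTANDAWESOME             # block until the namespace shows up
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        waitforserial_disconnect SPDKISFASTANDAWESOME  # block until it is gone again
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done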
00:17:59.500 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:59.500 [2024-12-16 12:39:25.454914] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:59.500 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.500 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:59.501 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.501 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:59.501 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.501 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:59.501 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.501 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:59.501 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.501 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:00.871 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:00.871 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:18:00.871 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:00.871 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:00.871 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:18:02.791 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:02.791 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:02.791 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:02.791 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:02.791 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:02.791 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:18:02.791 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:02.791 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:02.791 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:02.791 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:18:02.791 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:02.791 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 
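The waitforserial helper being traced at this point simply polls lsblk for the target's serial number. Roughly, per the xtrace -- the initial sleep, the 15-try bound and the lsblk | grep probe are as logged, while the retry sleep and the failure return are assumptions, since this run succeeds on the first probe:

    waitforserial() {
        local serial=$1 i=0
        local nvme_device_counter=1 nvme_devices=0
        sleep 2                                        # let the connect settle first
        while (( i++ <= 15 )); do
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0
            sleep 2                                    # assumed retry delay
        done
        return 1                                       # assumed timeout path, never hit here
    }

waitforserial_disconnect does the inverse with grep -q -w, returning once the serial no longer appears in the lsblk listing.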
00:18:02.791 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:02.791 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:02.791 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:18:02.791 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:02.791 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.791 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.791 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.791 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:02.791 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.791 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.791 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.791 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:02.791 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:02.791 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.791 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.791 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.791 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:02.791 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.791 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.791 [2024-12-16 12:39:28.731280] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:02.791 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.791 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:02.791 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.791 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.791 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.791 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:02.791 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.791 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.791 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.791 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:04.162 12:39:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:04.162 12:39:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:18:04.162 12:39:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:04.162 12:39:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:04.162 12:39:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:18:06.059 12:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:06.059 12:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:06.059 12:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:06.059 12:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:06.059 12:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:06.059 12:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:18:06.059 12:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:06.059 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:06.059 12:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:06.059 12:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:18:06.059 12:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:06.059 12:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:06.059 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:06.059 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:06.059 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:18:06.059 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:06.059 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.059 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:06.059 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.059 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:06.059 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.059 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:06.059 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.059 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:18:06.059 
12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:06.059 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:06.059 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.059 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:06.059 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.059 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:06.059 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.059 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:06.059 [2024-12-16 12:39:32.069982] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:06.059 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.059 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:06.059 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.059 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:06.059 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.059 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:06.059 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.059 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:06.059 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.059 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:06.059 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.059 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:06.059 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.059 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:06.059 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.059 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:06.059 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.059 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:06.059 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:06.059 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.059 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:18:06.059 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.059 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:06.059 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.059 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:06.059 [2024-12-16 12:39:32.118088] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:06.059 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.059 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:06.059 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.059 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.318 
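This second pass (target/rpc.sh@99-107) drives the same subsystem RPCs five more times without ever connecting a host: create, listen, attach Malloc1, open it to any host, then detach and delete. A sketch from the trace; note that here the namespace is added without -n and removed as nsid 1:

    for i in $(seq 1 $loops); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done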
12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:06.318 [2024-12-16 12:39:32.166226] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:06.318 [2024-12-16 12:39:32.214381] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:06.318 [2024-12-16 12:39:32.262546] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.318 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:18:06.318 "tick_rate": 2100000000, 00:18:06.318 "poll_groups": [ 00:18:06.318 { 00:18:06.318 "name": "nvmf_tgt_poll_group_000", 00:18:06.318 "admin_qpairs": 2, 00:18:06.318 "io_qpairs": 168, 00:18:06.318 "current_admin_qpairs": 0, 00:18:06.318 "current_io_qpairs": 0, 00:18:06.318 "pending_bdev_io": 0, 00:18:06.318 "completed_nvme_io": 311, 00:18:06.318 "transports": [ 00:18:06.318 { 00:18:06.318 "trtype": "TCP" 00:18:06.318 } 00:18:06.318 ] 00:18:06.318 }, 00:18:06.318 { 00:18:06.318 "name": "nvmf_tgt_poll_group_001", 00:18:06.318 "admin_qpairs": 2, 00:18:06.318 "io_qpairs": 168, 00:18:06.319 "current_admin_qpairs": 0, 00:18:06.319 "current_io_qpairs": 0, 00:18:06.319 "pending_bdev_io": 0, 00:18:06.319 "completed_nvme_io": 268, 00:18:06.319 "transports": [ 00:18:06.319 { 00:18:06.319 "trtype": "TCP" 00:18:06.319 } 00:18:06.319 ] 00:18:06.319 }, 00:18:06.319 { 00:18:06.319 "name": "nvmf_tgt_poll_group_002", 00:18:06.319 "admin_qpairs": 1, 00:18:06.319 "io_qpairs": 168, 00:18:06.319 "current_admin_qpairs": 0, 00:18:06.319 "current_io_qpairs": 0, 00:18:06.319 "pending_bdev_io": 0, 00:18:06.319 "completed_nvme_io": 177, 00:18:06.319 "transports": [ 00:18:06.319 { 00:18:06.319 "trtype": "TCP" 00:18:06.319 } 00:18:06.319 ] 00:18:06.319 }, 00:18:06.319 { 00:18:06.319 "name": "nvmf_tgt_poll_group_003", 00:18:06.319 "admin_qpairs": 2, 00:18:06.319 "io_qpairs": 168, 00:18:06.319 "current_admin_qpairs": 0, 00:18:06.319 "current_io_qpairs": 0, 00:18:06.319 "pending_bdev_io": 0, 00:18:06.319 "completed_nvme_io": 266, 00:18:06.319 "transports": [ 00:18:06.319 { 00:18:06.319 "trtype": "TCP" 00:18:06.319 } 00:18:06.319 ] 00:18:06.319 } 00:18:06.319 ] 00:18:06.319 }' 00:18:06.319 12:39:32 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:18:06.319 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:18:06.319 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:18:06.319 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:06.319 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:18:06.319 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:18:06.319 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:18:06.319 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:18:06.319 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:06.577 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:18:06.577 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:18:06.577 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:18:06.577 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:18:06.577 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:06.577 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:18:06.577 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:06.577 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:18:06.577 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:06.577 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:06.577 rmmod nvme_tcp 00:18:06.577 rmmod nvme_fabrics 00:18:06.577 rmmod nvme_keyring 00:18:06.577 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:06.577 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:18:06.577 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:18:06.577 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@513 -- # '[' -n 316924 ']' 00:18:06.577 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # killprocess 316924 00:18:06.577 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 316924 ']' 00:18:06.577 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 316924 00:18:06.577 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:18:06.577 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:06.577 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 316924 00:18:06.577 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:06.577 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:06.577 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 316924' 
00:18:06.577 killing process with pid 316924 00:18:06.577 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 316924 00:18:06.577 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 316924 00:18:06.836 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:06.836 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:06.836 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:06.836 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:18:06.836 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@787 -- # iptables-save 00:18:06.836 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:06.836 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@787 -- # iptables-restore 00:18:06.836 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:06.836 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:06.836 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:06.836 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:06.836 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:08.741 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:08.741 00:18:08.741 real 0m32.710s 00:18:08.741 user 1m38.706s 00:18:08.741 sys 0m6.453s 00:18:08.741 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:08.741 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:08.741 ************************************ 00:18:08.741 END TEST nvmf_rpc 00:18:08.741 ************************************ 00:18:09.003 12:39:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:18:09.003 12:39:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:09.003 12:39:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:09.003 12:39:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:09.003 ************************************ 00:18:09.003 START TEST nvmf_invalid 00:18:09.003 ************************************ 00:18:09.003 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:18:09.003 * Looking for test storage... 
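(Aside: the jsum helper traced above is a jq+awk reduction over the RPC stats output; a minimal standalone sketch, with the sample JSON shape assumed here for illustration only:)

    # jsum: sum a numeric jq filter over JSON read from stdin.
    # jq prints one number per poll group; awk accumulates and prints the total.
    jsum() {
        local filter=$1
        jq "$filter" | awk '{s+=$1} END {print s}'
    }
    echo '{"poll_groups":[{"io_qpairs":2},{"io_qpairs":3}]}' \
        | jsum '.poll_groups[].io_qpairs'    # prints 5

(The teardown traced above also restores the firewall minus the test's own rules with the same filter-and-reload idiom: iptables-save | grep -v SPDK_NVMF | iptables-restore.)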
00:18:09.003 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:09.003 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:09.003 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lcov --version 00:18:09.003 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:09.003 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:09.003 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:09.003 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:09.003 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:09.003 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:18:09.003 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:18:09.003 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:18:09.003 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:18:09.003 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:18:09.003 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:18:09.003 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:18:09.003 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:09.003 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:18:09.003 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:18:09.003 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:09.003 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:09.003 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:18:09.003 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:18:09.003 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:09.003 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:18:09.003 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:18:09.003 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:18:09.003 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:18:09.003 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:09.003 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:18:09.003 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:18:09.003 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:09.003 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:09.003 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:18:09.003 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:09.003 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:09.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.003 --rc genhtml_branch_coverage=1 00:18:09.003 --rc genhtml_function_coverage=1 00:18:09.003 --rc genhtml_legend=1 00:18:09.003 --rc geninfo_all_blocks=1 00:18:09.003 --rc geninfo_unexecuted_blocks=1 00:18:09.003 00:18:09.003 ' 00:18:09.003 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:09.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.003 --rc genhtml_branch_coverage=1 00:18:09.003 --rc genhtml_function_coverage=1 00:18:09.003 --rc genhtml_legend=1 00:18:09.003 --rc geninfo_all_blocks=1 00:18:09.003 --rc geninfo_unexecuted_blocks=1 00:18:09.003 00:18:09.004 ' 00:18:09.004 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:09.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.004 --rc genhtml_branch_coverage=1 00:18:09.004 --rc genhtml_function_coverage=1 00:18:09.004 --rc genhtml_legend=1 00:18:09.004 --rc geninfo_all_blocks=1 00:18:09.004 --rc geninfo_unexecuted_blocks=1 00:18:09.004 00:18:09.004 ' 00:18:09.004 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:09.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.004 --rc genhtml_branch_coverage=1 00:18:09.004 --rc genhtml_function_coverage=1 00:18:09.004 --rc genhtml_legend=1 00:18:09.004 --rc geninfo_all_blocks=1 00:18:09.004 --rc geninfo_unexecuted_blocks=1 00:18:09.004 00:18:09.004 ' 00:18:09.004 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:09.004 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:18:09.004 12:39:35 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:09.004 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:09.004 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:09.004 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:09.004 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:09.004 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:09.004 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:09.004 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:09.004 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:09.004 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:09.004 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:09.004 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:18:09.004 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:09.004 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:09.004 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:09.004 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:09.004 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:09.004 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:18:09.004 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:09.004 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:09.004 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:09.004 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.004 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.004 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.004 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:18:09.004 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.004 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:18:09.004 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:09.004 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:09.004 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:09.004 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:09.004 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:09.004 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:09.004 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:09.004 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:09.004 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:09.004 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:09.004 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:18:09.004 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:09.004 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:18:09.004 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:18:09.004 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:18:09.004 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:18:09.004 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:18:09.004 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:09.004 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:09.004 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:09.004 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:09.004 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:09.004 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:09.004 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.004 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:18:09.004 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:18:09.004 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:18:09.004 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:15.579 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:15.579 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
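(Aside: the discovery loop traced here maps each supported NIC's PCI address to its kernel net device through sysfs; a minimal sketch of that step, assuming pci_devs holds addresses such as 0000:af:00.0:)

    # For every PCI address, glob the net devices registered under it,
    # then strip the sysfs path prefix to keep only the interface names.
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done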
00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ up == up ]] 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:15.579 Found net devices under 0000:af:00.0: cvl_0_0 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ up == up ]] 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:15.579 Found net devices under 0000:af:00.1: cvl_0_1 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # is_hw=yes 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 
-- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:15.579 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:15.580 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:15.580 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.435 ms 00:18:15.580 00:18:15.580 --- 10.0.0.2 ping statistics --- 00:18:15.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.580 rtt min/avg/max/mdev = 0.435/0.435/0.435/0.000 ms 00:18:15.580 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:15.580 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:15.580 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:18:15.580 00:18:15.580 --- 10.0.0.1 ping statistics --- 00:18:15.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.580 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:18:15.580 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:15.580 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # return 0 00:18:15.580 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:15.580 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:15.580 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:18:15.580 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:18:15.580 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:15.580 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:18:15.580 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:18:15.580 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:18:15.580 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:15.580 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:15.580 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:15.580 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # nvmfpid=324347 00:18:15.580 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:15.580 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # waitforlisten 324347 00:18:15.580 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 324347 ']' 00:18:15.580 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.580 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:15.580 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:15.580 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:15.580 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:15.580 [2024-12-16 12:39:40.998521] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:18:15.580 [2024-12-16 12:39:40.998564] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:15.580 [2024-12-16 12:39:41.070648] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:15.580 [2024-12-16 12:39:41.110860] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:15.580 [2024-12-16 12:39:41.110899] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:15.580 [2024-12-16 12:39:41.110906] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:15.580 [2024-12-16 12:39:41.110911] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:15.580 [2024-12-16 12:39:41.110917] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:15.580 [2024-12-16 12:39:41.110958] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:15.580 [2024-12-16 12:39:41.111073] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:15.580 [2024-12-16 12:39:41.111182] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:18:15.580 [2024-12-16 12:39:41.111180] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.580 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:15.580 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:18:15.580 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:15.580 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:15.580 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:15.580 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:15.580 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:15.580 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode7950 00:18:15.580 [2024-12-16 12:39:41.426550] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:18:15.580 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:18:15.580 { 00:18:15.580 "nqn": "nqn.2016-06.io.spdk:cnode7950", 00:18:15.580 "tgt_name": "foobar", 00:18:15.580 "method": "nvmf_create_subsystem", 00:18:15.580 "req_id": 1 00:18:15.580 } 00:18:15.580 Got JSON-RPC error response 00:18:15.580 response: 00:18:15.580 { 00:18:15.580 "code": -32603, 00:18:15.580 "message": "Unable to find target foobar" 00:18:15.580 }' 00:18:15.580 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:18:15.580 { 00:18:15.580 "nqn": "nqn.2016-06.io.spdk:cnode7950", 00:18:15.580 "tgt_name": "foobar", 00:18:15.580 "method": "nvmf_create_subsystem", 00:18:15.580 "req_id": 1 00:18:15.580 } 00:18:15.580 Got JSON-RPC error response 00:18:15.580 
response: 00:18:15.580 { 00:18:15.580 "code": -32603, 00:18:15.580 "message": "Unable to find target foobar" 00:18:15.580 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:18:15.580 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:18:15.580 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode21914 00:18:15.580 [2024-12-16 12:39:41.643273] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21914: invalid serial number 'SPDKISFASTANDAWESOME' 00:18:15.838 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:18:15.838 { 00:18:15.838 "nqn": "nqn.2016-06.io.spdk:cnode21914", 00:18:15.838 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:18:15.838 "method": "nvmf_create_subsystem", 00:18:15.838 "req_id": 1 00:18:15.838 } 00:18:15.838 Got JSON-RPC error response 00:18:15.838 response: 00:18:15.838 { 00:18:15.838 "code": -32602, 00:18:15.838 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:18:15.838 }' 00:18:15.838 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:18:15.838 { 00:18:15.838 "nqn": "nqn.2016-06.io.spdk:cnode21914", 00:18:15.838 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:18:15.838 "method": "nvmf_create_subsystem", 00:18:15.838 "req_id": 1 00:18:15.838 } 00:18:15.838 Got JSON-RPC error response 00:18:15.838 response: 00:18:15.838 { 00:18:15.838 "code": -32602, 00:18:15.838 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:18:15.838 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:18:15.838 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:18:15.838 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode14877 00:18:15.838 [2024-12-16 12:39:41.843959] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14877: invalid model number 'SPDK_Controller' 00:18:15.838 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:18:15.838 { 00:18:15.838 "nqn": "nqn.2016-06.io.spdk:cnode14877", 00:18:15.838 "model_number": "SPDK_Controller\u001f", 00:18:15.838 "method": "nvmf_create_subsystem", 00:18:15.838 "req_id": 1 00:18:15.838 } 00:18:15.838 Got JSON-RPC error response 00:18:15.838 response: 00:18:15.838 { 00:18:15.838 "code": -32602, 00:18:15.838 "message": "Invalid MN SPDK_Controller\u001f" 00:18:15.838 }' 00:18:15.838 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:18:15.838 { 00:18:15.838 "nqn": "nqn.2016-06.io.spdk:cnode14877", 00:18:15.838 "model_number": "SPDK_Controller\u001f", 00:18:15.838 "method": "nvmf_create_subsystem", 00:18:15.838 "req_id": 1 00:18:15.838 } 00:18:15.838 Got JSON-RPC error response 00:18:15.838 response: 00:18:15.838 { 00:18:15.838 "code": -32602, 00:18:15.838 "message": "Invalid MN SPDK_Controller\u001f" 00:18:15.838 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:18:15.838 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:18:15.838 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:18:15.838 12:39:41 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:18:15.838 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:18:15.838 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:18:15.838 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:18:15.838 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:15.838 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:18:15.838 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:18:15.838 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:18:15.838 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:15.838 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:15.838 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:18:15.838 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:18:15.838 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
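(Aside: the character-append iterations continue below; each pass uses the printf/echo idiom sketched here to turn one decimal ASCII code into a glyph:)

    # printf %x renders the decimal code in hex; echo -e expands the \xNN escape.
    code=33
    char=$(echo -e "\\x$(printf %x "$code")")    # char='!'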
00:18:15.838 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:15.838 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:15.838 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:18:15.838 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:18:15.838 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:18:15.838 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:15.838 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:15.838 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x3e' 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf 
%x 96 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:18:16.096 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:18:16.096 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:18:16.096 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.096 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.096 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:18:16.096 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:18:16.097 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:18:16.097 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.097 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.097 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:18:16.097 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:18:16.097 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:18:16.097 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.097 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.097 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 8 == \- ]] 00:18:16.097 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '8!/m@vjn>wEwiY2`PK#1[' 00:18:16.097 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '8!/m@vjn>wEwiY2`PK#1[' nqn.2016-06.io.spdk:cnode17448 00:18:16.355 [2024-12-16 12:39:42.181109] nvmf_rpc.c: 
413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17448: invalid serial number '8!/m@vjn>wEwiY2`PK#1[' 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:18:16.355 { 00:18:16.355 "nqn": "nqn.2016-06.io.spdk:cnode17448", 00:18:16.355 "serial_number": "8!/m@vjn>wEwiY2`PK#1[", 00:18:16.355 "method": "nvmf_create_subsystem", 00:18:16.355 "req_id": 1 00:18:16.355 } 00:18:16.355 Got JSON-RPC error response 00:18:16.355 response: 00:18:16.355 { 00:18:16.355 "code": -32602, 00:18:16.355 "message": "Invalid SN 8!/m@vjn>wEwiY2`PK#1[" 00:18:16.355 }' 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:18:16.355 { 00:18:16.355 "nqn": "nqn.2016-06.io.spdk:cnode17448", 00:18:16.355 "serial_number": "8!/m@vjn>wEwiY2`PK#1[", 00:18:16.355 "method": "nvmf_create_subsystem", 00:18:16.355 "req_id": 1 00:18:16.355 } 00:18:16.355 Got JSON-RPC error response 00:18:16.355 response: 00:18:16.355 { 00:18:16.355 "code": -32602, 00:18:16.355 "message": "Invalid SN 8!/m@vjn>wEwiY2`PK#1[" 00:18:16.355 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 
00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:18:16.355 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
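[Editor's note] A note while the loop runs on below: the length argument is deliberate. gen_random_s 41 is assembling a model-number probe one byte longer than the 40-byte MN field of the NVMe Identify Controller data, just as the earlier 21-character serial number overflowed the 20-byte SN field. The finished string is handed to nvmf_create_subsystem -d further down, where the target rejects it with 'Invalid MN'. Restated as plain bounds checks (a hypothetical restatement; the field sizes come from the NVMe spec, not from this log):

    sn='8!/m@vjn>wEwiY2`PK#1['      # 21 chars, one over the 20-byte SN field
    (( ${#sn} <= 20 )) || echo "Invalid SN $sn"
    mn=$(gen_random_s 41)           # 41 chars, one over the 40-byte MN field
    (( ${#mn} <= 40 )) || echo "Invalid MN $mn"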
00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+='>' 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- 
# echo -e '\x28' 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.356 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.614 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:18:16.614 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:18:16.614 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:18:16.614 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.614 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.614 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:18:16.614 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:18:16.614 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:18:16.614 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.614 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.614 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:18:16.614 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:18:16.614 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:18:16.614 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.614 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.614 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:18:16.614 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:18:16.614 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:18:16.614 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.614 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.614 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- 
# printf %x 88 00:18:16.614 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:18:16.614 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:18:16.614 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.614 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.614 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:18:16.614 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:18:16.614 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:18:16.614 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.614 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.614 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:18:16.614 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:18:16.614 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:18:16.614 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.614 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.614 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:18:16.614 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:18:16.614 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:18:16.615 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.615 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.615 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:18:16.615 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:18:16.615 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:18:16.615 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.615 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.615 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 1 == \- ]] 00:18:16.615 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '1LW_R8=jiV=\|nlgrR(WM"FicXjkF,' 00:18:16.615 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '1LW_R8=jiV=\|nlgrR(WM"FicXjkF,' nqn.2016-06.io.spdk:cnode16611 00:18:16.615 [2024-12-16 12:39:42.638615] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16611: invalid model number '1LW_R8=jiV=\|nlgrR(WM"FicXjkF,' 00:18:16.615 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:18:16.615 { 00:18:16.615 "nqn": "nqn.2016-06.io.spdk:cnode16611", 00:18:16.615 "model_number": "1LW_R8=jiV=\\|nlgrR(WM\"FicXjkF,", 00:18:16.615 "method": "nvmf_create_subsystem", 00:18:16.615 "req_id": 1 00:18:16.615 } 00:18:16.615 Got JSON-RPC error 
response 00:18:16.615 response: 00:18:16.615 { 00:18:16.615 "code": -32602, 00:18:16.615 "message": "Invalid MN 1LW_R8=jiV=\\|nlgrR(WM\"FicXjkF," 00:18:16.615 }' 00:18:16.615 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:18:16.615 { 00:18:16.615 "nqn": "nqn.2016-06.io.spdk:cnode16611", 00:18:16.615 "model_number": "1LW_R8=jiV=\\|nlgrR(WM\"FicXjkF,", 00:18:16.615 "method": "nvmf_create_subsystem", 00:18:16.615 "req_id": 1 00:18:16.615 } 00:18:16.615 Got JSON-RPC error response 00:18:16.615 response: 00:18:16.615 { 00:18:16.615 "code": -32602, 00:18:16.615 "message": "Invalid MN 1LW_R8=jiV=\\|nlgrR(WM\"FicXjkF," 00:18:16.615 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:18:16.615 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:18:16.872 [2024-12-16 12:39:42.851384] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:16.872 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:18:17.129 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:18:17.129 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:18:17.129 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:18:17.129 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:18:17.129 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:18:17.386 [2024-12-16 12:39:43.264734] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:18:17.386 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:18:17.386 { 00:18:17.386 "nqn": "nqn.2016-06.io.spdk:cnode", 00:18:17.386 "listen_address": { 00:18:17.386 "trtype": "tcp", 00:18:17.386 "traddr": "", 00:18:17.386 "trsvcid": "4421" 00:18:17.386 }, 00:18:17.386 "method": "nvmf_subsystem_remove_listener", 00:18:17.386 "req_id": 1 00:18:17.386 } 00:18:17.386 Got JSON-RPC error response 00:18:17.386 response: 00:18:17.386 { 00:18:17.386 "code": -32602, 00:18:17.386 "message": "Invalid parameters" 00:18:17.386 }' 00:18:17.386 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:18:17.387 { 00:18:17.387 "nqn": "nqn.2016-06.io.spdk:cnode", 00:18:17.387 "listen_address": { 00:18:17.387 "trtype": "tcp", 00:18:17.387 "traddr": "", 00:18:17.387 "trsvcid": "4421" 00:18:17.387 }, 00:18:17.387 "method": "nvmf_subsystem_remove_listener", 00:18:17.387 "req_id": 1 00:18:17.387 } 00:18:17.387 Got JSON-RPC error response 00:18:17.387 response: 00:18:17.387 { 00:18:17.387 "code": -32602, 00:18:17.387 "message": "Invalid parameters" 00:18:17.387 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:18:17.387 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5500 -i 0 00:18:17.644 [2024-12-16 12:39:43.461392] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5500: invalid cntlid 
range [0-65519] 00:18:17.644 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:18:17.644 { 00:18:17.644 "nqn": "nqn.2016-06.io.spdk:cnode5500", 00:18:17.644 "min_cntlid": 0, 00:18:17.644 "method": "nvmf_create_subsystem", 00:18:17.644 "req_id": 1 00:18:17.644 } 00:18:17.644 Got JSON-RPC error response 00:18:17.644 response: 00:18:17.644 { 00:18:17.644 "code": -32602, 00:18:17.644 "message": "Invalid cntlid range [0-65519]" 00:18:17.644 }' 00:18:17.644 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:18:17.644 { 00:18:17.644 "nqn": "nqn.2016-06.io.spdk:cnode5500", 00:18:17.644 "min_cntlid": 0, 00:18:17.644 "method": "nvmf_create_subsystem", 00:18:17.644 "req_id": 1 00:18:17.644 } 00:18:17.644 Got JSON-RPC error response 00:18:17.644 response: 00:18:17.644 { 00:18:17.644 "code": -32602, 00:18:17.644 "message": "Invalid cntlid range [0-65519]" 00:18:17.644 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:17.644 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24249 -i 65520 00:18:17.644 [2024-12-16 12:39:43.658049] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24249: invalid cntlid range [65520-65519] 00:18:17.644 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:18:17.644 { 00:18:17.644 "nqn": "nqn.2016-06.io.spdk:cnode24249", 00:18:17.644 "min_cntlid": 65520, 00:18:17.644 "method": "nvmf_create_subsystem", 00:18:17.644 "req_id": 1 00:18:17.644 } 00:18:17.644 Got JSON-RPC error response 00:18:17.644 response: 00:18:17.644 { 00:18:17.644 "code": -32602, 00:18:17.644 "message": "Invalid cntlid range [65520-65519]" 00:18:17.644 }' 00:18:17.644 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:18:17.644 { 00:18:17.644 "nqn": "nqn.2016-06.io.spdk:cnode24249", 00:18:17.644 "min_cntlid": 65520, 00:18:17.644 "method": "nvmf_create_subsystem", 00:18:17.644 "req_id": 1 00:18:17.644 } 00:18:17.644 Got JSON-RPC error response 00:18:17.644 response: 00:18:17.644 { 00:18:17.644 "code": -32602, 00:18:17.644 "message": "Invalid cntlid range [65520-65519]" 00:18:17.644 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:17.644 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31299 -I 0 00:18:17.903 [2024-12-16 12:39:43.842653] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31299: invalid cntlid range [1-0] 00:18:17.903 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:18:17.903 { 00:18:17.903 "nqn": "nqn.2016-06.io.spdk:cnode31299", 00:18:17.903 "max_cntlid": 0, 00:18:17.903 "method": "nvmf_create_subsystem", 00:18:17.903 "req_id": 1 00:18:17.903 } 00:18:17.903 Got JSON-RPC error response 00:18:17.903 response: 00:18:17.903 { 00:18:17.903 "code": -32602, 00:18:17.903 "message": "Invalid cntlid range [1-0]" 00:18:17.903 }' 00:18:17.903 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:18:17.903 { 00:18:17.903 "nqn": "nqn.2016-06.io.spdk:cnode31299", 00:18:17.903 "max_cntlid": 0, 00:18:17.903 "method": "nvmf_create_subsystem", 00:18:17.903 "req_id": 1 00:18:17.903 } 00:18:17.903 
Got JSON-RPC error response 00:18:17.903 response: 00:18:17.903 { 00:18:17.903 "code": -32602, 00:18:17.903 "message": "Invalid cntlid range [1-0]" 00:18:17.903 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:17.903 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode90 -I 65520 00:18:18.160 [2024-12-16 12:39:44.047369] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode90: invalid cntlid range [1-65520] 00:18:18.160 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:18:18.160 { 00:18:18.160 "nqn": "nqn.2016-06.io.spdk:cnode90", 00:18:18.160 "max_cntlid": 65520, 00:18:18.160 "method": "nvmf_create_subsystem", 00:18:18.160 "req_id": 1 00:18:18.160 } 00:18:18.160 Got JSON-RPC error response 00:18:18.160 response: 00:18:18.160 { 00:18:18.160 "code": -32602, 00:18:18.160 "message": "Invalid cntlid range [1-65520]" 00:18:18.160 }' 00:18:18.160 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:18:18.160 { 00:18:18.160 "nqn": "nqn.2016-06.io.spdk:cnode90", 00:18:18.160 "max_cntlid": 65520, 00:18:18.160 "method": "nvmf_create_subsystem", 00:18:18.160 "req_id": 1 00:18:18.160 } 00:18:18.160 Got JSON-RPC error response 00:18:18.160 response: 00:18:18.160 { 00:18:18.160 "code": -32602, 00:18:18.160 "message": "Invalid cntlid range [1-65520]" 00:18:18.160 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:18.160 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5915 -i 6 -I 5 00:18:18.418 [2024-12-16 12:39:44.268157] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5915: invalid cntlid range [6-5] 00:18:18.418 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:18:18.418 { 00:18:18.418 "nqn": "nqn.2016-06.io.spdk:cnode5915", 00:18:18.418 "min_cntlid": 6, 00:18:18.418 "max_cntlid": 5, 00:18:18.418 "method": "nvmf_create_subsystem", 00:18:18.418 "req_id": 1 00:18:18.418 } 00:18:18.418 Got JSON-RPC error response 00:18:18.418 response: 00:18:18.418 { 00:18:18.418 "code": -32602, 00:18:18.418 "message": "Invalid cntlid range [6-5]" 00:18:18.418 }' 00:18:18.418 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:18:18.418 { 00:18:18.418 "nqn": "nqn.2016-06.io.spdk:cnode5915", 00:18:18.418 "min_cntlid": 6, 00:18:18.418 "max_cntlid": 5, 00:18:18.418 "method": "nvmf_create_subsystem", 00:18:18.418 "req_id": 1 00:18:18.418 } 00:18:18.418 Got JSON-RPC error response 00:18:18.418 response: 00:18:18.418 { 00:18:18.418 "code": -32602, 00:18:18.418 "message": "Invalid cntlid range [6-5]" 00:18:18.418 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:18.418 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:18:18.418 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:18:18.418 { 00:18:18.418 "name": "foobar", 00:18:18.418 "method": "nvmf_delete_target", 00:18:18.418 "req_id": 1 00:18:18.418 } 00:18:18.418 Got JSON-RPC error response 00:18:18.418 response: 00:18:18.418 { 00:18:18.418 "code": 
-32602, 00:18:18.418 "message": "The specified target doesn'\''t exist, cannot delete it." 00:18:18.418 }' 00:18:18.418 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:18:18.418 { 00:18:18.418 "name": "foobar", 00:18:18.418 "method": "nvmf_delete_target", 00:18:18.418 "req_id": 1 00:18:18.418 } 00:18:18.418 Got JSON-RPC error response 00:18:18.418 response: 00:18:18.418 { 00:18:18.418 "code": -32602, 00:18:18.418 "message": "The specified target doesn't exist, cannot delete it." 00:18:18.418 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:18:18.418 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:18:18.418 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:18:18.418 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:18.418 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:18:18.418 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:18.418 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:18:18.418 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:18.418 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:18.418 rmmod nvme_tcp 00:18:18.418 rmmod nvme_fabrics 00:18:18.418 rmmod nvme_keyring 00:18:18.418 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:18.418 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:18:18.418 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:18:18.418 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@513 -- # '[' -n 324347 ']' 00:18:18.418 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@514 -- # killprocess 324347 00:18:18.418 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 324347 ']' 00:18:18.418 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 324347 00:18:18.418 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:18:18.418 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:18.418 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 324347 00:18:18.677 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:18.677 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:18.677 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 324347' 00:18:18.677 killing process with pid 324347 00:18:18.677 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 324347 00:18:18.677 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 324347 00:18:18.677 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:18.677 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:18.677 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:18.677 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:18:18.677 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:18.677 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@787 -- # iptables-save 00:18:18.677 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@787 -- # iptables-restore 00:18:18.677 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:18.677 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:18.677 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:18.677 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:18.677 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:21.240 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:21.240 00:18:21.240 real 0m11.945s 00:18:21.240 user 0m18.428s 00:18:21.240 sys 0m5.385s 00:18:21.240 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:21.240 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:21.240 ************************************ 00:18:21.240 END TEST nvmf_invalid 00:18:21.240 ************************************ 00:18:21.240 12:39:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:18:21.240 12:39:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:21.240 12:39:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:21.240 12:39:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:21.240 ************************************ 00:18:21.240 START TEST nvmf_connect_stress 00:18:21.240 ************************************ 00:18:21.240 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:18:21.240 * Looking for test storage... 
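[Editor's note] Every negative test in the suite that just finished follows the same shape: bad serial number, bad model number, the unremovable listener, the cntlid probes (min 0, min 65520, max 0, max 65520, and min 6 against max 5, which together pin the accepted controller-ID range to 1 through 65519), and the bogus delete-target. Each fires a deliberately invalid JSON-RPC request, captures the code -32602 response, and glob-matches the message before nvmftestfini tears the target down. Stripped of xtrace noise, one such check is roughly (workspace path shortened):

    out=$(scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5500 -i 0 2>&1) || true
    [[ $out == *"Invalid cntlid range"* ]] || exit 1   # expect JSON-RPC error -32602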
00:18:21.240 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:21.240 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:21.240 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:18:21.240 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:21.240 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:21.240 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:21.240 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:21.240 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:21.240 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:18:21.240 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:18:21.240 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:18:21.240 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:18:21.240 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:18:21.240 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:18:21.240 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:18:21.240 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:21.240 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:18:21.240 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:18:21.240 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:21.240 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:21.240 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:18:21.240 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:18:21.241 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:21.241 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:18:21.241 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:18:21.241 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:18:21.241 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:18:21.241 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:21.241 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:18:21.241 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:18:21.241 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:21.241 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:21.241 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:18:21.241 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:21.241 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:21.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:21.241 --rc genhtml_branch_coverage=1 00:18:21.241 --rc genhtml_function_coverage=1 00:18:21.241 --rc genhtml_legend=1 00:18:21.241 --rc geninfo_all_blocks=1 00:18:21.241 --rc geninfo_unexecuted_blocks=1 00:18:21.241 00:18:21.241 ' 00:18:21.241 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:21.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:21.241 --rc genhtml_branch_coverage=1 00:18:21.241 --rc genhtml_function_coverage=1 00:18:21.241 --rc genhtml_legend=1 00:18:21.241 --rc geninfo_all_blocks=1 00:18:21.241 --rc geninfo_unexecuted_blocks=1 00:18:21.241 00:18:21.241 ' 00:18:21.241 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:21.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:21.241 --rc genhtml_branch_coverage=1 00:18:21.241 --rc genhtml_function_coverage=1 00:18:21.241 --rc genhtml_legend=1 00:18:21.241 --rc geninfo_all_blocks=1 00:18:21.241 --rc geninfo_unexecuted_blocks=1 00:18:21.241 00:18:21.241 ' 00:18:21.241 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:21.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:21.241 --rc genhtml_branch_coverage=1 00:18:21.241 --rc genhtml_function_coverage=1 00:18:21.241 --rc genhtml_legend=1 00:18:21.241 --rc geninfo_all_blocks=1 00:18:21.241 --rc geninfo_unexecuted_blocks=1 00:18:21.241 00:18:21.241 ' 00:18:21.241 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:21.241 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:18:21.241 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:21.241 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:21.241 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:21.241 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:21.241 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:21.241 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:21.241 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:21.241 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:21.241 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:21.241 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:21.241 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:21.241 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:18:21.241 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:21.241 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:21.241 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:21.241 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:21.241 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:21.241 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:18:21.241 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:21.241 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:21.241 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:21.241 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.241 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.241 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.241 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:18:21.241 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.241 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:18:21.241 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:21.241 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:21.241 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:21.241 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:21.241 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:21.241 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:21.241 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:21.241 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:21.241 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:21.241 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:21.241 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:18:21.241 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:18:21.241 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:21.241 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:21.241 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:21.241 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:21.241 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:21.241 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:21.241 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:21.241 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:18:21.241 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:18:21.241 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:18:21.241 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:26.555 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:26.555 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:18:26.555 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:26.555 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:26.555 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:26.555 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:26.555 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:26.555 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:18:26.555 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:26.555 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:18:26.555 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:18:26.555 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:18:26.555 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:18:26.555 12:39:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:18:26.555 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:18:26.555 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:26.555 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:26.555 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:26.555 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:26.555 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:26.555 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:26.555 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:26.555 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:26.555 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:26.555 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:26.555 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:26.555 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:18:26.817 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:18:26.817 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:18:26.817 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:18:26.817 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:18:26.817 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:18:26.817 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:26.817 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:26.817 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:26.817 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:18:26.817 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:18:26.817 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:26.817 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:26.817 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:18:26.817 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:26.817 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@365 -- # 
echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:26.817 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:26.817 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:18:26.817 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:18:26.817 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:26.817 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:26.817 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:18:26.817 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:18:26.817 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:18:26.817 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:18:26.817 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:26.817 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:26.817 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:18:26.817 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:26.817 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:18:26.817 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:26.817 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:26.817 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:26.817 Found net devices under 0000:af:00.0: cvl_0_0 00:18:26.817 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:26.817 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:26.817 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:26.817 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:18:26.817 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:26.817 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:18:26.817 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:26.817 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:26.817 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:26.817 Found net devices under 0000:af:00.1: cvl_0_1 00:18:26.817 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:26.817 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # (( 2 == 0 )) 
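[Editor's note] The discovery pass that just completed (gather_supported_nvmf_pci_devs) matched the two Intel E810 functions at 0000:af:00.0/.1 against a table of supported device IDs and resolved each to its kernel netdev through sysfs, yielding cvl_0_0 and cvl_0_1. Reduced to its core, the resolution looks like the sketch below; the link-state test is an assumption (the [[ up == up ]] comparisons above suggest an operstate read):

    pci=0000:af:00.0                      # first of the two 0x8086:0x159b ports found
    for path in /sys/bus/pci/devices/$pci/net/*; do
        [[ -e $path ]] || continue        # no netdev bound to this function
        dev=${path##*/}                   # sysfs entry name is the netdev, e.g. cvl_0_0
        [[ $(cat /sys/class/net/$dev/operstate) == up ]] &&
            echo "Found net devices under $pci: $dev"
    done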
00:18:26.817 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # is_hw=yes 00:18:26.817 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:18:26.817 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:18:26.818 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:18:26.818 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:26.818 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:26.818 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:26.818 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:26.818 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:26.818 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:26.818 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:26.818 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:26.818 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:26.818 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:26.818 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:26.818 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:26.818 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:26.818 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:26.818 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:26.818 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:26.818 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:26.818 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:26.818 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:26.818 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:26.818 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:26.818 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:26.818 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping 
-c 1 10.0.0.2 00:18:26.818 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:26.818 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.341 ms 00:18:26.818 00:18:26.818 --- 10.0.0.2 ping statistics --- 00:18:26.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:26.818 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:18:26.818 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:26.818 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:26.818 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:18:26.818 00:18:26.818 --- 10.0.0.1 ping statistics --- 00:18:26.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:26.818 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:18:26.818 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:26.818 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # return 0 00:18:26.818 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:26.818 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:26.818 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:18:26.818 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:18:26.818 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:26.818 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:18:26.818 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:18:27.082 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:18:27.082 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:27.082 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:27.082 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:27.082 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # nvmfpid=328667 00:18:27.082 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # waitforlisten 328667 00:18:27.082 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:27.082 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 328667 ']' 00:18:27.082 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.082 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:27.082 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:27.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
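nvmf_tcp_init, traced above, wires the two back-to-back E810 ports into a loopback topology on a single host: the target-side port is moved into a fresh network namespace so the kernel actually routes NVMe/TCP traffic over the wire, and an SPDK-tagged iptables rule opens port 4420 on the initiator side. Condensed from the commands in the trace (interface names and addresses exactly as logged):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

The nvmf_tgt application is then launched under `ip netns exec cvl_0_0_ns_spdk` (the NVMF_TARGET_NS_CMD prefix), which is why the listener at 10.0.0.2:4420 is reachable from the root namespace as if it were a remote target.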
00:18:27.082 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:27.082 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:27.082 [2024-12-16 12:39:52.956437] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:18:27.082 [2024-12-16 12:39:52.956480] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:27.082 [2024-12-16 12:39:53.028948] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:27.082 [2024-12-16 12:39:53.067345] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:27.082 [2024-12-16 12:39:53.067386] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:27.082 [2024-12-16 12:39:53.067394] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:27.082 [2024-12-16 12:39:53.067399] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:27.082 [2024-12-16 12:39:53.067405] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:27.082 [2024-12-16 12:39:53.067521] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:27.082 [2024-12-16 12:39:53.067643] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:27.082 [2024-12-16 12:39:53.067644] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:18:27.346 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:27.346 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:18:27.346 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:27.346 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:27.346 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:27.346 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:27.346 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:27.346 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.346 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:27.346 [2024-12-16 12:39:53.209857] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:27.346 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.346 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:27.346 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.346 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:27.346 
12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.346 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:27.346 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.346 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:27.346 [2024-12-16 12:39:53.253416] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:27.346 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.346 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:27.346 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.346 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:27.346 NULL1 00:18:27.346 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.346 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=328706 00:18:27.346 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:27.346 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:18:27.346 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:27.346 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:18:27.346 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:27.346 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:27.346 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:27.346 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:27.346 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:27.346 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:27.346 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:27.346 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:27.346 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:27.346 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:27.346 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:27.346 12:39:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:27.346 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:27.346 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:27.346 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:27.346 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:27.346 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:27.346 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:27.346 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:27.346 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:27.346 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:27.347 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:27.347 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:27.347 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:27.347 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:27.347 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:27.347 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:27.347 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:27.347 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:27.347 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:27.347 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:27.347 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:27.347 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:27.347 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:27.347 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:27.347 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:27.347 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:27.347 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:27.347 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:27.347 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:27.347 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 328706 00:18:27.347 12:39:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:27.347 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.347 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:27.615 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.615 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 328706 00:18:27.615 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:27.615 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.878 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:28.136 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.136 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 328706 00:18:28.136 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:28.136 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.136 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:28.396 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.396 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 328706 00:18:28.396 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:28.396 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.396 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:28.656 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.656 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 328706 00:18:28.656 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:28.656 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.656 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:28.915 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.915 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 328706 00:18:28.915 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:28.915 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.915 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:29.535 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.535 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 328706 00:18:29.535 12:39:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:29.535 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.535 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:29.793 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.793 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 328706 00:18:29.793 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:29.793 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.793 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:30.057 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.057 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 328706 00:18:30.057 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:30.057 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.057 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:30.321 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.321 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 328706 00:18:30.321 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:30.321 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.321 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:30.580 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.580 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 328706 00:18:30.580 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:30.580 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.580 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:31.153 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.153 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 328706 00:18:31.153 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:31.153 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.153 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:31.420 12:39:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.420 12:39:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 328706 00:18:31.420 12:39:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:31.420 12:39:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.420 12:39:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:31.681 12:39:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.681 12:39:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 328706 00:18:31.681 12:39:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:31.681 12:39:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.681 12:39:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:31.941 12:39:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.941 12:39:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 328706 00:18:31.941 12:39:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:31.941 12:39:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.941 12:39:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:32.204 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.204 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 328706 00:18:32.204 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:32.204 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.204 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:32.781 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.781 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 328706 00:18:32.781 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:32.781 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.781 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:33.042 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.042 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 328706 00:18:33.042 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:33.042 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.042 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:33.303 12:39:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.303 12:39:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 328706 00:18:33.303 12:39:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:33.303 12:39:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.303 12:39:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:33.567 12:39:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.567 12:39:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 328706 00:18:33.567 12:39:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:33.567 12:39:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.567 12:39:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:33.830 12:39:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.830 12:39:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 328706 00:18:33.830 12:39:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:33.830 12:39:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.830 12:39:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:34.411 12:40:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.411 12:40:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 328706 00:18:34.411 12:40:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:34.411 12:40:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.411 12:40:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:34.673 12:40:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.673 12:40:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 328706 00:18:34.673 12:40:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:34.673 12:40:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.673 12:40:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:34.935 12:40:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.935 12:40:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 328706 00:18:34.935 12:40:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:34.935 12:40:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.935 12:40:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:35.199 12:40:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.199 12:40:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 328706 00:18:35.199 12:40:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:35.199 12:40:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.199 12:40:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:35.462 12:40:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.462 12:40:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 328706 00:18:35.462 12:40:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:35.462 12:40:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.462 12:40:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:36.037 12:40:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.037 12:40:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 328706 00:18:36.037 12:40:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:36.037 12:40:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.037 12:40:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:36.304 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.304 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 328706 00:18:36.304 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:36.304 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.304 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:36.567 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.567 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 328706 00:18:36.567 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:36.567 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.567 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:36.829 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.829 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 328706 00:18:36.829 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:36.829 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.829 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:37.100 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.101 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 328706 00:18:37.101 12:40:03 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:37.101 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.101 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:37.380 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:37.380 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.380 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 328706 00:18:37.380 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (328706) - No such process 00:18:37.380 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 328706 00:18:37.380 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:37.652 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:37.652 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:18:37.652 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:37.652 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:18:37.652 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:37.652 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:18:37.652 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:37.653 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:37.653 rmmod nvme_tcp 00:18:37.653 rmmod nvme_fabrics 00:18:37.653 rmmod nvme_keyring 00:18:37.653 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:37.653 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:18:37.653 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:18:37.653 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@513 -- # '[' -n 328667 ']' 00:18:37.653 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # killprocess 328667 00:18:37.653 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 328667 ']' 00:18:37.653 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 328667 00:18:37.653 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:18:37.653 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:37.653 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 328667 00:18:37.653 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:37.653 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo 
']' 00:18:37.653 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 328667' 00:18:37.653 killing process with pid 328667 00:18:37.653 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 328667 00:18:37.653 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 328667 00:18:37.930 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:37.930 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:37.930 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:37.930 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:18:37.930 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@787 -- # iptables-save 00:18:37.930 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:37.930 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@787 -- # iptables-restore 00:18:37.930 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:37.930 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:37.930 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:37.930 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:37.930 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:39.932 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:39.932 00:18:39.932 real 0m19.002s 00:18:39.932 user 0m41.489s 00:18:39.932 sys 0m6.700s 00:18:39.932 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:39.932 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:39.932 ************************************ 00:18:39.932 END TEST nvmf_connect_stress 00:18:39.932 ************************************ 00:18:39.932 12:40:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:39.932 12:40:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:39.932 12:40:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:39.932 12:40:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:39.932 ************************************ 00:18:39.932 START TEST nvmf_fused_ordering 00:18:39.932 ************************************ 00:18:39.932 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:39.932 * Looking for test storage... 
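The long run of `kill -0 328706` / `rpc_cmd` pairs above is the core of the stress test: rpc.txt is first filled with a batch of commands (the `seq 1 20` / `cat` loop), and while the connect_stress perf process stays alive the script keeps replaying them so admin RPCs race against connect/disconnect load. A paraphrase reconstructed from the connect_stress.sh line numbers in the trace (lines 34-39); the exact redirection used by the script may differ:

    while kill -0 "$PERF_PID"; do       # line 34: perf process still running?
        rpc_cmd < "$rpcs"               # line 35: replay the batched RPCs
    done
    wait "$PERF_PID"                    # line 38: reap perf once it exits
    rm -f "$rpcs"                       # line 39: drop the RPC batch file

Once perf exits, `kill -0` prints "No such process" and the loop ends; teardown then proceeds in the order shown above: kill the nvmf_tgt reactor (pid 328667), `modprobe -v -r nvme-tcp` (taking nvme_fabrics and nvme_keyring with it), restore iptables minus the SPDK_NVMF-tagged rule, drop the namespace via remove_spdk_ns, and flush the initiator address.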
00:18:39.932 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:39.932 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:39.932 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lcov --version 00:18:39.932 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:40.205 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:40.205 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:40.205 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:40.205 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:40.205 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:18:40.205 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:18:40.205 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:18:40.205 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:18:40.205 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:18:40.205 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:18:40.205 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:18:40.205 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:40.205 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:18:40.205 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:18:40.205 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:40.205 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:40.205 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:18:40.205 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:18:40.205 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:40.205 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:18:40.205 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:18:40.205 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:18:40.205 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:18:40.205 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:40.205 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:18:40.205 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:18:40.205 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:40.205 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:40.205 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:18:40.206 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:40.206 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:40.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.206 --rc genhtml_branch_coverage=1 00:18:40.206 --rc genhtml_function_coverage=1 00:18:40.206 --rc genhtml_legend=1 00:18:40.206 --rc geninfo_all_blocks=1 00:18:40.206 --rc geninfo_unexecuted_blocks=1 00:18:40.206 00:18:40.206 ' 00:18:40.206 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:40.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.206 --rc genhtml_branch_coverage=1 00:18:40.206 --rc genhtml_function_coverage=1 00:18:40.206 --rc genhtml_legend=1 00:18:40.206 --rc geninfo_all_blocks=1 00:18:40.206 --rc geninfo_unexecuted_blocks=1 00:18:40.206 00:18:40.206 ' 00:18:40.206 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:40.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.206 --rc genhtml_branch_coverage=1 00:18:40.206 --rc genhtml_function_coverage=1 00:18:40.206 --rc genhtml_legend=1 00:18:40.206 --rc geninfo_all_blocks=1 00:18:40.206 --rc geninfo_unexecuted_blocks=1 00:18:40.206 00:18:40.206 ' 00:18:40.206 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:40.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.206 --rc genhtml_branch_coverage=1 00:18:40.206 --rc genhtml_function_coverage=1 00:18:40.206 --rc genhtml_legend=1 00:18:40.206 --rc geninfo_all_blocks=1 00:18:40.206 --rc geninfo_unexecuted_blocks=1 00:18:40.206 00:18:40.206 ' 00:18:40.206 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:40.206 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:18:40.206 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:40.206 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:40.206 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:40.206 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:40.206 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:40.206 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:40.206 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:40.206 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:40.206 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:40.206 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:40.206 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:40.206 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:18:40.206 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:40.206 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:40.206 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:40.206 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:40.206 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:40.206 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:18:40.206 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:40.206 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:40.206 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:40.206 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.206 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.206 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.206 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:18:40.206 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.206 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:18:40.206 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:40.206 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:40.206 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:40.206 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:40.206 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:40.206 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:40.206 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:40.206 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:40.206 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:40.206 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:40.206 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:18:40.206 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:18:40.206 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:40.206 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:40.206 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:40.206 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:40.206 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:40.206 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:40.206 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:40.206 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:18:40.206 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:18:40.206 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:18:40.206 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:18:45.614 12:40:11 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:45.614 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@365 -- # 
echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:45.614 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ up == up ]] 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:45.614 Found net devices under 0000:af:00.0: cvl_0_0 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:45.614 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:45.876 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:18:45.876 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:45.876 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ up == up ]] 00:18:45.876 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:45.876 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:45.876 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:45.876 Found net devices under 0000:af:00.1: cvl_0_1 00:18:45.877 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:45.877 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # (( 2 == 0 )) 
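The device walk above is plain sysfs: each PCI function that matched a supported device ID is probed for a bound kernel net interface, and the interface name is recovered by stripping the sysfs path. A minimal bash sketch of that lookup, using the 0000:af:00.0 address seen in this run (substitute your own):

pci=0000:af:00.0
# Each netdev bound to this PCI function appears as a directory entry here.
for dev in "/sys/bus/pci/devices/$pci/net/"*; do
    [[ -e $dev ]] || continue    # glob stays literal when no netdev is bound
    echo "Found net device under $pci: ${dev##*/}"
done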
00:18:45.877 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # is_hw=yes 00:18:45.877 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:18:45.877 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:18:45.877 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:18:45.877 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:45.877 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:45.877 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:45.877 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:45.877 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:45.877 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:45.877 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:45.877 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:45.877 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:45.877 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:45.877 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:45.877 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:45.877 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:45.877 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:45.877 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:45.877 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:45.877 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:45.877 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:45.877 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:45.877 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:45.877 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:45.877 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:45.877 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping 
-c 1 10.0.0.2 00:18:45.877 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:45.877 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.512 ms 00:18:45.877 00:18:45.877 --- 10.0.0.2 ping statistics --- 00:18:45.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.877 rtt min/avg/max/mdev = 0.512/0.512/0.512/0.000 ms 00:18:45.877 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:45.877 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:45.877 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:18:45.877 00:18:45.877 --- 10.0.0.1 ping statistics --- 00:18:45.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.877 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:18:45.877 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:45.877 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # return 0 00:18:45.877 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:45.877 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:45.877 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:18:45.877 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:18:45.877 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:45.877 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:18:45.877 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:18:46.142 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:18:46.142 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:46.142 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:46.142 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:46.142 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # nvmfpid=333810 00:18:46.142 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # waitforlisten 333810 00:18:46.142 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:46.142 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 333810 ']' 00:18:46.142 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:46.142 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:46.142 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:46.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
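For reference, the namespace plumbing that produced the two successful pings above reduces to the sequence below. Interface names, addresses, and the tagged iptables rule are taken from this run; the sketch assumes, as on this rig, that the two e810 ports are cabled back-to-back so the root namespace can reach the namespaced target port.

ns=cvl_0_0_ns_spdk
ip netns add $ns
ip link set cvl_0_0 netns $ns                           # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side, root namespace
ip netns exec $ns ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec $ns ip link set cvl_0_0 up
ip netns exec $ns ip link set lo up
# The comment tag lets teardown strip exactly these rules with 'grep -v SPDK_NVMF'.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                      # root namespace -> target
ip netns exec $ns ping -c 1 10.0.0.1                    # target namespace -> initiator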
00:18:46.142 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:46.142 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:46.142 [2024-12-16 12:40:12.010107] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:18:46.142 [2024-12-16 12:40:12.010166] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:46.142 [2024-12-16 12:40:12.083337] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.142 [2024-12-16 12:40:12.122103] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:46.142 [2024-12-16 12:40:12.122145] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:46.142 [2024-12-16 12:40:12.122152] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:46.142 [2024-12-16 12:40:12.122158] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:46.142 [2024-12-16 12:40:12.122163] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:46.142 [2024-12-16 12:40:12.122180] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:46.411 12:40:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:46.411 12:40:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:18:46.411 12:40:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:46.411 12:40:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:46.411 12:40:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:46.411 12:40:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:46.411 12:40:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:46.411 12:40:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.411 12:40:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:46.411 [2024-12-16 12:40:12.247275] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:46.411 12:40:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.411 12:40:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:46.411 12:40:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.411 12:40:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:46.411 12:40:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.411 12:40:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:46.411 12:40:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.411 12:40:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:46.411 [2024-12-16 12:40:12.263446] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:46.411 12:40:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.411 12:40:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:46.411 12:40:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.411 12:40:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:46.411 NULL1 00:18:46.411 12:40:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.411 12:40:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:18:46.411 12:40:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.411 12:40:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:46.411 12:40:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.411 12:40:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:18:46.411 12:40:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.411 12:40:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:46.411 12:40:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.411 12:40:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:46.411 [2024-12-16 12:40:12.318703] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
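The rpc_cmd calls above are the entire target-side provisioning for this test. Issued by hand they would look roughly as follows; rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py, and every value shown is the one this run used (the null bdev is 1000 MiB with 512-byte blocks, reported below as "Namespace ID: 1 size: 1GB"):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                  # TCP transport, 8192-byte in-capsule data
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_null_create NULL1 1000 512                          # backing bdev: 1000 MiB, 512-byte blocks
$rpc bdev_wait_for_examine
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1   # expose the bdev as namespace 1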
00:18:46.411 [2024-12-16 12:40:12.318748] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid333884 ] 00:18:47.001 Attached to nqn.2016-06.io.spdk:cnode1 00:18:47.001 Namespace ID: 1 size: 1GB
00:18:47.001 fused_ordering(0) ... 00:18:48.138 fused_ordering(1023): all 1024 entries, 0 through 1023, reported in order with no gaps
00:18:48.138 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:18:48.138 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:18:48.138 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:48.138 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:18:48.138 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:48.138 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:18:48.138 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:48.139 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:48.139 rmmod nvme_tcp 00:18:48.139 rmmod nvme_fabrics 00:18:48.139 rmmod nvme_keyring 00:18:48.139 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:48.139 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:18:48.139 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:18:48.139 12:40:14
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@513 -- # '[' -n 333810 ']' 00:18:48.139 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # killprocess 333810 00:18:48.139 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 333810 ']' 00:18:48.139 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 333810 00:18:48.139 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:18:48.139 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:48.139 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 333810 00:18:48.139 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:48.139 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:48.139 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 333810' 00:18:48.139 killing process with pid 333810 00:18:48.139 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 333810 00:18:48.139 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 333810 00:18:48.408 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:48.408 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:48.408 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:48.408 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:18:48.408 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@787 -- # iptables-save 00:18:48.408 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:48.408 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@787 -- # iptables-restore 00:18:48.408 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:48.408 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:48.408 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:48.408 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:48.408 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:50.427 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:50.427 00:18:50.427 real 0m10.520s 00:18:50.427 user 0m5.117s 00:18:50.427 sys 0m5.457s 00:18:50.427 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:50.427 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:50.427 ************************************ 00:18:50.427 END TEST nvmf_fused_ordering 00:18:50.427 
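Teardown is the mirror image of the setup, condensed here for reference. In the log the kill is guarded by kill -0 and a ps comm check before firing; _remove_spdk_ns is assumed to delete the namespace, which also hands the physical port back to the root namespace.

kill "$nvmfpid" && wait "$nvmfpid"                     # stop the nvmf_tgt started by nvmfappstart
modprobe -v -r nvme-tcp                                # drags out nvme_fabrics and nvme_keyring too, per the rmmod lines above
modprobe -v -r nvme-fabrics
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the rules tagged SPDK_NVMF
ip netns delete cvl_0_0_ns_spdk                        # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1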
************************************ 00:18:50.427 12:40:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:18:50.427 12:40:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:50.427 12:40:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:50.427 12:40:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:50.427 ************************************ 00:18:50.427 START TEST nvmf_ns_masking 00:18:50.427 ************************************ 00:18:50.427 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:18:50.707 * Looking for test storage... 00:18:50.707 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:50.707 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:50.707 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lcov --version 00:18:50.707 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:50.707 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:50.707 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:50.707 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:50.707 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:50.707 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:18:50.707 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:18:50.707 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:18:50.707 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:18:50.707 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:18:50.707 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:18:50.707 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:18:50.707 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:50.707 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:18:50.707 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:18:50.707 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:50.707 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:50.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.708 --rc genhtml_branch_coverage=1 00:18:50.708 --rc genhtml_function_coverage=1 00:18:50.708 --rc genhtml_legend=1 00:18:50.708 --rc geninfo_all_blocks=1 00:18:50.708 --rc geninfo_unexecuted_blocks=1 00:18:50.708 00:18:50.708 ' 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:50.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.708 --rc genhtml_branch_coverage=1 00:18:50.708 --rc genhtml_function_coverage=1 00:18:50.708 --rc genhtml_legend=1 00:18:50.708 --rc geninfo_all_blocks=1 00:18:50.708 --rc geninfo_unexecuted_blocks=1 00:18:50.708 00:18:50.708 ' 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:50.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.708 --rc genhtml_branch_coverage=1 00:18:50.708 --rc genhtml_function_coverage=1 00:18:50.708 --rc genhtml_legend=1 00:18:50.708 --rc geninfo_all_blocks=1 00:18:50.708 --rc geninfo_unexecuted_blocks=1 00:18:50.708 00:18:50.708 ' 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:50.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.708 --rc genhtml_branch_coverage=1 00:18:50.708 --rc genhtml_function_coverage=1 00:18:50.708 --rc genhtml_legend=1 00:18:50.708 --rc geninfo_all_blocks=1 00:18:50.708 --rc geninfo_unexecuted_blocks=1 00:18:50.708 00:18:50.708 ' 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:50.708 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=fb69c094-4317-4bc0-bc08-59bcd593113c 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=2e8b83d6-7879-4bbc-a6e0-df778cb26aba 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=5d61c4ae-c81d-4dd8-9a36-1ca1787bd211 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:18:50.708 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:18:50.709 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:18:50.709 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:57.370 12:40:22 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:57.370 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:18:57.370 12:40:22 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:57.370 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ up == up ]] 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:57.370 Found net devices under 0000:af:00.0: cvl_0_0 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ up == up ]] 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:57.370 
12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:57.370 Found net devices under 0000:af:00.1: cvl_0_1 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # is_hw=yes 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:57.370 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:57.371 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:57.371 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:57.371 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:57.371 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:57.371 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:57.371 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:57.371 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:57.371 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:57.371 12:40:22 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:57.371 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:57.371 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:57.371 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:57.371 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.486 ms 00:18:57.371 00:18:57.371 --- 10.0.0.2 ping statistics --- 00:18:57.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:57.371 rtt min/avg/max/mdev = 0.486/0.486/0.486/0.000 ms 00:18:57.371 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:57.371 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:57.371 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:18:57.371 00:18:57.371 --- 10.0.0.1 ping statistics --- 00:18:57.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:57.371 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:18:57.371 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:57.371 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # return 0 00:18:57.371 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:57.371 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:57.371 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:18:57.371 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:18:57.371 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:57.371 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:18:57.371 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:18:57.371 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:18:57.371 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:57.371 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:57.371 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:57.371 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # nvmfpid=337756 00:18:57.371 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # waitforlisten 337756 00:18:57.371 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:57.371 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 337756 ']' 00:18:57.371 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:57.371 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 
-- # local max_retries=100 00:18:57.371 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:57.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:57.371 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:57.371 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:57.371 [2024-12-16 12:40:22.765871] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:18:57.371 [2024-12-16 12:40:22.765920] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:57.371 [2024-12-16 12:40:22.837281] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.371 [2024-12-16 12:40:22.877105] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:57.371 [2024-12-16 12:40:22.877156] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:57.371 [2024-12-16 12:40:22.877163] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:57.371 [2024-12-16 12:40:22.877170] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:57.371 [2024-12-16 12:40:22.877175] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:57.371 [2024-12-16 12:40:22.877195] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:57.371 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:57.371 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:18:57.371 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:57.371 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:57.371 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:57.371 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:57.371 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:57.371 [2024-12-16 12:40:23.166778] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:57.371 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:18:57.371 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:18:57.371 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:57.371 Malloc1 00:18:57.371 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 
00:18:57.631 Malloc2 00:18:57.631 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:57.891 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:18:58.151 12:40:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:58.151 [2024-12-16 12:40:24.177388] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:58.151 12:40:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:18:58.151 12:40:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5d61c4ae-c81d-4dd8-9a36-1ca1787bd211 -a 10.0.0.2 -s 4420 -i 4 00:18:58.410 12:40:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:18:58.410 12:40:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:18:58.410 12:40:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:58.410 12:40:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:58.410 12:40:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:19:00.316 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:00.316 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:00.316 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:00.316 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:00.316 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:00.316 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:19:00.316 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:00.316 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:00.576 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:00.576 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:00.576 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:19:00.576 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:00.576 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:00.576 [ 0]:0x1 00:19:00.576 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:00.576 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:00.576 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1f705765a6054ed9bd2d02e701c1823b 00:19:00.576 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1f705765a6054ed9bd2d02e701c1823b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:00.576 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:19:00.835 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:19:00.835 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:00.835 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:00.835 [ 0]:0x1 00:19:00.835 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:00.835 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:00.835 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1f705765a6054ed9bd2d02e701c1823b 00:19:00.835 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1f705765a6054ed9bd2d02e701c1823b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:00.835 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:19:00.835 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:00.835 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:00.835 [ 1]:0x2 00:19:00.835 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:00.835 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:00.835 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=077506cdb87a4540aa6e79d01e543ee3 00:19:00.835 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 077506cdb87a4540aa6e79d01e543ee3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:00.835 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:19:00.835 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:01.094 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:01.094 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:01.094 12:40:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:19:01.355 12:40:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:19:01.355 12:40:27 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5d61c4ae-c81d-4dd8-9a36-1ca1787bd211 -a 10.0.0.2 -s 4420 -i 4 00:19:01.616 12:40:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:19:01.616 12:40:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:19:01.616 12:40:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:01.616 12:40:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:19:01.616 12:40:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:19:01.616 12:40:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:19:03.523 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:03.523 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:03.523 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:03.523 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:03.523 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:03.523 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:19:03.523 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:03.523 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:03.783 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:03.783 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:03.783 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:19:03.783 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:19:03.783 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:19:03.783 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:19:03.783 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:03.783 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:19:03.783 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:03.783 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:19:03.783 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:03.783 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:03.783 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:03.783 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:03.783 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:03.783 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:03.783 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:19:03.783 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:03.783 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:03.783 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:03.783 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:19:03.783 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:03.783 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:03.783 [ 0]:0x2 00:19:03.783 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:03.783 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:03.783 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=077506cdb87a4540aa6e79d01e543ee3 00:19:03.783 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 077506cdb87a4540aa6e79d01e543ee3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:03.783 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:04.043 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:19:04.043 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:04.043 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:04.043 [ 0]:0x1 00:19:04.043 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:04.043 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:04.043 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1f705765a6054ed9bd2d02e701c1823b 00:19:04.043 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1f705765a6054ed9bd2d02e701c1823b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:04.043 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:19:04.043 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:04.043 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:04.043 [ 1]:0x2 00:19:04.043 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:04.043 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:04.302 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=077506cdb87a4540aa6e79d01e543ee3 00:19:04.302 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 077506cdb87a4540aa6e79d01e543ee3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:04.302 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:04.302 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:19:04.302 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:19:04.302 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:19:04.302 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:19:04.302 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:04.302 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:19:04.302 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:04.302 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:19:04.302 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:04.302 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:04.302 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:04.302 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:04.562 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:04.562 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:04.562 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:19:04.562 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:04.562 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:04.562 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:04.562 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:19:04.562 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:04.562 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:04.562 [ 0]:0x2 00:19:04.562 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:04.562 12:40:30 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:04.562 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=077506cdb87a4540aa6e79d01e543ee3 00:19:04.562 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 077506cdb87a4540aa6e79d01e543ee3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:04.562 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:19:04.562 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:04.562 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:04.562 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:04.821 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:19:04.821 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5d61c4ae-c81d-4dd8-9a36-1ca1787bd211 -a 10.0.0.2 -s 4420 -i 4 00:19:05.080 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:19:05.080 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:19:05.080 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:05.080 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:19:05.080 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:19:05.080 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:19:07.010 12:40:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:07.010 12:40:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:07.010 12:40:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:07.011 12:40:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:19:07.011 12:40:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:07.011 12:40:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:19:07.011 12:40:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:07.011 12:40:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:07.270 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:07.270 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:07.270 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:19:07.270 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:07.270 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:07.270 [ 0]:0x1 00:19:07.270 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:07.270 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:07.270 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1f705765a6054ed9bd2d02e701c1823b 00:19:07.270 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1f705765a6054ed9bd2d02e701c1823b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:07.270 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:19:07.270 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:07.270 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:07.270 [ 1]:0x2 00:19:07.270 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:07.270 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:07.270 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=077506cdb87a4540aa6e79d01e543ee3 00:19:07.270 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 077506cdb87a4540aa6e79d01e543ee3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:07.270 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:07.530 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:19:07.530 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:19:07.530 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:19:07.530 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:19:07.530 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:07.530 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:19:07.530 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:07.530 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:19:07.530 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:07.530 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:07.530 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:07.530 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:07.530 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:19:07.530 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:07.530 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:19:07.530 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:07.530 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:07.530 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:07.530 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:19:07.530 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:07.530 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:07.530 [ 0]:0x2 00:19:07.530 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:07.530 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:07.790 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=077506cdb87a4540aa6e79d01e543ee3 00:19:07.790 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 077506cdb87a4540aa6e79d01e543ee3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:07.790 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:07.790 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:19:07.790 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:07.790 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:07.790 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:07.790 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:07.790 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:07.790 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:07.790 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:07.790 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:07.790 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:19:07.790 
12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:07.790 [2024-12-16 12:40:33.780722] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:19:07.790 request: 00:19:07.790 { 00:19:07.790 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:07.790 "nsid": 2, 00:19:07.790 "host": "nqn.2016-06.io.spdk:host1", 00:19:07.790 "method": "nvmf_ns_remove_host", 00:19:07.790 "req_id": 1 00:19:07.790 } 00:19:07.790 Got JSON-RPC error response 00:19:07.790 response: 00:19:07.790 { 00:19:07.790 "code": -32602, 00:19:07.790 "message": "Invalid parameters" 00:19:07.790 } 00:19:07.790 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:19:07.790 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:07.790 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:07.790 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:07.790 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:19:07.790 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:19:07.790 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:19:07.790 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:19:07.790 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:07.790 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:19:07.790 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:07.790 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:19:07.790 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:07.790 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:07.790 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:07.790 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:08.050 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:08.050 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:08.050 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:19:08.050 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:08.050 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:08.050 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:08.050 
12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:19:08.050 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:08.050 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:08.050 [ 0]:0x2 00:19:08.050 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:08.050 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:08.050 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=077506cdb87a4540aa6e79d01e543ee3 00:19:08.050 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 077506cdb87a4540aa6e79d01e543ee3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:08.050 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:19:08.050 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:08.050 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:08.050 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=339701 00:19:08.050 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:19:08.050 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:19:08.050 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 339701 /var/tmp/host.sock 00:19:08.050 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 339701 ']' 00:19:08.050 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:19:08.050 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:08.050 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:08.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:19:08.050 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:08.050 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:08.050 [2024-12-16 12:40:34.008756] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
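
The ns_is_visible checks traced above (target/ns_masking.sh@43-45) reduce to two nvme-cli calls: list-ns to see whether the NSID is enumerated at all, and id-ns to confirm the NGUID is non-zero. A minimal sketch of that helper, reconstructed from the trace rather than copied from the repository:

# sketch of the visibility check seen at target/ns_masking.sh@43-45 (reconstructed, not verbatim)
ns_is_visible() {
    # a visible namespace is enumerated by the controller ...
    nvme list-ns /dev/nvme0 | grep "$1" || return 1
    # ... and reports a real NGUID; a masked namespace reads back as all zeroes
    local nguid
    nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]
}

The NOT wrapper around it (autotest_common.sh@650-677) simply inverts the exit status, which is why the all-zero NGUID observed after nvmf_ns_remove_host counts as a pass here (es=1, then (( !es == 0 ))).
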
00:19:08.050 [2024-12-16 12:40:34.008802] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid339701 ] 00:19:08.050 [2024-12-16 12:40:34.074806] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.050 [2024-12-16 12:40:34.113554] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:08.310 12:40:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:08.310 12:40:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:19:08.310 12:40:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:08.569 12:40:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:08.828 12:40:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid fb69c094-4317-4bc0-bc08-59bcd593113c 00:19:08.828 12:40:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@783 -- # tr -d - 00:19:08.828 12:40:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g FB69C09443174BC0BC0859BCD593113C -i 00:19:09.087 12:40:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 2e8b83d6-7879-4bbc-a6e0-df778cb26aba 00:19:09.087 12:40:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@783 -- # tr -d - 00:19:09.087 12:40:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 2E8B83D678794BBCA6E0DF778CB26ABA -i 00:19:09.087 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:09.346 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:19:09.604 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:19:09.604 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:19:09.864 nvme0n1 00:19:09.864 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:19:09.864 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:19:10.124 nvme1n2 00:19:10.124 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:19:10.124 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:19:10.124 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:19:10.124 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:19:10.124 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:19:10.384 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:19:10.384 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:19:10.384 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:19:10.384 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:19:10.643 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ fb69c094-4317-4bc0-bc08-59bcd593113c == \f\b\6\9\c\0\9\4\-\4\3\1\7\-\4\b\c\0\-\b\c\0\8\-\5\9\b\c\d\5\9\3\1\1\3\c ]] 00:19:10.643 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:19:10.643 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:19:10.643 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:19:10.902 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 2e8b83d6-7879-4bbc-a6e0-df778cb26aba == \2\e\8\b\8\3\d\6\-\7\8\7\9\-\4\b\b\c\-\a\6\e\0\-\d\f\7\7\8\c\b\2\6\a\b\a ]] 00:19:10.902 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 339701 00:19:10.902 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 339701 ']' 00:19:10.902 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 339701 00:19:10.902 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:19:10.902 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:10.902 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 339701 00:19:10.902 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:10.902 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:10.902 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 339701' 00:19:10.902 killing 
process with pid 339701 00:19:10.902 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 339701 00:19:10.902 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 339701 00:19:11.162 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:11.421 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:19:11.421 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:19:11.421 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # nvmfcleanup 00:19:11.421 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:19:11.421 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:11.421 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:19:11.421 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:11.421 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:11.421 rmmod nvme_tcp 00:19:11.421 rmmod nvme_fabrics 00:19:11.421 rmmod nvme_keyring 00:19:11.421 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:11.421 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:19:11.421 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:19:11.421 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@513 -- # '[' -n 337756 ']' 00:19:11.421 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # killprocess 337756 00:19:11.421 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 337756 ']' 00:19:11.421 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 337756 00:19:11.421 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:19:11.421 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:11.421 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 337756 00:19:11.681 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:11.681 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:11.681 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 337756' 00:19:11.681 killing process with pid 337756 00:19:11.681 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 337756 00:19:11.681 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 337756 00:19:11.681 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:19:11.681 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:19:11.681 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@520 -- # nvmf_tcp_fini 00:19:11.681 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:19:11.681 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # iptables-save 00:19:11.681 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:19:11.681 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # iptables-restore 00:19:11.681 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:11.681 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:11.681 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:11.681 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:11.681 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:14.222 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:14.222 00:19:14.222 real 0m23.366s 00:19:14.222 user 0m24.690s 00:19:14.222 sys 0m6.772s 00:19:14.222 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:14.222 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:14.222 ************************************ 00:19:14.222 END TEST nvmf_ns_masking 00:19:14.222 ************************************ 00:19:14.222 12:40:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:19:14.222 12:40:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:19:14.222 12:40:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:14.222 12:40:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:14.222 12:40:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:14.222 ************************************ 00:19:14.222 START TEST nvmf_nvme_cli 00:19:14.222 ************************************ 00:19:14.222 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:19:14.222 * Looking for test storage... 
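
The nvmftestfini/nvmf_tcp_fini teardown traced above unloads the kernel initiator modules, restores iptables minus the rules SPDK tagged, and dismantles the test network namespace. A condensed sketch, assuming the cvl_0_* interface and namespace names from this particular run (and assuming _remove_spdk_ns amounts to deleting the namespace):

# condensed teardown, per the nvmf_tcp_fini trace above; names are the ones from this run
modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics   # nvmfcleanup: unload initiator modules
iptables-save | grep -v SPDK_NVMF | iptables-restore     # iptr: strip only the SPDK_NVMF-tagged rules
ip netns delete cvl_0_0_ns_spdk                          # _remove_spdk_ns (assumed): drop the target netns
ip -4 addr flush cvl_0_1                                 # release the initiator-side address
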
00:19:14.222 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:14.222 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:14.222 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lcov --version 00:19:14.222 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:14.222 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:14.222 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:14.222 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:14.222 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:14.222 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:19:14.222 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:19:14.222 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:19:14.222 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:19:14.222 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:19:14.222 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:19:14.222 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:19:14.222 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:14.222 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:19:14.222 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:19:14.222 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:14.222 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:14.222 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:19:14.222 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:19:14.222 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:14.222 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:19:14.222 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:19:14.222 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:19:14.222 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:19:14.222 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:14.222 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:19:14.222 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:19:14.222 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:14.222 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:14.222 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:19:14.222 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:14.222 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:14.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:14.222 --rc genhtml_branch_coverage=1 00:19:14.222 --rc genhtml_function_coverage=1 00:19:14.222 --rc genhtml_legend=1 00:19:14.222 --rc geninfo_all_blocks=1 00:19:14.222 --rc geninfo_unexecuted_blocks=1 00:19:14.222 00:19:14.222 ' 00:19:14.222 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:14.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:14.222 --rc genhtml_branch_coverage=1 00:19:14.222 --rc genhtml_function_coverage=1 00:19:14.222 --rc genhtml_legend=1 00:19:14.222 --rc geninfo_all_blocks=1 00:19:14.222 --rc geninfo_unexecuted_blocks=1 00:19:14.222 00:19:14.222 ' 00:19:14.222 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:14.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:14.222 --rc genhtml_branch_coverage=1 00:19:14.222 --rc genhtml_function_coverage=1 00:19:14.222 --rc genhtml_legend=1 00:19:14.222 --rc geninfo_all_blocks=1 00:19:14.222 --rc geninfo_unexecuted_blocks=1 00:19:14.222 00:19:14.222 ' 00:19:14.222 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:14.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:14.222 --rc genhtml_branch_coverage=1 00:19:14.222 --rc genhtml_function_coverage=1 00:19:14.222 --rc genhtml_legend=1 00:19:14.222 --rc geninfo_all_blocks=1 00:19:14.222 --rc geninfo_unexecuted_blocks=1 00:19:14.222 00:19:14.222 ' 00:19:14.222 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:14.222 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
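
The long cmp_versions trace above is a component-wise comparison of dotted version strings, invoked here as `lt 1.15 2` to choose lcov options. A simplified sketch of the logic visible in the trace (the in-tree scripts/common.sh additionally validates that each component is a decimal number):

# simplified version comparison, following the scripts/common.sh trace above
lt() { cmp_versions "$1" '<' "$2"; }        # e.g. lt 1.15 2 -> true

cmp_versions() {
    local -a ver1 ver2
    local v d1 d2
    IFS=.-: read -ra ver1 <<< "$1"          # "1.15" -> (1 15)
    IFS=.-: read -ra ver2 <<< "$3"          # "2"    -> (2)
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        d1=${ver1[v]:-0} d2=${ver2[v]:-0}   # missing components compare as 0
        if ((d1 > d2)); then [[ $2 == *'>'* ]]; return; fi
        if ((d1 < d2)); then [[ $2 == *'<'* ]]; return; fi
    done
    [[ $2 == *'='* ]]                       # all components equal: true only for <=, >=, ==
}
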
00:19:14.222 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:14.223 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:14.223 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:14.223 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:14.223 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:14.223 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:14.223 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:14.223 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:14.223 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:14.223 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:14.223 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:14.223 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:19:14.223 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:14.223 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:14.223 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:14.223 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:14.223 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:14.223 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:19:14.223 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:14.223 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:14.223 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:14.223 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.223 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.223 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.223 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:19:14.223 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.223 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:19:14.223 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:14.223 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:14.223 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:14.223 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:14.223 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:14.223 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:14.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:14.223 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:14.223 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:14.223 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:14.223 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:14.223 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:14.223 12:40:40 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:19:14.223 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:19:14.223 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:19:14.223 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:14.223 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # prepare_net_devs 00:19:14.223 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@434 -- # local -g is_hw=no 00:19:14.223 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@436 -- # remove_spdk_ns 00:19:14.223 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:14.223 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:14.223 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:14.223 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:19:14.223 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:19:14.223 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:19:14.223 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:20.799 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:20.799 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:19:20.799 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:20.799 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:20.799 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:20.799 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:20.800 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:20.800 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:19:20.800 12:40:45 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ up == up ]] 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:20.800 Found net devices under 0000:af:00.0: cvl_0_0 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ up == up ]] 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:20.800 Found net devices under 0000:af:00.1: cvl_0_1 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # is_hw=yes 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:20.800 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:20.800 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.433 ms 00:19:20.800 00:19:20.800 --- 10.0.0.2 ping statistics --- 00:19:20.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.800 rtt min/avg/max/mdev = 0.433/0.433/0.433/0.000 ms 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:20.800 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:20.800 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:19:20.800 00:19:20.800 --- 10.0.0.1 ping statistics --- 00:19:20.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.800 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # return 0 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:19:20.800 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:19:20.801 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:20.801 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:20.801 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@505 -- # nvmfpid=343650 00:19:20.801 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:20.801 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@506 -- # waitforlisten 343650 00:19:20.801 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 343650 ']' 00:19:20.801 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:20.801 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:20.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:20.801 [2024-12-16 12:40:46.047793] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:19:20.801 [2024-12-16 12:40:46.047837] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:20.801 [2024-12-16 12:40:46.121414] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:20.801 [2024-12-16 12:40:46.163152] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:20.801 [2024-12-16 12:40:46.163191] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:20.801 [2024-12-16 12:40:46.163199] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:20.801 [2024-12-16 12:40:46.163204] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:20.801 [2024-12-16 12:40:46.163210] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:20.801 [2024-12-16 12:40:46.163256] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:20.801 [2024-12-16 12:40:46.163367] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:19:20.801 [2024-12-16 12:40:46.163474] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.801 [2024-12-16 12:40:46.163475] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:20.801 [2024-12-16 12:40:46.307435] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:20.801 Malloc0 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
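
The nvmfappstart sequence a few entries above launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace (pid 343650) and then blocks in waitforlisten until the RPC socket answers. The retry loop is roughly this shape; the function name below is hypothetical, with the retry budget (max_retries=100) and default socket path taken from the trace:

# rough shape of the waitforlisten polling seen above (hypothetical name, not the verbatim helper)
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" || return 1          # the app died while we were waiting
        # ready once the RPC socket accepts a request
        scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
        sleep 0.5
    done
    return 1
}

The same pattern appears twice in this section: once for the secondary host app on /var/tmp/host.sock during the ns_masking run, and once here for the nvmf target on the default socket.
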
00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:20.801 Malloc1 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:20.801 [2024-12-16 12:40:46.384429] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:19:20.801 00:19:20.801 Discovery Log Number of Records 2, Generation counter 2 00:19:20.801 =====Discovery Log Entry 0====== 00:19:20.801 trtype: tcp 00:19:20.801 adrfam: ipv4 00:19:20.801 subtype: current discovery subsystem 00:19:20.801 treq: not required 00:19:20.801 portid: 0 00:19:20.801 trsvcid: 4420 00:19:20.801 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:19:20.801 traddr: 10.0.0.2 00:19:20.801 eflags: explicit discovery connections, duplicate discovery information 00:19:20.801 sectype: none 00:19:20.801 =====Discovery Log Entry 1====== 00:19:20.801 trtype: tcp 00:19:20.801 adrfam: ipv4 00:19:20.801 subtype: nvme subsystem 00:19:20.801 treq: not required 00:19:20.801 portid: 0 00:19:20.801 trsvcid: 4420 00:19:20.801 subnqn: nqn.2016-06.io.spdk:cnode1 00:19:20.801 traddr: 10.0.0.2 00:19:20.801 eflags: none 00:19:20.801 sectype: none 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@546 -- # local dev _ 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@545 -- # nvme list 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ Node == /dev/nvme* ]] 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ --------------------- == /dev/nvme* ]] 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme1n1 == /dev/nvme* ]] 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme1n1 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme1n2 == /dev/nvme* ]] 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme1n2 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=2 00:19:20.801 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:21.739 12:40:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:19:21.739 12:40:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:19:21.739 12:40:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:21.739 12:40:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:19:21.739 12:40:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:19:21.739 12:40:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:19:24.274 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:24.274 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l 
-o NAME,SERIAL 00:19:24.274 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:24.274 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:19:24.274 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:24.274 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:19:24.274 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:19:24.274 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@546 -- # local dev _ 00:19:24.274 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:19:24.274 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@545 -- # nvme list 00:19:24.274 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ Node == /dev/nvme* ]] 00:19:24.274 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:19:24.274 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ --------------------- == /dev/nvme* ]] 00:19:24.274 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:19:24.274 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:19:24.274 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n1 00:19:24.274 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:19:24.274 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:19:24.275 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n2 00:19:24.275 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:19:24.275 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme1n1 == /dev/nvme* ]] 00:19:24.275 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme1n1 00:19:24.275 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:19:24.275 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme1n2 == /dev/nvme* ]] 00:19:24.275 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme1n2 00:19:24.275 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:19:24.275 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:19:24.275 /dev/nvme0n2 00:19:24.275 /dev/nvme1n1 00:19:24.275 /dev/nvme1n2 ]] 00:19:24.275 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:19:24.275 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:19:24.275 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@546 -- # local dev _ 00:19:24.275 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:19:24.275 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@545 -- # nvme list 00:19:24.275 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ Node == 
/dev/nvme* ]] 00:19:24.275 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:19:24.275 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ --------------------- == /dev/nvme* ]] 00:19:24.275 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:19:24.275 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:19:24.275 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n1 00:19:24.275 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:19:24.275 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:19:24.275 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n2 00:19:24.275 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:19:24.275 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme1n1 == /dev/nvme* ]] 00:19:24.275 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme1n1 00:19:24.275 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:19:24.275 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme1n2 == /dev/nvme* ]] 00:19:24.275 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme1n2 00:19:24.275 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:19:24.275 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=4 00:19:24.275 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:24.275 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:24.275 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:24.275 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:19:24.275 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:24.275 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:24.275 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:24.275 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:24.275 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:19:24.275 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:19:24.275 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:24.275 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.275 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:24.275 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.275 12:40:49 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:19:24.275 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:19:24.275 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # nvmfcleanup 00:19:24.275 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:19:24.275 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:24.275 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:19:24.275 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:24.275 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:24.275 rmmod nvme_tcp 00:19:24.275 rmmod nvme_fabrics 00:19:24.275 rmmod nvme_keyring 00:19:24.275 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:24.275 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:19:24.275 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:19:24.275 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@513 -- # '[' -n 343650 ']' 00:19:24.275 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@514 -- # killprocess 343650 00:19:24.275 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 343650 ']' 00:19:24.275 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 343650 00:19:24.275 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:19:24.275 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:24.275 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 343650 00:19:24.275 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:24.275 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:24.275 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 343650' 00:19:24.275 killing process with pid 343650 00:19:24.275 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 343650 00:19:24.275 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 343650 00:19:24.535 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:19:24.535 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:19:24.535 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:19:24.535 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:19:24.535 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@787 -- # iptables-save 00:19:24.535 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:19:24.535 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@787 -- # iptables-restore 00:19:24.535 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:24.535 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:24.535 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:24.535 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:24.535 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:26.441 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:26.441 00:19:26.441 real 0m12.582s 00:19:26.441 user 0m18.238s 00:19:26.441 sys 0m5.116s 00:19:26.441 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:26.441 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:26.441 ************************************ 00:19:26.441 END TEST nvmf_nvme_cli 00:19:26.441 ************************************ 00:19:26.441 12:40:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:19:26.441 12:40:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:19:26.441 12:40:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:26.441 12:40:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:26.441 12:40:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:26.441 ************************************ 00:19:26.441 START TEST nvmf_vfio_user 00:19:26.441 ************************************ 00:19:26.441 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:19:26.701 * Looking for test storage... 
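Steps 30 through 61 of nvme_cli.sh, logged above, are the complete initiator-side round trip: discover the target, connect to cnode1, poll lsblk until both namespaces carrying the SPDK serial surface, then disconnect. A condensed re-sketch using this run's host NQN/ID (the harness caps the poll at 15 tries in waitforserial):

    nvme discover -t tcp -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 \
        --hostid=801347e8-3fd0-e911-906e-0017a4403562
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 \
        --hostid=801347e8-3fd0-e911-906e-0017a4403562
    while (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) < 2 )); do
        sleep 2    # waitforserial: both Malloc-backed namespaces must appear
    done
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1

The teardown logged above then unloads nvme-tcp (taking nvme_fabrics and nvme_keyring with it, per the rmmod lines) and kills target pid 343650 before the nvmf_vfio_user test starts.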
00:19:26.701 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:26.701 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:26.701 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lcov --version 00:19:26.701 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:26.701 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:26.701 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:26.701 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:26.701 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:26.701 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:19:26.701 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:19:26.701 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:19:26.701 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:19:26.701 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:19:26.701 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:19:26.701 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:19:26.701 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:26.701 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:19:26.701 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:19:26.701 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:26.701 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:26.701 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:19:26.701 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:19:26.701 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:26.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.702 --rc genhtml_branch_coverage=1 00:19:26.702 --rc genhtml_function_coverage=1 00:19:26.702 --rc genhtml_legend=1 00:19:26.702 --rc geninfo_all_blocks=1 00:19:26.702 --rc geninfo_unexecuted_blocks=1 00:19:26.702 00:19:26.702 ' 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:26.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.702 --rc genhtml_branch_coverage=1 00:19:26.702 --rc genhtml_function_coverage=1 00:19:26.702 --rc genhtml_legend=1 00:19:26.702 --rc geninfo_all_blocks=1 00:19:26.702 --rc geninfo_unexecuted_blocks=1 00:19:26.702 00:19:26.702 ' 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:26.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.702 --rc genhtml_branch_coverage=1 00:19:26.702 --rc genhtml_function_coverage=1 00:19:26.702 --rc genhtml_legend=1 00:19:26.702 --rc geninfo_all_blocks=1 00:19:26.702 --rc geninfo_unexecuted_blocks=1 00:19:26.702 00:19:26.702 ' 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:26.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.702 --rc genhtml_branch_coverage=1 00:19:26.702 --rc genhtml_function_coverage=1 00:19:26.702 --rc genhtml_legend=1 00:19:26.702 --rc geninfo_all_blocks=1 00:19:26.702 --rc geninfo_unexecuted_blocks=1 00:19:26.702 00:19:26.702 ' 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:26.702 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
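The scripts/common.sh trace above is the lcov version gate: lt 1.15 2 splits both version strings on ".-:" and compares them field by field to decide between the legacy and current --rc option names. A functional re-sketch of that comparison (not the verbatim helper):

    lt() {
        local IFS='.-:'
        local -a v1 v2
        read -ra v1 <<< "$1"    # "1.15" -> (1 15)
        read -ra v2 <<< "$2"    # "2"    -> (2)
        local i
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0    # first lower field decides
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1    # equal is not less-than
    }
    lt 1.15 2 && echo "lcov < 2: use the legacy --rc lcov_* names"    # matches the LCOV_OPTS exported above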
00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=344894 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 344894' 00:19:26.702 Process pid: 344894 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 344894 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 344894 ']' 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:26.702 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:26.703 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:26.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:26.703 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:26.703 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:26.703 [2024-12-16 12:40:52.723306] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:19:26.703 [2024-12-16 12:40:52.723354] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:26.962 [2024-12-16 12:40:52.792565] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:26.962 [2024-12-16 12:40:52.832798] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:26.962 [2024-12-16 12:40:52.832836] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
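The tracepoint notices being printed here apply to this second target instance as well (shm id 0, tracepoint group mask 0xFFFF). A sketch of the two capture paths the notices describe, assuming the standard SPDK build layout:

    build/bin/spdk_trace -s nvmf -i 0    # live snapshot from the running app
    cp /dev/shm/nvmf_trace.0 /tmp/       # or keep the shm file for offline analysis/debug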
00:19:26.962 [2024-12-16 12:40:52.832843] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:26.962 [2024-12-16 12:40:52.832849] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:26.962 [2024-12-16 12:40:52.832855] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:26.962 [2024-12-16 12:40:52.832903] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:26.962 [2024-12-16 12:40:52.833015] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:19:26.962 [2024-12-16 12:40:52.833138] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:19:26.962 [2024-12-16 12:40:52.833152] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.962 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:26.962 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:19:26.962 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:19:27.900 12:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:19:28.159 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:19:28.159 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:19:28.159 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:28.159 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:19:28.159 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:28.418 Malloc1 00:19:28.418 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:19:28.677 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:19:28.937 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:19:28.937 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:28.937 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:19:28.937 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:29.201 Malloc2 00:19:29.201 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
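Steps 64 through 74 of nvmf_vfio_user.sh, running above (the second device's pass finishes just below), provision the vfio-user side. Condensed, with the commands exactly as logged and run from the SPDK tree, the per-device loop is:

    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    for i in 1 2; do
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
        scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
            -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done

Each listener address is a directory: the target creates the vfio-user control socket (cntrl) beneath it, which spdk_nvme_identify opens below via -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'.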
00:19:29.460 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:19:29.719 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:19:29.719 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:19:29.719 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:19:29.719 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:29.719 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:19:29.719 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:19:29.719 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:19:29.980 [2024-12-16 12:40:55.804767] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:19:29.980 [2024-12-16 12:40:55.804814] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid345360 ] 00:19:29.980 [2024-12-16 12:40:55.833413] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:19:29.980 [2024-12-16 12:40:55.837115] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:29.980 [2024-12-16 12:40:55.837134] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fa94d510000 00:19:29.980 [2024-12-16 12:40:55.838117] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:29.980 [2024-12-16 12:40:55.839110] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:29.980 [2024-12-16 12:40:55.840119] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:29.980 [2024-12-16 12:40:55.841125] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:29.980 [2024-12-16 12:40:55.842138] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:29.980 [2024-12-16 12:40:55.843129] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:29.980 [2024-12-16 12:40:55.844132] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:19:29.980 [2024-12-16 12:40:55.845143] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:29.980 [2024-12-16 12:40:55.846155] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:29.980 [2024-12-16 12:40:55.846165] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fa94c21a000 00:19:29.980 [2024-12-16 12:40:55.847077] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:29.980 [2024-12-16 12:40:55.859526] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:19:29.980 [2024-12-16 12:40:55.859549] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:19:29.980 [2024-12-16 12:40:55.864269] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:19:29.980 [2024-12-16 12:40:55.864304] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:19:29.980 [2024-12-16 12:40:55.864374] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:19:29.980 [2024-12-16 12:40:55.864392] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:19:29.980 [2024-12-16 12:40:55.864397] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:19:29.980 [2024-12-16 12:40:55.865269] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:19:29.980 [2024-12-16 12:40:55.865278] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:19:29.980 [2024-12-16 12:40:55.865284] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:19:29.980 [2024-12-16 12:40:55.866272] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:19:29.980 [2024-12-16 12:40:55.866283] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:19:29.980 [2024-12-16 12:40:55.866289] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:19:29.980 [2024-12-16 12:40:55.867282] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:19:29.980 [2024-12-16 12:40:55.867289] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:29.980 [2024-12-16 12:40:55.868287] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:19:29.980 [2024-12-16 
12:40:55.868295] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:19:29.980 [2024-12-16 12:40:55.868299] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:19:29.980 [2024-12-16 12:40:55.868305] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:29.980 [2024-12-16 12:40:55.868410] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:19:29.980 [2024-12-16 12:40:55.868414] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:29.980 [2024-12-16 12:40:55.868419] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:19:29.980 [2024-12-16 12:40:55.869292] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:19:29.980 [2024-12-16 12:40:55.870294] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:19:29.980 [2024-12-16 12:40:55.871303] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:19:29.980 [2024-12-16 12:40:55.872303] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:29.980 [2024-12-16 12:40:55.872387] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:29.980 [2024-12-16 12:40:55.873314] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:19:29.980 [2024-12-16 12:40:55.873320] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:29.980 [2024-12-16 12:40:55.873324] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:19:29.980 [2024-12-16 12:40:55.873341] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:19:29.980 [2024-12-16 12:40:55.873352] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:19:29.980 [2024-12-16 12:40:55.873366] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:29.980 [2024-12-16 12:40:55.873371] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:29.980 [2024-12-16 12:40:55.873374] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:29.980 [2024-12-16 12:40:55.873387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:29.980 [2024-12-16 12:40:55.873431] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:19:29.980 [2024-12-16 12:40:55.873440] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:19:29.980 [2024-12-16 12:40:55.873444] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:19:29.980 [2024-12-16 12:40:55.873448] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:19:29.980 [2024-12-16 12:40:55.873452] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:19:29.981 [2024-12-16 12:40:55.873457] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:19:29.981 [2024-12-16 12:40:55.873461] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:19:29.981 [2024-12-16 12:40:55.873465] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:19:29.981 [2024-12-16 12:40:55.873472] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:19:29.981 [2024-12-16 12:40:55.873480] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:19:29.981 [2024-12-16 12:40:55.873490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:19:29.981 [2024-12-16 12:40:55.873500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:29.981 [2024-12-16 12:40:55.873507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:29.981 [2024-12-16 12:40:55.873514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:29.981 [2024-12-16 12:40:55.873522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:29.981 [2024-12-16 12:40:55.873526] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:19:29.981 [2024-12-16 12:40:55.873534] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:29.981 [2024-12-16 12:40:55.873542] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:19:29.981 [2024-12-16 12:40:55.873550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:19:29.981 [2024-12-16 12:40:55.873555] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:19:29.981 [2024-12-16 12:40:55.873559] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:19:29.981 [2024-12-16 12:40:55.873565] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:19:29.981 [2024-12-16 12:40:55.873573] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:19:29.981 [2024-12-16 12:40:55.873580] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:29.981 [2024-12-16 12:40:55.873591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:19:29.981 [2024-12-16 12:40:55.873643] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:19:29.981 [2024-12-16 12:40:55.873651] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:19:29.981 [2024-12-16 12:40:55.873657] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:19:29.981 [2024-12-16 12:40:55.873661] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:19:29.981 [2024-12-16 12:40:55.873664] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:29.981 [2024-12-16 12:40:55.873670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:19:29.981 [2024-12-16 12:40:55.873682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:19:29.981 [2024-12-16 12:40:55.873691] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:19:29.981 [2024-12-16 12:40:55.873698] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:19:29.981 [2024-12-16 12:40:55.873706] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:19:29.981 [2024-12-16 12:40:55.873711] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:29.981 [2024-12-16 12:40:55.873715] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:29.981 [2024-12-16 12:40:55.873718] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:29.981 [2024-12-16 12:40:55.873724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:29.981 [2024-12-16 12:40:55.873745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:19:29.981 [2024-12-16 12:40:55.873756] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:29.981 [2024-12-16 12:40:55.873763] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:19:29.981 [2024-12-16 12:40:55.873769] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:29.981 [2024-12-16 12:40:55.873773] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:29.981 [2024-12-16 12:40:55.873776] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:29.981 [2024-12-16 12:40:55.873781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:29.981 [2024-12-16 12:40:55.873792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:19:29.981 [2024-12-16 12:40:55.873799] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:29.981 [2024-12-16 12:40:55.873806] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:19:29.981 [2024-12-16 12:40:55.873812] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:19:29.981 [2024-12-16 12:40:55.873818] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:19:29.981 [2024-12-16 12:40:55.873823] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:29.981 [2024-12-16 12:40:55.873828] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:19:29.981 [2024-12-16 12:40:55.873833] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:19:29.981 [2024-12-16 12:40:55.873837] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:19:29.981 [2024-12-16 12:40:55.873841] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:19:29.981 [2024-12-16 12:40:55.873857] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:19:29.981 [2024-12-16 12:40:55.873864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:19:29.981 [2024-12-16 12:40:55.873874] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:19:29.981 [2024-12-16 12:40:55.873882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:19:29.981 [2024-12-16 12:40:55.873891] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:19:29.981 [2024-12-16 12:40:55.873903] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:19:29.981 [2024-12-16 12:40:55.873913] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:29.981 [2024-12-16 12:40:55.873925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:19:29.981 [2024-12-16 12:40:55.873937] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:19:29.981 [2024-12-16 12:40:55.873941] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:19:29.981 [2024-12-16 12:40:55.873944] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:19:29.981 [2024-12-16 12:40:55.873947] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:19:29.981 [2024-12-16 12:40:55.873950] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:19:29.981 [2024-12-16 12:40:55.873956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:19:29.981 [2024-12-16 12:40:55.873962] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:19:29.981 [2024-12-16 12:40:55.873966] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:19:29.981 [2024-12-16 12:40:55.873969] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:29.981 [2024-12-16 12:40:55.873974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:19:29.981 [2024-12-16 12:40:55.873980] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:19:29.981 [2024-12-16 12:40:55.873984] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:29.981 [2024-12-16 12:40:55.873987] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:29.981 [2024-12-16 12:40:55.873992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:29.981 [2024-12-16 12:40:55.873999] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:19:29.981 [2024-12-16 12:40:55.874004] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:19:29.981 [2024-12-16 12:40:55.874007] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:29.981 [2024-12-16 12:40:55.874013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:19:29.981 [2024-12-16 12:40:55.874019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:19:29.981 [2024-12-16 12:40:55.874029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:19:29.981 [2024-12-16 12:40:55.874039] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:19:29.981 [2024-12-16 12:40:55.874045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:19:29.981 ===================================================== 00:19:29.981 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:29.981 ===================================================== 00:19:29.981 Controller Capabilities/Features 00:19:29.981 ================================ 00:19:29.981 Vendor ID: 4e58 00:19:29.981 Subsystem Vendor ID: 4e58 00:19:29.981 Serial Number: SPDK1 00:19:29.982 Model Number: SPDK bdev Controller 00:19:29.982 Firmware Version: 24.09.1 00:19:29.982 Recommended Arb Burst: 6 00:19:29.982 IEEE OUI Identifier: 8d 6b 50 00:19:29.982 Multi-path I/O 00:19:29.982 May have multiple subsystem ports: Yes 00:19:29.982 May have multiple controllers: Yes 00:19:29.982 Associated with SR-IOV VF: No 00:19:29.982 Max Data Transfer Size: 131072 00:19:29.982 Max Number of Namespaces: 32 00:19:29.982 Max Number of I/O Queues: 127 00:19:29.982 NVMe Specification Version (VS): 1.3 00:19:29.982 NVMe Specification Version (Identify): 1.3 00:19:29.982 Maximum Queue Entries: 256 00:19:29.982 Contiguous Queues Required: Yes 00:19:29.982 Arbitration Mechanisms Supported 00:19:29.982 Weighted Round Robin: Not Supported 00:19:29.982 Vendor Specific: Not Supported 00:19:29.982 Reset Timeout: 15000 ms 00:19:29.982 Doorbell Stride: 4 bytes 00:19:29.982 NVM Subsystem Reset: Not Supported 00:19:29.982 Command Sets Supported 00:19:29.982 NVM Command Set: Supported 00:19:29.982 Boot Partition: Not Supported 00:19:29.982 Memory Page Size Minimum: 4096 bytes 00:19:29.982 Memory Page Size Maximum: 4096 bytes 00:19:29.982 Persistent Memory Region: Not Supported 00:19:29.982 Optional Asynchronous Events Supported 00:19:29.982 Namespace Attribute Notices: Supported 00:19:29.982 Firmware Activation Notices: Not Supported 00:19:29.982 ANA Change Notices: Not Supported 00:19:29.982 PLE Aggregate Log Change Notices: Not Supported 00:19:29.982 LBA Status Info Alert Notices: Not Supported 00:19:29.982 EGE Aggregate Log Change Notices: Not Supported 00:19:29.982 Normal NVM Subsystem Shutdown event: Not Supported 00:19:29.982 Zone Descriptor Change Notices: Not Supported 00:19:29.982 Discovery Log Change Notices: Not Supported 00:19:29.982 Controller Attributes 00:19:29.982 128-bit Host Identifier: Supported 00:19:29.982 Non-Operational Permissive Mode: Not Supported 00:19:29.982 NVM Sets: Not Supported 00:19:29.982 Read Recovery Levels: Not Supported 00:19:29.982 Endurance Groups: Not Supported 00:19:29.982 Predictable Latency Mode: Not Supported 00:19:29.982 Traffic Based Keep ALive: Not Supported 00:19:29.982 Namespace Granularity: Not Supported 00:19:29.982 SQ Associations: Not Supported 00:19:29.982 UUID List: Not Supported 00:19:29.982 Multi-Domain Subsystem: Not Supported 00:19:29.982 Fixed Capacity Management: Not Supported 00:19:29.982 Variable Capacity Management: Not Supported 00:19:29.982 Delete Endurance Group: Not Supported 00:19:29.982 Delete NVM Set: Not Supported 00:19:29.982 Extended LBA Formats Supported: Not Supported 00:19:29.982 Flexible Data Placement Supported: Not Supported 00:19:29.982 00:19:29.982 Controller Memory Buffer Support 00:19:29.982 ================================ 00:19:29.982 Supported: No 00:19:29.982 00:19:29.982 Persistent Memory Region Support 
00:19:29.982 ================================ 00:19:29.982 Supported: No 00:19:29.982 00:19:29.982 Admin Command Set Attributes 00:19:29.982 ============================ 00:19:29.982 Security Send/Receive: Not Supported 00:19:29.982 Format NVM: Not Supported 00:19:29.982 Firmware Activate/Download: Not Supported 00:19:29.982 Namespace Management: Not Supported 00:19:29.982 Device Self-Test: Not Supported 00:19:29.982 Directives: Not Supported 00:19:29.982 NVMe-MI: Not Supported 00:19:29.982 Virtualization Management: Not Supported 00:19:29.982 Doorbell Buffer Config: Not Supported 00:19:29.982 Get LBA Status Capability: Not Supported 00:19:29.982 Command & Feature Lockdown Capability: Not Supported 00:19:29.982 Abort Command Limit: 4 00:19:29.982 Async Event Request Limit: 4 00:19:29.982 Number of Firmware Slots: N/A 00:19:29.982 Firmware Slot 1 Read-Only: N/A 00:19:29.982 Firmware Activation Without Reset: N/A 00:19:29.982 Multiple Update Detection Support: N/A 00:19:29.982 Firmware Update Granularity: No Information Provided 00:19:29.982 Per-Namespace SMART Log: No 00:19:29.982 Asymmetric Namespace Access Log Page: Not Supported 00:19:29.982 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:19:29.982 Command Effects Log Page: Supported 00:19:29.982 Get Log Page Extended Data: Supported 00:19:29.982 Telemetry Log Pages: Not Supported 00:19:29.982 Persistent Event Log Pages: Not Supported 00:19:29.982 Supported Log Pages Log Page: May Support 00:19:29.982 Commands Supported & Effects Log Page: Not Supported 00:19:29.982 Feature Identifiers & Effects Log Page:May Support 00:19:29.982 NVMe-MI Commands & Effects Log Page: May Support 00:19:29.982 Data Area 4 for Telemetry Log: Not Supported 00:19:29.982 Error Log Page Entries Supported: 128 00:19:29.982 Keep Alive: Supported 00:19:29.982 Keep Alive Granularity: 10000 ms 00:19:29.982 00:19:29.982 NVM Command Set Attributes 00:19:29.982 ========================== 00:19:29.982 Submission Queue Entry Size 00:19:29.982 Max: 64 00:19:29.982 Min: 64 00:19:29.982 Completion Queue Entry Size 00:19:29.982 Max: 16 00:19:29.982 Min: 16 00:19:29.982 Number of Namespaces: 32 00:19:29.982 Compare Command: Supported 00:19:29.982 Write Uncorrectable Command: Not Supported 00:19:29.982 Dataset Management Command: Supported 00:19:29.982 Write Zeroes Command: Supported 00:19:29.982 Set Features Save Field: Not Supported 00:19:29.982 Reservations: Not Supported 00:19:29.982 Timestamp: Not Supported 00:19:29.982 Copy: Supported 00:19:29.982 Volatile Write Cache: Present 00:19:29.982 Atomic Write Unit (Normal): 1 00:19:29.982 Atomic Write Unit (PFail): 1 00:19:29.982 Atomic Compare & Write Unit: 1 00:19:29.982 Fused Compare & Write: Supported 00:19:29.982 Scatter-Gather List 00:19:29.982 SGL Command Set: Supported (Dword aligned) 00:19:29.982 SGL Keyed: Not Supported 00:19:29.982 SGL Bit Bucket Descriptor: Not Supported 00:19:29.982 SGL Metadata Pointer: Not Supported 00:19:29.982 Oversized SGL: Not Supported 00:19:29.982 SGL Metadata Address: Not Supported 00:19:29.982 SGL Offset: Not Supported 00:19:29.982 Transport SGL Data Block: Not Supported 00:19:29.982 Replay Protected Memory Block: Not Supported 00:19:29.982 00:19:29.982 Firmware Slot Information 00:19:29.982 ========================= 00:19:29.982 Active slot: 1 00:19:29.982 Slot 1 Firmware Revision: 24.09.1 00:19:29.982 00:19:29.982 00:19:29.982 Commands Supported and Effects 00:19:29.982 ============================== 00:19:29.982 Admin Commands 00:19:29.982 -------------- 00:19:29.982 Get Log Page (02h): 
Supported 00:19:29.982 Identify (06h): Supported 00:19:29.982 Abort (08h): Supported 00:19:29.982 Set Features (09h): Supported 00:19:29.982 Get Features (0Ah): Supported 00:19:29.982 Asynchronous Event Request (0Ch): Supported 00:19:29.982 Keep Alive (18h): Supported 00:19:29.982 I/O Commands 00:19:29.982 ------------ 00:19:29.982 Flush (00h): Supported LBA-Change 00:19:29.982 Write (01h): Supported LBA-Change 00:19:29.982 Read (02h): Supported 00:19:29.982 Compare (05h): Supported 00:19:29.982 Write Zeroes (08h): Supported LBA-Change 00:19:29.982 Dataset Management (09h): Supported LBA-Change 00:19:29.982 Copy (19h): Supported LBA-Change 00:19:29.982 00:19:29.982 Error Log 00:19:29.982 ========= 00:19:29.982 00:19:29.982 Arbitration 00:19:29.982 =========== 00:19:29.982 Arbitration Burst: 1 00:19:29.982 00:19:29.982 Power Management 00:19:29.982 ================ 00:19:29.982 Number of Power States: 1 00:19:29.982 Current Power State: Power State #0 00:19:29.982 Power State #0: 00:19:29.982 Max Power: 0.00 W 00:19:29.982 Non-Operational State: Operational 00:19:29.982 Entry Latency: Not Reported 00:19:29.982 Exit Latency: Not Reported 00:19:29.982 Relative Read Throughput: 0 00:19:29.982 Relative Read Latency: 0 00:19:29.982 Relative Write Throughput: 0 00:19:29.982 Relative Write Latency: 0 00:19:29.982 Idle Power: Not Reported 00:19:29.982 Active Power: Not Reported 00:19:29.982 Non-Operational Permissive Mode: Not Supported 00:19:29.982 00:19:29.982 Health Information 00:19:29.982 ================== 00:19:29.982 Critical Warnings: 00:19:29.982 Available Spare Space: OK 00:19:29.982 Temperature: OK 00:19:29.982 Device Reliability: OK 00:19:29.982 Read Only: No 00:19:29.982 Volatile Memory Backup: OK 00:19:29.982 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:29.982 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:19:29.982 Available Spare: 0% 00:19:29.982 Availabl[2024-12-16 12:40:55.874138] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:19:29.982 [2024-12-16 12:40:55.874145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:19:29.982 [2024-12-16 12:40:55.874171] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:19:29.982 [2024-12-16 12:40:55.874180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.982 [2024-12-16 12:40:55.874186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.982 [2024-12-16 12:40:55.874191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.982 [2024-12-16 12:40:55.874197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.983 [2024-12-16 12:40:55.874322] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:19:29.983 [2024-12-16 12:40:55.874333] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:19:29.983 [2024-12-16 12:40:55.875318] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling 
controller 00:19:29.983 [2024-12-16 12:40:55.875365] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:19:29.983 [2024-12-16 12:40:55.875371] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:19:29.983 [2024-12-16 12:40:55.876328] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:19:29.983 [2024-12-16 12:40:55.876337] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:19:29.983 [2024-12-16 12:40:55.876391] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:19:29.983 [2024-12-16 12:40:55.877351] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:29.983 e Spare Threshold: 0% 00:19:29.983 Life Percentage Used: 0% 00:19:29.983 Data Units Read: 0 00:19:29.983 Data Units Written: 0 00:19:29.983 Host Read Commands: 0 00:19:29.983 Host Write Commands: 0 00:19:29.983 Controller Busy Time: 0 minutes 00:19:29.983 Power Cycles: 0 00:19:29.983 Power On Hours: 0 hours 00:19:29.983 Unsafe Shutdowns: 0 00:19:29.983 Unrecoverable Media Errors: 0 00:19:29.983 Lifetime Error Log Entries: 0 00:19:29.983 Warning Temperature Time: 0 minutes 00:19:29.983 Critical Temperature Time: 0 minutes 00:19:29.983 00:19:29.983 Number of Queues 00:19:29.983 ================ 00:19:29.983 Number of I/O Submission Queues: 127 00:19:29.983 Number of I/O Completion Queues: 127 00:19:29.983 00:19:29.983 Active Namespaces 00:19:29.983 ================= 00:19:29.983 Namespace ID:1 00:19:29.983 Error Recovery Timeout: Unlimited 00:19:29.983 Command Set Identifier: NVM (00h) 00:19:29.983 Deallocate: Supported 00:19:29.983 Deallocated/Unwritten Error: Not Supported 00:19:29.983 Deallocated Read Value: Unknown 00:19:29.983 Deallocate in Write Zeroes: Not Supported 00:19:29.983 Deallocated Guard Field: 0xFFFF 00:19:29.983 Flush: Supported 00:19:29.983 Reservation: Supported 00:19:29.983 Namespace Sharing Capabilities: Multiple Controllers 00:19:29.983 Size (in LBAs): 131072 (0GiB) 00:19:29.983 Capacity (in LBAs): 131072 (0GiB) 00:19:29.983 Utilization (in LBAs): 131072 (0GiB) 00:19:29.983 NGUID: 6E6B4C4A28AF490D8A0D375F863E0A8C 00:19:29.983 UUID: 6e6b4c4a-28af-490d-8a0d-375f863e0a8c 00:19:29.983 Thin Provisioning: Not Supported 00:19:29.983 Per-NS Atomic Units: Yes 00:19:29.983 Atomic Boundary Size (Normal): 0 00:19:29.983 Atomic Boundary Size (PFail): 0 00:19:29.983 Atomic Boundary Offset: 0 00:19:29.983 Maximum Single Source Range Length: 65535 00:19:29.983 Maximum Copy Length: 65535 00:19:29.983 Maximum Source Range Count: 1 00:19:29.983 NGUID/EUI64 Never Reused: No 00:19:29.983 Namespace Write Protected: No 00:19:29.983 Number of LBA Formats: 1 00:19:29.983 Current LBA Format: LBA Format #00 00:19:29.983 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:29.983 00:19:29.983 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:19:30.242 [2024-12-16 12:40:56.094267] vfio_user.c:2836:enable_ctrlr: *NOTICE*: 
/var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:35.518 Initializing NVMe Controllers 00:19:35.518 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:35.518 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:19:35.518 Initialization complete. Launching workers. 00:19:35.518 ======================================================== 00:19:35.518 Latency(us) 00:19:35.518 Device Information : IOPS MiB/s Average min max 00:19:35.518 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39949.00 156.05 3204.61 929.47 7640.03 00:19:35.518 ======================================================== 00:19:35.518 Total : 39949.00 156.05 3204.61 929.47 7640.03 00:19:35.518 00:19:35.518 [2024-12-16 12:41:01.116071] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:35.518 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:19:35.518 [2024-12-16 12:41:01.337104] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:40.793 Initializing NVMe Controllers 00:19:40.793 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:40.793 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:19:40.793 Initialization complete. Launching workers. 00:19:40.793 ======================================================== 00:19:40.793 Latency(us) 00:19:40.793 Device Information : IOPS MiB/s Average min max 00:19:40.793 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16032.70 62.63 7983.04 6010.54 15513.44 00:19:40.793 ======================================================== 00:19:40.793 Total : 16032.70 62.63 7983.04 6010.54 15513.44 00:19:40.793 00:19:40.793 [2024-12-16 12:41:06.372935] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:40.793 12:41:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:19:40.793 [2024-12-16 12:41:06.565847] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:46.069 [2024-12-16 12:41:11.634390] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:46.069 Initializing NVMe Controllers 00:19:46.069 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:46.069 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:46.069 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:19:46.069 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:19:46.069 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:19:46.069 Initialization complete. Launching workers. 
00:19:46.069 Starting thread on core 2 00:19:46.069 Starting thread on core 3 00:19:46.069 Starting thread on core 1 00:19:46.069 12:41:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:19:46.069 [2024-12-16 12:41:11.907351] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:49.360 [2024-12-16 12:41:14.970458] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:49.360 Initializing NVMe Controllers 00:19:49.360 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:49.360 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:49.360 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:19:49.360 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:19:49.360 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:19:49.360 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:19:49.360 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:19:49.360 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:19:49.360 Initialization complete. Launching workers. 00:19:49.360 Starting thread on core 1 with urgent priority queue 00:19:49.360 Starting thread on core 2 with urgent priority queue 00:19:49.360 Starting thread on core 3 with urgent priority queue 00:19:49.360 Starting thread on core 0 with urgent priority queue 00:19:49.360 SPDK bdev Controller (SPDK1 ) core 0: 8079.67 IO/s 12.38 secs/100000 ios 00:19:49.360 SPDK bdev Controller (SPDK1 ) core 1: 8352.33 IO/s 11.97 secs/100000 ios 00:19:49.360 SPDK bdev Controller (SPDK1 ) core 2: 6362.33 IO/s 15.72 secs/100000 ios 00:19:49.360 SPDK bdev Controller (SPDK1 ) core 3: 7875.33 IO/s 12.70 secs/100000 ios 00:19:49.360 ======================================================== 00:19:49.360 00:19:49.360 12:41:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:19:49.360 [2024-12-16 12:41:15.239258] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:49.361 Initializing NVMe Controllers 00:19:49.361 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:49.361 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:49.361 Namespace ID: 1 size: 0GB 00:19:49.361 Initialization complete. 00:19:49.361 INFO: using host memory buffer for IO 00:19:49.361 Hello world! 
00:19:49.361 [2024-12-16 12:41:15.274484] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:49.361 12:41:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:19:49.621 [2024-12-16 12:41:15.536319] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:50.559 Initializing NVMe Controllers 00:19:50.559 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:50.559 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:50.559 Initialization complete. Launching workers. 00:19:50.559 submit (in ns) avg, min, max = 6968.9, 3181.0, 3998950.5 00:19:50.559 complete (in ns) avg, min, max = 18748.2, 1720.0, 3997977.1 00:19:50.559 00:19:50.559 Submit histogram 00:19:50.559 ================ 00:19:50.559 Range in us Cumulative Count 00:19:50.559 3.170 - 3.185: 0.0060% ( 1) 00:19:50.559 3.185 - 3.200: 0.0120% ( 1) 00:19:50.559 3.200 - 3.215: 0.1076% ( 16) 00:19:50.559 3.215 - 3.230: 0.8251% ( 120) 00:19:50.559 3.230 - 3.246: 3.8625% ( 508) 00:19:50.559 3.246 - 3.261: 9.3513% ( 918) 00:19:50.559 3.261 - 3.276: 14.9058% ( 929) 00:19:50.559 3.276 - 3.291: 21.2078% ( 1054) 00:19:50.559 3.291 - 3.307: 28.3468% ( 1194) 00:19:50.559 3.307 - 3.322: 34.7504% ( 1071) 00:19:50.559 3.322 - 3.337: 40.8430% ( 1019) 00:19:50.559 3.337 - 3.352: 46.8281% ( 1001) 00:19:50.559 3.352 - 3.368: 52.5142% ( 951) 00:19:50.559 3.368 - 3.383: 57.8296% ( 889) 00:19:50.559 3.383 - 3.398: 65.2138% ( 1235) 00:19:50.559 3.398 - 3.413: 71.1031% ( 985) 00:19:50.559 3.413 - 3.429: 76.4484% ( 894) 00:19:50.559 3.429 - 3.444: 81.2676% ( 806) 00:19:50.559 3.444 - 3.459: 84.4604% ( 534) 00:19:50.559 3.459 - 3.474: 86.6368% ( 364) 00:19:50.559 3.474 - 3.490: 87.6413% ( 168) 00:19:50.559 3.490 - 3.505: 88.0897% ( 75) 00:19:50.559 3.505 - 3.520: 88.4365% ( 58) 00:19:50.559 3.520 - 3.535: 88.8909% ( 76) 00:19:50.559 3.535 - 3.550: 89.4589% ( 95) 00:19:50.559 3.550 - 3.566: 90.2242% ( 128) 00:19:50.559 3.566 - 3.581: 91.2407% ( 170) 00:19:50.559 3.581 - 3.596: 92.2451% ( 168) 00:19:50.559 3.596 - 3.611: 93.0105% ( 128) 00:19:50.559 3.611 - 3.627: 93.9253% ( 153) 00:19:50.559 3.627 - 3.642: 94.9477% ( 171) 00:19:50.559 3.642 - 3.657: 95.9761% ( 172) 00:19:50.559 3.657 - 3.672: 96.7534% ( 130) 00:19:50.559 3.672 - 3.688: 97.3692% ( 103) 00:19:50.559 3.688 - 3.703: 97.9970% ( 105) 00:19:50.559 3.703 - 3.718: 98.4873% ( 82) 00:19:50.559 3.718 - 3.733: 98.7982% ( 52) 00:19:50.559 3.733 - 3.749: 99.0613% ( 44) 00:19:50.559 3.749 - 3.764: 99.2586% ( 33) 00:19:50.559 3.764 - 3.779: 99.4200% ( 27) 00:19:50.559 3.779 - 3.794: 99.4978% ( 13) 00:19:50.559 3.794 - 3.810: 99.5635% ( 11) 00:19:50.559 3.810 - 3.825: 99.6054% ( 7) 00:19:50.559 3.825 - 3.840: 99.6114% ( 1) 00:19:50.559 3.840 - 3.855: 99.6173% ( 1) 00:19:50.559 3.855 - 3.870: 99.6233% ( 1) 00:19:50.559 3.870 - 3.886: 99.6293% ( 1) 00:19:50.559 3.886 - 3.901: 99.6353% ( 1) 00:19:50.559 3.901 - 3.931: 99.6413% ( 1) 00:19:50.559 4.663 - 4.693: 99.6472% ( 1) 00:19:50.559 4.785 - 4.815: 99.6532% ( 1) 00:19:50.559 4.846 - 4.876: 99.6592% ( 1) 00:19:50.559 4.907 - 4.937: 99.6652% ( 1) 00:19:50.559 4.937 - 4.968: 99.6712% ( 1) 00:19:50.559 4.968 - 4.998: 99.6771% ( 1) 00:19:50.559 4.998 - 5.029: 99.6831% ( 1) 00:19:50.559 5.059 - 5.090: 99.6951% ( 2) 00:19:50.559 
5.150 - 5.181: 99.7010% ( 1) 00:19:50.559 5.181 - 5.211: 99.7070% ( 1) 00:19:50.559 5.303 - 5.333: 99.7130% ( 1) 00:19:50.559 5.364 - 5.394: 99.7190% ( 1) 00:19:50.559 5.394 - 5.425: 99.7309% ( 2) 00:19:50.559 5.425 - 5.455: 99.7429% ( 2) 00:19:50.559 5.455 - 5.486: 99.7489% ( 1) 00:19:50.559 5.486 - 5.516: 99.7608% ( 2) 00:19:50.559 5.577 - 5.608: 99.7668% ( 1) 00:19:50.559 5.638 - 5.669: 99.7728% ( 1) 00:19:50.559 5.882 - 5.912: 99.7848% ( 2) 00:19:50.559 5.943 - 5.973: 99.7907% ( 1) 00:19:50.559 6.095 - 6.126: 99.7967% ( 1) 00:19:50.559 6.156 - 6.187: 99.8087% ( 2) 00:19:50.559 6.187 - 6.217: 99.8146% ( 1) 00:19:50.559 6.217 - 6.248: 99.8206% ( 1) 00:19:50.559 6.248 - 6.278: 99.8326% ( 2) 00:19:50.559 6.461 - 6.491: 99.8386% ( 1) 00:19:50.559 6.522 - 6.552: 99.8445% ( 1) 00:19:50.559 6.583 - 6.613: 99.8505% ( 1) 00:19:50.559 6.613 - 6.644: 99.8565% ( 1) 00:19:50.559 6.827 - 6.857: 99.8625% ( 1) 00:19:50.559 [2024-12-16 12:41:16.560566] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:50.559 6.857 - 6.888: 99.8685% ( 1) 00:19:50.559 6.918 - 6.949: 99.8744% ( 1) 00:19:50.559 7.314 - 7.345: 99.8804% ( 1) 00:19:50.559 7.375 - 7.406: 99.8864% ( 1) 00:19:50.559 7.497 - 7.528: 99.8924% ( 1) 00:19:50.559 7.528 - 7.558: 99.8984% ( 1) 00:19:50.559 7.802 - 7.863: 99.9043% ( 1) 00:19:50.559 10.789 - 10.850: 99.9103% ( 1) 00:19:50.559 3994.575 - 4025.783: 100.0000% ( 15) 00:19:50.559 00:19:50.559 Complete histogram 00:19:50.559 ================== 00:19:50.559 Range in us Cumulative Count 00:19:50.559 1.714 - 1.722: 0.0120% ( 2) 00:19:50.559 1.730 - 1.737: 0.0179% ( 1) 00:19:50.559 1.737 - 1.745: 0.0239% ( 1) 00:19:50.559 1.745 - 1.752: 0.0359% ( 2) 00:19:50.559 1.752 - 1.760: 0.2631% ( 38) 00:19:50.559 1.760 - 1.768: 2.3019% ( 341) 00:19:50.559 1.768 - 1.775: 8.2750% ( 999) 00:19:50.559 1.775 - 1.783: 13.8894% ( 939) 00:19:50.559 1.783 - 1.790: 16.8789% ( 500) 00:19:50.559 1.790 - 1.798: 18.1166% ( 207) 00:19:50.559 1.798 - 1.806: 18.7444% ( 105) 00:19:50.559 1.806 - 1.813: 19.5336% ( 132) 00:19:50.559 1.813 - 1.821: 25.1241% ( 935) 00:19:50.559 1.821 - 1.829: 43.4798% ( 3070) 00:19:50.559 1.829 - 1.836: 69.6442% ( 4376) 00:19:50.559 1.836 - 1.844: 86.7205% ( 2856) 00:19:50.559 1.844 - 1.851: 93.1540% ( 1076) 00:19:50.559 1.851 - 1.859: 95.5516% ( 401) 00:19:50.559 1.859 - 1.867: 96.6577% ( 185) 00:19:50.559 1.867 - 1.874: 97.3812% ( 121) 00:19:50.559 1.874 - 1.882: 97.7339% ( 59) 00:19:50.559 1.882 - 1.890: 97.9312% ( 33) 00:19:50.559 1.890 - 1.897: 98.2302% ( 50) 00:19:50.559 1.897 - 1.905: 98.5531% ( 54) 00:19:50.559 1.905 - 1.912: 98.8401% ( 48) 00:19:50.560 1.912 - 1.920: 99.0553% ( 36) 00:19:50.560 1.920 - 1.928: 99.1689% ( 19) 00:19:50.560 1.928 - 1.935: 99.2407% ( 12) 00:19:50.560 1.935 - 1.943: 99.2765% ( 6) 00:19:50.560 1.943 - 1.950: 99.3184% ( 7) 00:19:50.560 1.950 - 1.966: 99.3901% ( 12) 00:19:50.560 1.966 - 1.981: 99.4081% ( 3) 00:19:50.560 1.996 - 2.011: 99.4141% ( 1) 00:19:50.560 2.011 - 2.027: 99.4200% ( 1) 00:19:50.560 2.042 - 2.057: 99.4320% ( 2) 00:19:50.560 3.413 - 3.429: 99.4380% ( 1) 00:19:50.560 3.581 - 3.596: 99.4439% ( 1) 00:19:50.560 3.611 - 3.627: 99.4499% ( 1) 00:19:50.560 3.688 - 3.703: 99.4559% ( 1) 00:19:50.560 3.794 - 3.810: 99.4619% ( 1) 00:19:50.560 4.175 - 4.206: 99.4679% ( 1) 00:19:50.560 4.541 - 4.571: 99.4738% ( 1) 00:19:50.560 4.693 - 4.724: 99.4798% ( 1) 00:19:50.560 4.876 - 4.907: 99.4858% ( 1) 00:19:50.560 4.907 - 4.937: 99.4918% ( 1) 00:19:50.560 4.968 - 4.998: 99.4978% ( 1) 00:19:50.560 4.998 - 
5.029: 99.5037% ( 1) 00:19:50.560 5.090 - 5.120: 99.5097% ( 1) 00:19:50.560 5.120 - 5.150: 99.5157% ( 1) 00:19:50.560 5.425 - 5.455: 99.5217% ( 1) 00:19:50.560 5.699 - 5.730: 99.5277% ( 1) 00:19:50.560 5.912 - 5.943: 99.5336% ( 1) 00:19:50.560 6.217 - 6.248: 99.5396% ( 1) 00:19:50.560 7.192 - 7.223: 99.5456% ( 1) 00:19:50.560 8.290 - 8.350: 99.5516% ( 1) 00:19:50.560 17.189 - 17.310: 99.5575% ( 1) 00:19:50.560 17.676 - 17.798: 99.5635% ( 1) 00:19:50.560 44.861 - 45.105: 99.5695% ( 1) 00:19:50.560 936.229 - 940.130: 99.5755% ( 1) 00:19:50.560 2402.987 - 2418.590: 99.5815% ( 1) 00:19:50.560 3978.971 - 3994.575: 99.5994% ( 3) 00:19:50.560 3994.575 - 4025.783: 100.0000% ( 67) 00:19:50.560 00:19:50.560 12:41:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:19:50.560 12:41:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:19:50.560 12:41:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:19:50.560 12:41:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:19:50.560 12:41:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:50.819 [ 00:19:50.819 { 00:19:50.819 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:50.819 "subtype": "Discovery", 00:19:50.819 "listen_addresses": [], 00:19:50.819 "allow_any_host": true, 00:19:50.819 "hosts": [] 00:19:50.819 }, 00:19:50.819 { 00:19:50.819 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:50.819 "subtype": "NVMe", 00:19:50.819 "listen_addresses": [ 00:19:50.819 { 00:19:50.819 "trtype": "VFIOUSER", 00:19:50.819 "adrfam": "IPv4", 00:19:50.819 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:50.819 "trsvcid": "0" 00:19:50.819 } 00:19:50.819 ], 00:19:50.819 "allow_any_host": true, 00:19:50.819 "hosts": [], 00:19:50.819 "serial_number": "SPDK1", 00:19:50.819 "model_number": "SPDK bdev Controller", 00:19:50.819 "max_namespaces": 32, 00:19:50.819 "min_cntlid": 1, 00:19:50.819 "max_cntlid": 65519, 00:19:50.819 "namespaces": [ 00:19:50.819 { 00:19:50.819 "nsid": 1, 00:19:50.819 "bdev_name": "Malloc1", 00:19:50.819 "name": "Malloc1", 00:19:50.819 "nguid": "6E6B4C4A28AF490D8A0D375F863E0A8C", 00:19:50.819 "uuid": "6e6b4c4a-28af-490d-8a0d-375f863e0a8c" 00:19:50.819 } 00:19:50.819 ] 00:19:50.819 }, 00:19:50.819 { 00:19:50.819 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:50.819 "subtype": "NVMe", 00:19:50.819 "listen_addresses": [ 00:19:50.819 { 00:19:50.819 "trtype": "VFIOUSER", 00:19:50.819 "adrfam": "IPv4", 00:19:50.819 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:50.819 "trsvcid": "0" 00:19:50.819 } 00:19:50.819 ], 00:19:50.820 "allow_any_host": true, 00:19:50.820 "hosts": [], 00:19:50.820 "serial_number": "SPDK2", 00:19:50.820 "model_number": "SPDK bdev Controller", 00:19:50.820 "max_namespaces": 32, 00:19:50.820 "min_cntlid": 1, 00:19:50.820 "max_cntlid": 65519, 00:19:50.820 "namespaces": [ 00:19:50.820 { 00:19:50.820 "nsid": 1, 00:19:50.820 "bdev_name": "Malloc2", 00:19:50.820 "name": "Malloc2", 00:19:50.820 "nguid": "8E010C25D6D74DA085DCB698CE76A1BA", 00:19:50.820 "uuid": "8e010c25-d6d7-4da0-85dc-b698ce76a1ba" 00:19:50.820 } 00:19:50.820 ] 00:19:50.820 } 00:19:50.820 ] 00:19:50.820 12:41:16 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:50.820 12:41:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=348707 00:19:50.820 12:41:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:19:50.820 12:41:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:19:50.820 12:41:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:19:50.820 12:41:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:50.820 12:41:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:19:50.820 12:41:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=1 00:19:50.820 12:41:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:19:51.079 12:41:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:51.079 12:41:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:19:51.079 12:41:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=2 00:19:51.079 12:41:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:19:51.079 [2024-12-16 12:41:16.948502] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:51.079 12:41:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:51.079 12:41:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:51.079 12:41:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:19:51.079 12:41:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:19:51.079 12:41:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:19:51.338 Malloc3 00:19:51.338 12:41:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:19:51.597 [2024-12-16 12:41:17.407919] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:51.597 12:41:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:51.597 Asynchronous Event Request test 00:19:51.597 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:51.597 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:51.597 Registering asynchronous event callbacks... 00:19:51.597 Starting namespace attribute notice tests for all controllers... 
00:19:51.597 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:51.597 aer_cb - Changed Namespace 00:19:51.597 Cleaning up... 00:19:51.597 [ 00:19:51.597 { 00:19:51.597 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:51.597 "subtype": "Discovery", 00:19:51.597 "listen_addresses": [], 00:19:51.597 "allow_any_host": true, 00:19:51.597 "hosts": [] 00:19:51.597 }, 00:19:51.597 { 00:19:51.597 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:51.597 "subtype": "NVMe", 00:19:51.597 "listen_addresses": [ 00:19:51.597 { 00:19:51.597 "trtype": "VFIOUSER", 00:19:51.597 "adrfam": "IPv4", 00:19:51.597 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:51.597 "trsvcid": "0" 00:19:51.597 } 00:19:51.597 ], 00:19:51.597 "allow_any_host": true, 00:19:51.597 "hosts": [], 00:19:51.597 "serial_number": "SPDK1", 00:19:51.597 "model_number": "SPDK bdev Controller", 00:19:51.597 "max_namespaces": 32, 00:19:51.597 "min_cntlid": 1, 00:19:51.597 "max_cntlid": 65519, 00:19:51.597 "namespaces": [ 00:19:51.597 { 00:19:51.597 "nsid": 1, 00:19:51.597 "bdev_name": "Malloc1", 00:19:51.597 "name": "Malloc1", 00:19:51.597 "nguid": "6E6B4C4A28AF490D8A0D375F863E0A8C", 00:19:51.597 "uuid": "6e6b4c4a-28af-490d-8a0d-375f863e0a8c" 00:19:51.597 }, 00:19:51.597 { 00:19:51.597 "nsid": 2, 00:19:51.597 "bdev_name": "Malloc3", 00:19:51.597 "name": "Malloc3", 00:19:51.597 "nguid": "530063F34FEC48559176A978ADBEF5AF", 00:19:51.597 "uuid": "530063f3-4fec-4855-9176-a978adbef5af" 00:19:51.597 } 00:19:51.597 ] 00:19:51.597 }, 00:19:51.597 { 00:19:51.597 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:51.597 "subtype": "NVMe", 00:19:51.597 "listen_addresses": [ 00:19:51.597 { 00:19:51.597 "trtype": "VFIOUSER", 00:19:51.597 "adrfam": "IPv4", 00:19:51.597 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:51.597 "trsvcid": "0" 00:19:51.597 } 00:19:51.597 ], 00:19:51.597 "allow_any_host": true, 00:19:51.597 "hosts": [], 00:19:51.597 "serial_number": "SPDK2", 00:19:51.597 "model_number": "SPDK bdev Controller", 00:19:51.597 "max_namespaces": 32, 00:19:51.597 "min_cntlid": 1, 00:19:51.597 "max_cntlid": 65519, 00:19:51.597 "namespaces": [ 00:19:51.597 { 00:19:51.597 "nsid": 1, 00:19:51.597 "bdev_name": "Malloc2", 00:19:51.597 "name": "Malloc2", 00:19:51.597 "nguid": "8E010C25D6D74DA085DCB698CE76A1BA", 00:19:51.597 "uuid": "8e010c25-d6d7-4da0-85dc-b698ce76a1ba" 00:19:51.597 } 00:19:51.597 ] 00:19:51.597 } 00:19:51.597 ] 00:19:51.597 12:41:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 348707 00:19:51.597 12:41:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:51.597 12:41:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:19:51.597 12:41:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:19:51.598 12:41:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:19:51.598 [2024-12-16 12:41:17.642429] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:19:51.598 [2024-12-16 12:41:17.642452] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid348932 ] 00:19:51.859 [2024-12-16 12:41:17.668267] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:19:51.859 [2024-12-16 12:41:17.671992] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:51.859 [2024-12-16 12:41:17.672012] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f3305dcf000 00:19:51.859 [2024-12-16 12:41:17.672994] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:51.859 [2024-12-16 12:41:17.674008] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:51.859 [2024-12-16 12:41:17.675020] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:51.859 [2024-12-16 12:41:17.676031] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:51.859 [2024-12-16 12:41:17.677041] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:51.859 [2024-12-16 12:41:17.678043] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:51.859 [2024-12-16 12:41:17.679053] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:51.859 [2024-12-16 12:41:17.680063] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:51.859 [2024-12-16 12:41:17.681068] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:51.859 [2024-12-16 12:41:17.681078] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f3304ad9000 00:19:51.859 [2024-12-16 12:41:17.681991] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:51.859 [2024-12-16 12:41:17.693348] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:19:51.859 [2024-12-16 12:41:17.693371] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:19:51.859 [2024-12-16 12:41:17.698447] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:19:51.859 [2024-12-16 12:41:17.698481] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:19:51.859 [2024-12-16 12:41:17.698547] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:19:51.859 [2024-12-16 
12:41:17.698563] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:19:51.859 [2024-12-16 12:41:17.698568] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:19:51.859 [2024-12-16 12:41:17.699460] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:19:51.859 [2024-12-16 12:41:17.699469] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:19:51.859 [2024-12-16 12:41:17.699475] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:19:51.859 [2024-12-16 12:41:17.700468] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:19:51.859 [2024-12-16 12:41:17.700479] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:19:51.859 [2024-12-16 12:41:17.700486] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:19:51.859 [2024-12-16 12:41:17.701477] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:19:51.859 [2024-12-16 12:41:17.701486] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:51.859 [2024-12-16 12:41:17.702480] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:19:51.859 [2024-12-16 12:41:17.702488] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:19:51.859 [2024-12-16 12:41:17.702493] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:19:51.859 [2024-12-16 12:41:17.702498] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:51.859 [2024-12-16 12:41:17.702603] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:19:51.859 [2024-12-16 12:41:17.702607] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:51.859 [2024-12-16 12:41:17.702612] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:19:51.859 [2024-12-16 12:41:17.703487] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:19:51.859 [2024-12-16 12:41:17.704491] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:19:51.859 [2024-12-16 12:41:17.705495] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: 
offset 0x14, value 0x460001 00:19:51.859 [2024-12-16 12:41:17.706494] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:51.859 [2024-12-16 12:41:17.706530] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:51.859 [2024-12-16 12:41:17.707507] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:19:51.859 [2024-12-16 12:41:17.707515] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:51.859 [2024-12-16 12:41:17.707519] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:19:51.859 [2024-12-16 12:41:17.707536] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:19:51.859 [2024-12-16 12:41:17.707546] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:19:51.859 [2024-12-16 12:41:17.707557] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:51.859 [2024-12-16 12:41:17.707562] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:51.859 [2024-12-16 12:41:17.707565] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:51.859 [2024-12-16 12:41:17.707576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:51.859 [2024-12-16 12:41:17.714120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:19:51.859 [2024-12-16 12:41:17.714131] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:19:51.859 [2024-12-16 12:41:17.714136] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:19:51.859 [2024-12-16 12:41:17.714140] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:19:51.859 [2024-12-16 12:41:17.714144] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:19:51.859 [2024-12-16 12:41:17.714148] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:19:51.859 [2024-12-16 12:41:17.714152] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:19:51.859 [2024-12-16 12:41:17.714156] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:19:51.859 [2024-12-16 12:41:17.714163] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:19:51.859 [2024-12-16 12:41:17.714172] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT 
CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:19:51.859 [2024-12-16 12:41:17.722121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:19:51.859 [2024-12-16 12:41:17.722132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:51.859 [2024-12-16 12:41:17.722139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:51.859 [2024-12-16 12:41:17.722147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:51.859 [2024-12-16 12:41:17.722154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:51.859 [2024-12-16 12:41:17.722158] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:19:51.859 [2024-12-16 12:41:17.722167] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:51.859 [2024-12-16 12:41:17.722175] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:19:51.859 [2024-12-16 12:41:17.730120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:19:51.859 [2024-12-16 12:41:17.730128] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:19:51.859 [2024-12-16 12:41:17.730133] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:19:51.859 [2024-12-16 12:41:17.730138] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:19:51.860 [2024-12-16 12:41:17.730146] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:19:51.860 [2024-12-16 12:41:17.730154] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:51.860 [2024-12-16 12:41:17.738117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:19:51.860 [2024-12-16 12:41:17.738172] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:19:51.860 [2024-12-16 12:41:17.738180] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:19:51.860 [2024-12-16 12:41:17.738186] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:19:51.860 [2024-12-16 12:41:17.738191] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:19:51.860 [2024-12-16 12:41:17.738194] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number 
of PRP entries: 1 00:19:51.860 [2024-12-16 12:41:17.738200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:19:51.860 [2024-12-16 12:41:17.746117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:19:51.860 [2024-12-16 12:41:17.746127] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:19:51.860 [2024-12-16 12:41:17.746135] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:19:51.860 [2024-12-16 12:41:17.746142] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:19:51.860 [2024-12-16 12:41:17.746148] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:51.860 [2024-12-16 12:41:17.746152] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:51.860 [2024-12-16 12:41:17.746155] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:51.860 [2024-12-16 12:41:17.746160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:51.860 [2024-12-16 12:41:17.754117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:19:51.860 [2024-12-16 12:41:17.754131] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:51.860 [2024-12-16 12:41:17.754138] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:19:51.860 [2024-12-16 12:41:17.754144] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:51.860 [2024-12-16 12:41:17.754148] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:51.860 [2024-12-16 12:41:17.754151] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:51.860 [2024-12-16 12:41:17.754157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:51.860 [2024-12-16 12:41:17.762118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:19:51.860 [2024-12-16 12:41:17.762127] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:51.860 [2024-12-16 12:41:17.762133] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:19:51.860 [2024-12-16 12:41:17.762140] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:19:51.860 [2024-12-16 12:41:17.762146] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:19:51.860 [2024-12-16 12:41:17.762152] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:51.860 [2024-12-16 12:41:17.762157] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:19:51.860 [2024-12-16 12:41:17.762161] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:19:51.860 [2024-12-16 12:41:17.762165] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:19:51.860 [2024-12-16 12:41:17.762170] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:19:51.860 [2024-12-16 12:41:17.762185] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:19:51.860 [2024-12-16 12:41:17.770118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:19:51.860 [2024-12-16 12:41:17.770131] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:19:51.860 [2024-12-16 12:41:17.778119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:19:51.860 [2024-12-16 12:41:17.778131] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:19:51.860 [2024-12-16 12:41:17.786118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:19:51.860 [2024-12-16 12:41:17.786130] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:51.860 [2024-12-16 12:41:17.794118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:19:51.860 [2024-12-16 12:41:17.794134] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:19:51.860 [2024-12-16 12:41:17.794139] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:19:51.860 [2024-12-16 12:41:17.794142] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:19:51.860 [2024-12-16 12:41:17.794146] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:19:51.860 [2024-12-16 12:41:17.794149] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:19:51.860 [2024-12-16 12:41:17.794154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:19:51.860 [2024-12-16 12:41:17.794161] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:19:51.860 [2024-12-16 12:41:17.794165] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:19:51.860 [2024-12-16 12:41:17.794168] 
nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:51.860 [2024-12-16 12:41:17.794173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:19:51.860 [2024-12-16 12:41:17.794179] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:19:51.860 [2024-12-16 12:41:17.794183] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:51.860 [2024-12-16 12:41:17.794186] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:51.860 [2024-12-16 12:41:17.794192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:51.860 [2024-12-16 12:41:17.794198] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:19:51.860 [2024-12-16 12:41:17.794204] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:19:51.860 [2024-12-16 12:41:17.794207] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:51.860 [2024-12-16 12:41:17.794213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:19:51.860 [2024-12-16 12:41:17.802118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:19:51.860 [2024-12-16 12:41:17.802131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:19:51.860 [2024-12-16 12:41:17.802140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:19:51.860 [2024-12-16 12:41:17.802146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:19:51.860 ===================================================== 00:19:51.860 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:51.860 ===================================================== 00:19:51.860 Controller Capabilities/Features 00:19:51.860 ================================ 00:19:51.860 Vendor ID: 4e58 00:19:51.860 Subsystem Vendor ID: 4e58 00:19:51.860 Serial Number: SPDK2 00:19:51.860 Model Number: SPDK bdev Controller 00:19:51.860 Firmware Version: 24.09.1 00:19:51.860 Recommended Arb Burst: 6 00:19:51.860 IEEE OUI Identifier: 8d 6b 50 00:19:51.860 Multi-path I/O 00:19:51.860 May have multiple subsystem ports: Yes 00:19:51.860 May have multiple controllers: Yes 00:19:51.860 Associated with SR-IOV VF: No 00:19:51.860 Max Data Transfer Size: 131072 00:19:51.860 Max Number of Namespaces: 32 00:19:51.860 Max Number of I/O Queues: 127 00:19:51.860 NVMe Specification Version (VS): 1.3 00:19:51.860 NVMe Specification Version (Identify): 1.3 00:19:51.860 Maximum Queue Entries: 256 00:19:51.860 Contiguous Queues Required: Yes 00:19:51.860 Arbitration Mechanisms Supported 00:19:51.860 Weighted Round Robin: Not Supported 00:19:51.860 Vendor Specific: Not Supported 00:19:51.860 Reset Timeout: 15000 ms 00:19:51.860 Doorbell Stride: 4 bytes 00:19:51.860 NVM Subsystem Reset: Not Supported 00:19:51.860 Command 
Sets Supported 00:19:51.860 NVM Command Set: Supported 00:19:51.860 Boot Partition: Not Supported 00:19:51.860 Memory Page Size Minimum: 4096 bytes 00:19:51.860 Memory Page Size Maximum: 4096 bytes 00:19:51.860 Persistent Memory Region: Not Supported 00:19:51.860 Optional Asynchronous Events Supported 00:19:51.860 Namespace Attribute Notices: Supported 00:19:51.860 Firmware Activation Notices: Not Supported 00:19:51.860 ANA Change Notices: Not Supported 00:19:51.861 PLE Aggregate Log Change Notices: Not Supported 00:19:51.861 LBA Status Info Alert Notices: Not Supported 00:19:51.861 EGE Aggregate Log Change Notices: Not Supported 00:19:51.861 Normal NVM Subsystem Shutdown event: Not Supported 00:19:51.861 Zone Descriptor Change Notices: Not Supported 00:19:51.861 Discovery Log Change Notices: Not Supported 00:19:51.861 Controller Attributes 00:19:51.861 128-bit Host Identifier: Supported 00:19:51.861 Non-Operational Permissive Mode: Not Supported 00:19:51.861 NVM Sets: Not Supported 00:19:51.861 Read Recovery Levels: Not Supported 00:19:51.861 Endurance Groups: Not Supported 00:19:51.861 Predictable Latency Mode: Not Supported 00:19:51.861 Traffic Based Keep ALive: Not Supported 00:19:51.861 Namespace Granularity: Not Supported 00:19:51.861 SQ Associations: Not Supported 00:19:51.861 UUID List: Not Supported 00:19:51.861 Multi-Domain Subsystem: Not Supported 00:19:51.861 Fixed Capacity Management: Not Supported 00:19:51.861 Variable Capacity Management: Not Supported 00:19:51.861 Delete Endurance Group: Not Supported 00:19:51.861 Delete NVM Set: Not Supported 00:19:51.861 Extended LBA Formats Supported: Not Supported 00:19:51.861 Flexible Data Placement Supported: Not Supported 00:19:51.861 00:19:51.861 Controller Memory Buffer Support 00:19:51.861 ================================ 00:19:51.861 Supported: No 00:19:51.861 00:19:51.861 Persistent Memory Region Support 00:19:51.861 ================================ 00:19:51.861 Supported: No 00:19:51.861 00:19:51.861 Admin Command Set Attributes 00:19:51.861 ============================ 00:19:51.861 Security Send/Receive: Not Supported 00:19:51.861 Format NVM: Not Supported 00:19:51.861 Firmware Activate/Download: Not Supported 00:19:51.861 Namespace Management: Not Supported 00:19:51.861 Device Self-Test: Not Supported 00:19:51.861 Directives: Not Supported 00:19:51.861 NVMe-MI: Not Supported 00:19:51.861 Virtualization Management: Not Supported 00:19:51.861 Doorbell Buffer Config: Not Supported 00:19:51.861 Get LBA Status Capability: Not Supported 00:19:51.861 Command & Feature Lockdown Capability: Not Supported 00:19:51.861 Abort Command Limit: 4 00:19:51.861 Async Event Request Limit: 4 00:19:51.861 Number of Firmware Slots: N/A 00:19:51.861 Firmware Slot 1 Read-Only: N/A 00:19:51.861 Firmware Activation Without Reset: N/A 00:19:51.861 Multiple Update Detection Support: N/A 00:19:51.861 Firmware Update Granularity: No Information Provided 00:19:51.861 Per-Namespace SMART Log: No 00:19:51.861 Asymmetric Namespace Access Log Page: Not Supported 00:19:51.861 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:19:51.861 Command Effects Log Page: Supported 00:19:51.861 Get Log Page Extended Data: Supported 00:19:51.861 Telemetry Log Pages: Not Supported 00:19:51.861 Persistent Event Log Pages: Not Supported 00:19:51.861 Supported Log Pages Log Page: May Support 00:19:51.861 Commands Supported & Effects Log Page: Not Supported 00:19:51.861 Feature Identifiers & Effects Log Page:May Support 00:19:51.861 NVMe-MI Commands & Effects Log Page: May Support 
00:19:51.861 Data Area 4 for Telemetry Log: Not Supported 00:19:51.861 Error Log Page Entries Supported: 128 00:19:51.861 Keep Alive: Supported 00:19:51.861 Keep Alive Granularity: 10000 ms 00:19:51.861 00:19:51.861 NVM Command Set Attributes 00:19:51.861 ========================== 00:19:51.861 Submission Queue Entry Size 00:19:51.861 Max: 64 00:19:51.861 Min: 64 00:19:51.861 Completion Queue Entry Size 00:19:51.861 Max: 16 00:19:51.861 Min: 16 00:19:51.861 Number of Namespaces: 32 00:19:51.861 Compare Command: Supported 00:19:51.861 Write Uncorrectable Command: Not Supported 00:19:51.861 Dataset Management Command: Supported 00:19:51.861 Write Zeroes Command: Supported 00:19:51.861 Set Features Save Field: Not Supported 00:19:51.861 Reservations: Not Supported 00:19:51.861 Timestamp: Not Supported 00:19:51.861 Copy: Supported 00:19:51.861 Volatile Write Cache: Present 00:19:51.861 Atomic Write Unit (Normal): 1 00:19:51.861 Atomic Write Unit (PFail): 1 00:19:51.861 Atomic Compare & Write Unit: 1 00:19:51.861 Fused Compare & Write: Supported 00:19:51.861 Scatter-Gather List 00:19:51.861 SGL Command Set: Supported (Dword aligned) 00:19:51.861 SGL Keyed: Not Supported 00:19:51.861 SGL Bit Bucket Descriptor: Not Supported 00:19:51.861 SGL Metadata Pointer: Not Supported 00:19:51.861 Oversized SGL: Not Supported 00:19:51.861 SGL Metadata Address: Not Supported 00:19:51.861 SGL Offset: Not Supported 00:19:51.861 Transport SGL Data Block: Not Supported 00:19:51.861 Replay Protected Memory Block: Not Supported 00:19:51.861 00:19:51.861 Firmware Slot Information 00:19:51.861 ========================= 00:19:51.861 Active slot: 1 00:19:51.861 Slot 1 Firmware Revision: 24.09.1 00:19:51.861 00:19:51.861 00:19:51.861 Commands Supported and Effects 00:19:51.861 ============================== 00:19:51.861 Admin Commands 00:19:51.861 -------------- 00:19:51.861 Get Log Page (02h): Supported 00:19:51.861 Identify (06h): Supported 00:19:51.861 Abort (08h): Supported 00:19:51.861 Set Features (09h): Supported 00:19:51.861 Get Features (0Ah): Supported 00:19:51.861 Asynchronous Event Request (0Ch): Supported 00:19:51.861 Keep Alive (18h): Supported 00:19:51.861 I/O Commands 00:19:51.861 ------------ 00:19:51.861 Flush (00h): Supported LBA-Change 00:19:51.861 Write (01h): Supported LBA-Change 00:19:51.861 Read (02h): Supported 00:19:51.861 Compare (05h): Supported 00:19:51.861 Write Zeroes (08h): Supported LBA-Change 00:19:51.861 Dataset Management (09h): Supported LBA-Change 00:19:51.861 Copy (19h): Supported LBA-Change 00:19:51.861 00:19:51.861 Error Log 00:19:51.861 ========= 00:19:51.861 00:19:51.861 Arbitration 00:19:51.861 =========== 00:19:51.861 Arbitration Burst: 1 00:19:51.861 00:19:51.861 Power Management 00:19:51.861 ================ 00:19:51.861 Number of Power States: 1 00:19:51.861 Current Power State: Power State #0 00:19:51.861 Power State #0: 00:19:51.861 Max Power: 0.00 W 00:19:51.861 Non-Operational State: Operational 00:19:51.861 Entry Latency: Not Reported 00:19:51.861 Exit Latency: Not Reported 00:19:51.861 Relative Read Throughput: 0 00:19:51.861 Relative Read Latency: 0 00:19:51.861 Relative Write Throughput: 0 00:19:51.861 Relative Write Latency: 0 00:19:51.861 Idle Power: Not Reported 00:19:51.861 Active Power: Not Reported 00:19:51.861 Non-Operational Permissive Mode: Not Supported 00:19:51.861 00:19:51.861 Health Information 00:19:51.861 ================== 00:19:51.861 Critical Warnings: 00:19:51.861 Available Spare Space: OK 00:19:51.861 Temperature: OK 00:19:51.861 Device 
Reliability: OK 00:19:51.861 Read Only: No 00:19:51.861 Volatile Memory Backup: OK 00:19:51.861 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:51.861 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:19:51.861 Available Spare: 0% 00:19:51.861 Availabl[2024-12-16 12:41:17.802232] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:19:51.861 [2024-12-16 12:41:17.810117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:19:51.861 [2024-12-16 12:41:17.810145] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:19:51.861 [2024-12-16 12:41:17.810153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.861 [2024-12-16 12:41:17.810159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.861 [2024-12-16 12:41:17.810164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.861 [2024-12-16 12:41:17.810170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.861 [2024-12-16 12:41:17.810219] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:19:51.861 [2024-12-16 12:41:17.810229] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:19:51.861 [2024-12-16 12:41:17.811219] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:51.861 [2024-12-16 12:41:17.811261] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:19:51.861 [2024-12-16 12:41:17.811267] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:19:51.861 [2024-12-16 12:41:17.812220] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:19:51.861 [2024-12-16 12:41:17.812230] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:19:51.861 [2024-12-16 12:41:17.812283] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:19:51.861 [2024-12-16 12:41:17.813246] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:51.861 e Spare Threshold: 0% 00:19:51.861 Life Percentage Used: 0% 00:19:51.861 Data Units Read: 0 00:19:51.861 Data Units Written: 0 00:19:51.862 Host Read Commands: 0 00:19:51.862 Host Write Commands: 0 00:19:51.862 Controller Busy Time: 0 minutes 00:19:51.862 Power Cycles: 0 00:19:51.862 Power On Hours: 0 hours 00:19:51.862 Unsafe Shutdowns: 0 00:19:51.862 Unrecoverable Media Errors: 0 00:19:51.862 Lifetime Error Log Entries: 0 00:19:51.862 Warning Temperature Time: 0 minutes 00:19:51.862 Critical Temperature Time: 0 minutes 00:19:51.862 00:19:51.862 Number of Queues 00:19:51.862 ================ 00:19:51.862 Number of 
I/O Submission Queues: 127 00:19:51.862 Number of I/O Completion Queues: 127 00:19:51.862 00:19:51.862 Active Namespaces 00:19:51.862 ================= 00:19:51.862 Namespace ID:1 00:19:51.862 Error Recovery Timeout: Unlimited 00:19:51.862 Command Set Identifier: NVM (00h) 00:19:51.862 Deallocate: Supported 00:19:51.862 Deallocated/Unwritten Error: Not Supported 00:19:51.862 Deallocated Read Value: Unknown 00:19:51.862 Deallocate in Write Zeroes: Not Supported 00:19:51.862 Deallocated Guard Field: 0xFFFF 00:19:51.862 Flush: Supported 00:19:51.862 Reservation: Supported 00:19:51.862 Namespace Sharing Capabilities: Multiple Controllers 00:19:51.862 Size (in LBAs): 131072 (0GiB) 00:19:51.862 Capacity (in LBAs): 131072 (0GiB) 00:19:51.862 Utilization (in LBAs): 131072 (0GiB) 00:19:51.862 NGUID: 8E010C25D6D74DA085DCB698CE76A1BA 00:19:51.862 UUID: 8e010c25-d6d7-4da0-85dc-b698ce76a1ba 00:19:51.862 Thin Provisioning: Not Supported 00:19:51.862 Per-NS Atomic Units: Yes 00:19:51.862 Atomic Boundary Size (Normal): 0 00:19:51.862 Atomic Boundary Size (PFail): 0 00:19:51.862 Atomic Boundary Offset: 0 00:19:51.862 Maximum Single Source Range Length: 65535 00:19:51.862 Maximum Copy Length: 65535 00:19:51.862 Maximum Source Range Count: 1 00:19:51.862 NGUID/EUI64 Never Reused: No 00:19:51.862 Namespace Write Protected: No 00:19:51.862 Number of LBA Formats: 1 00:19:51.862 Current LBA Format: LBA Format #00 00:19:51.862 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:51.862 00:19:51.862 12:41:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:19:52.121 [2024-12-16 12:41:18.024292] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:57.398 Initializing NVMe Controllers 00:19:57.398 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:57.398 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:19:57.398 Initialization complete. Launching workers. 
00:19:57.398 ======================================================== 00:19:57.398 Latency(us) 00:19:57.398 Device Information : IOPS MiB/s Average min max 00:19:57.398 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39957.75 156.08 3203.21 938.72 9422.24 00:19:57.398 ======================================================== 00:19:57.398 Total : 39957.75 156.08 3203.21 938.72 9422.24 00:19:57.398 00:19:57.398 [2024-12-16 12:41:23.129365] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:57.398 12:41:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:19:57.398 [2024-12-16 12:41:23.347050] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:02.674 Initializing NVMe Controllers 00:20:02.674 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:20:02.674 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:20:02.674 Initialization complete. Launching workers. 00:20:02.674 ======================================================== 00:20:02.674 Latency(us) 00:20:02.674 Device Information : IOPS MiB/s Average min max 00:20:02.674 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39923.23 155.95 3206.00 964.44 6686.51 00:20:02.674 ======================================================== 00:20:02.674 Total : 39923.23 155.95 3206.00 964.44 6686.51 00:20:02.674 00:20:02.674 [2024-12-16 12:41:28.367179] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:02.674 12:41:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:20:02.674 [2024-12-16 12:41:28.559373] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:07.950 [2024-12-16 12:41:33.692214] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:07.950 Initializing NVMe Controllers 00:20:07.950 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:20:07.950 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:20:07.950 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:20:07.950 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:20:07.950 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:20:07.950 Initialization complete. Launching workers. 
00:20:07.950 Starting thread on core 2 00:20:07.950 Starting thread on core 3 00:20:07.950 Starting thread on core 1 00:20:07.950 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:20:07.950 [2024-12-16 12:41:33.970508] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:11.243 [2024-12-16 12:41:37.034111] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:11.243 Initializing NVMe Controllers 00:20:11.243 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:11.243 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:11.243 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:20:11.243 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:20:11.243 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:20:11.243 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:20:11.243 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:20:11.243 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:20:11.243 Initialization complete. Launching workers. 00:20:11.243 Starting thread on core 1 with urgent priority queue 00:20:11.243 Starting thread on core 2 with urgent priority queue 00:20:11.243 Starting thread on core 3 with urgent priority queue 00:20:11.243 Starting thread on core 0 with urgent priority queue 00:20:11.243 SPDK bdev Controller (SPDK2 ) core 0: 6764.33 IO/s 14.78 secs/100000 ios 00:20:11.243 SPDK bdev Controller (SPDK2 ) core 1: 7891.33 IO/s 12.67 secs/100000 ios 00:20:11.243 SPDK bdev Controller (SPDK2 ) core 2: 7773.00 IO/s 12.87 secs/100000 ios 00:20:11.243 SPDK bdev Controller (SPDK2 ) core 3: 11214.33 IO/s 8.92 secs/100000 ios 00:20:11.243 ======================================================== 00:20:11.243 00:20:11.243 12:41:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:20:11.243 [2024-12-16 12:41:37.302167] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:11.502 Initializing NVMe Controllers 00:20:11.502 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:11.502 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:11.502 Namespace ID: 1 size: 0GB 00:20:11.502 Initialization complete. 00:20:11.502 INFO: using host memory buffer for IO 00:20:11.502 Hello world! 
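The example apps exercised above (spdk_nvme_perf, reconnect, arbitration, hello_world) all reach the target through the same -r transport ID string. A minimal sketch of that pattern, assuming the SPDK repo root as working directory and the vfio-user2 controller from this run; the wrapper loop and app list are illustrative, not part of the test script, and per-app flags (queue depth, runtime, I/O size) are elided:

# Run several SPDK example apps against one vfio-user controller.
# TRID fields mirror the invocations logged above.
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
for app in hello_world reconnect arbitration; do
    ./build/examples/"$app" -g -r "$TRID" || exit 1
done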
00:20:11.502 [2024-12-16 12:41:37.313235] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:11.502 12:41:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:20:11.761 [2024-12-16 12:41:37.579851] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:12.700 Initializing NVMe Controllers 00:20:12.700 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:12.700 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:12.700 Initialization complete. Launching workers. 00:20:12.700 submit (in ns) avg, min, max = 9094.9, 3116.2, 4002170.5 00:20:12.700 complete (in ns) avg, min, max = 17790.2, 1710.5, 3999710.5 00:20:12.700 00:20:12.700 Submit histogram 00:20:12.700 ================ 00:20:12.700 Range in us Cumulative Count 00:20:12.700 3.109 - 3.124: 0.0060% ( 1) 00:20:12.700 3.124 - 3.139: 0.0239% ( 3) 00:20:12.700 3.139 - 3.154: 0.0418% ( 3) 00:20:12.700 3.154 - 3.170: 0.0598% ( 3) 00:20:12.700 3.170 - 3.185: 0.0956% ( 6) 00:20:12.700 3.185 - 3.200: 0.8607% ( 128) 00:20:12.700 3.200 - 3.215: 3.5981% ( 458) 00:20:12.700 3.215 - 3.230: 7.7521% ( 695) 00:20:12.700 3.230 - 3.246: 13.4720% ( 957) 00:20:12.700 3.246 - 3.261: 19.6402% ( 1032) 00:20:12.700 3.261 - 3.276: 26.4778% ( 1144) 00:20:12.700 3.276 - 3.291: 32.5563% ( 1017) 00:20:12.700 3.291 - 3.307: 38.2523% ( 953) 00:20:12.700 3.307 - 3.322: 43.5180% ( 881) 00:20:12.700 3.322 - 3.337: 49.2858% ( 965) 00:20:12.700 3.337 - 3.352: 54.3542% ( 848) 00:20:12.700 3.352 - 3.368: 60.3969% ( 1011) 00:20:12.700 3.368 - 3.383: 68.3283% ( 1327) 00:20:12.700 3.383 - 3.398: 73.7553% ( 908) 00:20:12.700 3.398 - 3.413: 79.0389% ( 884) 00:20:12.700 3.413 - 3.429: 82.5175% ( 582) 00:20:12.700 3.429 - 3.444: 85.1414% ( 439) 00:20:12.700 3.444 - 3.459: 86.4563% ( 220) 00:20:12.700 3.459 - 3.474: 87.1556% ( 117) 00:20:12.700 3.474 - 3.490: 87.6875% ( 89) 00:20:12.700 3.490 - 3.505: 88.0282% ( 57) 00:20:12.700 3.505 - 3.520: 88.6976% ( 112) 00:20:12.700 3.520 - 3.535: 89.4388% ( 124) 00:20:12.700 3.535 - 3.550: 90.3114% ( 146) 00:20:12.700 3.550 - 3.566: 91.3992% ( 182) 00:20:12.700 3.566 - 3.581: 92.2061% ( 135) 00:20:12.700 3.581 - 3.596: 93.0488% ( 141) 00:20:12.700 3.596 - 3.611: 93.8976% ( 142) 00:20:12.700 3.611 - 3.627: 94.9913% ( 183) 00:20:12.700 3.627 - 3.642: 96.0433% ( 176) 00:20:12.700 3.642 - 3.657: 96.8920% ( 142) 00:20:12.700 3.657 - 3.672: 97.5793% ( 115) 00:20:12.700 3.672 - 3.688: 98.1950% ( 103) 00:20:12.700 3.688 - 3.703: 98.5476% ( 59) 00:20:12.700 3.703 - 3.718: 98.8584% ( 52) 00:20:12.700 3.718 - 3.733: 99.1035% ( 41) 00:20:12.700 3.733 - 3.749: 99.3127% ( 35) 00:20:12.700 3.749 - 3.764: 99.4023% ( 15) 00:20:12.700 3.764 - 3.779: 99.4979% ( 16) 00:20:12.700 3.779 - 3.794: 99.5577% ( 10) 00:20:12.700 3.794 - 3.810: 99.5756% ( 3) 00:20:12.700 3.810 - 3.825: 99.5876% ( 2) 00:20:12.700 3.825 - 3.840: 99.5936% ( 1) 00:20:12.700 3.855 - 3.870: 99.5995% ( 1) 00:20:12.700 4.023 - 4.053: 99.6055% ( 1) 00:20:12.700 4.937 - 4.968: 99.6115% ( 1) 00:20:12.700 4.968 - 4.998: 99.6235% ( 2) 00:20:12.700 4.998 - 5.029: 99.6294% ( 1) 00:20:12.700 5.059 - 5.090: 99.6354% ( 1) 00:20:12.700 5.150 - 5.181: 99.6414% ( 1) 00:20:12.700 5.181 - 5.211: 99.6533% ( 2) 00:20:12.700 5.272 - 5.303: 99.6593% ( 1) 00:20:12.700 
5.486 - 5.516: 99.6713% ( 2) 00:20:12.700 5.669 - 5.699: 99.6772% ( 1) 00:20:12.700 5.730 - 5.760: 99.6892% ( 2) 00:20:12.700 5.882 - 5.912: 99.6952% ( 1) 00:20:12.700 6.034 - 6.065: 99.7012% ( 1) 00:20:12.700 6.065 - 6.095: 99.7071% ( 1) 00:20:12.700 6.126 - 6.156: 99.7131% ( 1) 00:20:12.700 6.278 - 6.309: 99.7191% ( 1) 00:20:12.700 6.309 - 6.339: 99.7310% ( 2) 00:20:12.700 6.491 - 6.522: 99.7370% ( 1) 00:20:12.700 6.522 - 6.552: 99.7430% ( 1) 00:20:12.700 6.583 - 6.613: 99.7549% ( 2) 00:20:12.700 6.613 - 6.644: 99.7609% ( 1) 00:20:12.700 6.644 - 6.674: 99.7669% ( 1) 00:20:12.700 6.766 - 6.796: 99.7729% ( 1) 00:20:12.700 6.857 - 6.888: 99.7789% ( 1) 00:20:12.700 6.888 - 6.918: 99.7848% ( 1) 00:20:12.700 7.070 - 7.101: 99.7908% ( 1) 00:20:12.700 7.131 - 7.162: 99.7968% ( 1) 00:20:12.700 7.192 - 7.223: 99.8028% ( 1) 00:20:12.700 7.710 - 7.741: 99.8087% ( 1) 00:20:12.700 7.741 - 7.771: 99.8207% ( 2) 00:20:12.700 [2024-12-16 12:41:38.674078] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:12.700 7.924 - 7.985: 99.8267% ( 1) 00:20:12.700 8.411 - 8.472: 99.8326% ( 1) 00:20:12.700 8.899 - 8.960: 99.8386% ( 1) 00:20:12.700 9.874 - 9.935: 99.8446% ( 1) 00:20:12.700 9.935 - 9.996: 99.8506% ( 1) 00:20:12.700 10.910 - 10.971: 99.8566% ( 1) 00:20:12.700 3994.575 - 4025.783: 100.0000% ( 24) 00:20:12.700 00:20:12.700 Complete histogram 00:20:12.701 ================== 00:20:12.701 Range in us Cumulative Count 00:20:12.701 1.707 - 1.714: 0.0060% ( 1) 00:20:12.701 1.714 - 1.722: 0.0657% ( 10) 00:20:12.701 1.722 - 1.730: 0.1494% ( 14) 00:20:12.701 1.730 - 1.737: 0.2032% ( 9) 00:20:12.701 1.737 - 1.745: 0.2092% ( 1) 00:20:12.701 1.745 - 1.752: 0.2211% ( 2) 00:20:12.701 1.752 - 1.760: 0.8129% ( 99) 00:20:12.701 1.760 - 1.768: 9.1387% ( 1393) 00:20:12.701 1.768 - 1.775: 35.7361% ( 4450) 00:20:12.701 1.775 - 1.783: 60.1279% ( 4081) 00:20:12.701 1.783 - 1.790: 68.2207% ( 1354) 00:20:12.701 1.790 - 1.798: 70.1512% ( 323) 00:20:12.701 1.798 - 1.806: 71.6215% ( 246) 00:20:12.701 1.806 - 1.813: 74.8431% ( 539) 00:20:12.701 1.813 - 1.821: 83.4320% ( 1437) 00:20:12.701 1.821 - 1.829: 91.8654% ( 1411) 00:20:12.701 1.829 - 1.836: 95.7384% ( 648) 00:20:12.701 1.836 - 1.844: 96.9099% ( 196) 00:20:12.701 1.844 - 1.851: 97.6331% ( 121) 00:20:12.701 1.851 - 1.859: 98.2069% ( 96) 00:20:12.701 1.859 - 1.867: 98.5894% ( 64) 00:20:12.701 1.867 - 1.874: 98.8106% ( 37) 00:20:12.701 1.874 - 1.882: 98.9839% ( 29) 00:20:12.701 1.882 - 1.890: 99.0915% ( 18) 00:20:12.701 1.890 - 1.897: 99.1871% ( 16) 00:20:12.701 1.897 - 1.905: 99.2589% ( 12) 00:20:12.701 1.905 - 1.912: 99.2828% ( 4) 00:20:12.701 1.912 - 1.920: 99.3067% ( 4) 00:20:12.701 1.920 - 1.928: 99.3127% ( 1) 00:20:12.701 1.928 - 1.935: 99.3724% ( 10) 00:20:12.701 1.935 - 1.943: 99.4202% ( 8) 00:20:12.701 1.943 - 1.950: 99.4322% ( 2) 00:20:12.701 1.966 - 1.981: 99.4382% ( 1) 00:20:12.701 1.981 - 1.996: 99.4501% ( 2) 00:20:12.701 1.996 - 2.011: 99.4621% ( 2) 00:20:12.701 2.042 - 2.057: 99.4681% ( 1) 00:20:12.701 2.057 - 2.072: 99.4740% ( 1) 00:20:12.701 2.072 - 2.088: 99.4800% ( 1) 00:20:12.701 3.703 - 3.718: 99.4860% ( 1) 00:20:12.701 3.840 - 3.855: 99.4920% ( 1) 00:20:12.701 4.145 - 4.175: 99.4979% ( 1) 00:20:12.701 4.358 - 4.389: 99.5099% ( 2) 00:20:12.701 4.480 - 4.510: 99.5218% ( 2) 00:20:12.701 4.571 - 4.602: 99.5278% ( 1) 00:20:12.701 4.602 - 4.632: 99.5338% ( 1) 00:20:12.701 4.724 - 4.754: 99.5398% ( 1) 00:20:12.701 5.181 - 5.211: 99.5458% ( 1) 00:20:12.701 5.669 - 5.699: 99.5517% ( 1) 00:20:12.701 5.851 - 
5.882: 99.5637% ( 2) 00:20:12.701 6.126 - 6.156: 99.5697% ( 1) 00:20:12.701 6.583 - 6.613: 99.5756% ( 1) 00:20:12.701 9.752 - 9.813: 99.5816% ( 1) 00:20:12.701 13.836 - 13.897: 99.5876% ( 1) 00:20:12.701 13.897 - 13.958: 99.5936% ( 1) 00:20:12.701 15.726 - 15.848: 99.5995% ( 1) 00:20:12.701 3994.575 - 4025.783: 100.0000% ( 67) 00:20:12.701 00:20:12.701 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:20:12.701 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:20:12.701 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:20:12.701 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:20:12.701 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:20:12.960 [ 00:20:12.960 { 00:20:12.960 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:12.960 "subtype": "Discovery", 00:20:12.960 "listen_addresses": [], 00:20:12.960 "allow_any_host": true, 00:20:12.960 "hosts": [] 00:20:12.960 }, 00:20:12.960 { 00:20:12.960 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:20:12.960 "subtype": "NVMe", 00:20:12.960 "listen_addresses": [ 00:20:12.960 { 00:20:12.960 "trtype": "VFIOUSER", 00:20:12.960 "adrfam": "IPv4", 00:20:12.960 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:20:12.960 "trsvcid": "0" 00:20:12.960 } 00:20:12.960 ], 00:20:12.960 "allow_any_host": true, 00:20:12.960 "hosts": [], 00:20:12.960 "serial_number": "SPDK1", 00:20:12.960 "model_number": "SPDK bdev Controller", 00:20:12.960 "max_namespaces": 32, 00:20:12.960 "min_cntlid": 1, 00:20:12.960 "max_cntlid": 65519, 00:20:12.960 "namespaces": [ 00:20:12.960 { 00:20:12.960 "nsid": 1, 00:20:12.960 "bdev_name": "Malloc1", 00:20:12.960 "name": "Malloc1", 00:20:12.960 "nguid": "6E6B4C4A28AF490D8A0D375F863E0A8C", 00:20:12.960 "uuid": "6e6b4c4a-28af-490d-8a0d-375f863e0a8c" 00:20:12.960 }, 00:20:12.960 { 00:20:12.960 "nsid": 2, 00:20:12.960 "bdev_name": "Malloc3", 00:20:12.960 "name": "Malloc3", 00:20:12.960 "nguid": "530063F34FEC48559176A978ADBEF5AF", 00:20:12.960 "uuid": "530063f3-4fec-4855-9176-a978adbef5af" 00:20:12.960 } 00:20:12.960 ] 00:20:12.960 }, 00:20:12.960 { 00:20:12.960 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:20:12.960 "subtype": "NVMe", 00:20:12.960 "listen_addresses": [ 00:20:12.960 { 00:20:12.960 "trtype": "VFIOUSER", 00:20:12.960 "adrfam": "IPv4", 00:20:12.960 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:20:12.960 "trsvcid": "0" 00:20:12.960 } 00:20:12.960 ], 00:20:12.960 "allow_any_host": true, 00:20:12.960 "hosts": [], 00:20:12.961 "serial_number": "SPDK2", 00:20:12.961 "model_number": "SPDK bdev Controller", 00:20:12.961 "max_namespaces": 32, 00:20:12.961 "min_cntlid": 1, 00:20:12.961 "max_cntlid": 65519, 00:20:12.961 "namespaces": [ 00:20:12.961 { 00:20:12.961 "nsid": 1, 00:20:12.961 "bdev_name": "Malloc2", 00:20:12.961 "name": "Malloc2", 00:20:12.961 "nguid": "8E010C25D6D74DA085DCB698CE76A1BA", 00:20:12.961 "uuid": "8e010c25-d6d7-4da0-85dc-b698ce76a1ba" 00:20:12.961 } 00:20:12.961 ] 00:20:12.961 } 00:20:12.961 ] 00:20:12.961 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 
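The nvmf_get_subsystems output above is plain JSON, so the target's state can be checked mechanically before the AER test starts. A small sketch, assuming jq is installed (the filter is not part of the test harness):

# Verify that nsid 1 of cnode2 is backed by Malloc2, as listed above.
# Exits non-zero if the namespace is missing or backed by another bdev.
scripts/rpc.py nvmf_get_subsystems \
  | jq -e '.[] | select(.nqn == "nqn.2019-07.io.spdk:cnode2")
               | .namespaces[] | select(.nsid == 1)
               | .bdev_name == "Malloc2"' >/dev/null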
00:20:12.961 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:20:12.961 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=352275 00:20:12.961 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:20:12.961 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:20:12.961 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:12.961 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:20:12.961 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=1 00:20:12.961 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:12.961 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:12.961 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:20:12.961 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=2 00:20:12.961 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:13.220 [2024-12-16 12:41:39.057517] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:13.220 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:13.220 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:13.220 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:20:13.220 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:20:13.220 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:20:13.479 Malloc4 00:20:13.479 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:20:13.479 [2024-12-16 12:41:39.535157] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:13.738 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:20:13.738 Asynchronous Event Request test 00:20:13.738 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:13.738 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:13.738 Registering asynchronous event callbacks... 00:20:13.738 Starting namespace attribute notice tests for all controllers... 00:20:13.738 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:13.738 aer_cb - Changed Namespace 00:20:13.738 Cleaning up... 
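The namespace-attribute event exercised here is raised by the hot add visible in the trace above; stripped of the test harness, the trigger amounts to the following sketch, run against the same live target:

# Hot-add a second namespace to cnode2; connected hosts then receive a
# Namespace Attribute Changed event (log page 4), as seen in aer_cb above.
scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2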
00:20:13.738 [ 00:20:13.738 { 00:20:13.738 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:13.738 "subtype": "Discovery", 00:20:13.738 "listen_addresses": [], 00:20:13.738 "allow_any_host": true, 00:20:13.738 "hosts": [] 00:20:13.738 }, 00:20:13.738 { 00:20:13.738 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:20:13.738 "subtype": "NVMe", 00:20:13.738 "listen_addresses": [ 00:20:13.738 { 00:20:13.738 "trtype": "VFIOUSER", 00:20:13.738 "adrfam": "IPv4", 00:20:13.738 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:20:13.738 "trsvcid": "0" 00:20:13.738 } 00:20:13.738 ], 00:20:13.738 "allow_any_host": true, 00:20:13.738 "hosts": [], 00:20:13.738 "serial_number": "SPDK1", 00:20:13.738 "model_number": "SPDK bdev Controller", 00:20:13.738 "max_namespaces": 32, 00:20:13.738 "min_cntlid": 1, 00:20:13.738 "max_cntlid": 65519, 00:20:13.738 "namespaces": [ 00:20:13.738 { 00:20:13.738 "nsid": 1, 00:20:13.738 "bdev_name": "Malloc1", 00:20:13.738 "name": "Malloc1", 00:20:13.738 "nguid": "6E6B4C4A28AF490D8A0D375F863E0A8C", 00:20:13.738 "uuid": "6e6b4c4a-28af-490d-8a0d-375f863e0a8c" 00:20:13.738 }, 00:20:13.738 { 00:20:13.738 "nsid": 2, 00:20:13.738 "bdev_name": "Malloc3", 00:20:13.738 "name": "Malloc3", 00:20:13.738 "nguid": "530063F34FEC48559176A978ADBEF5AF", 00:20:13.738 "uuid": "530063f3-4fec-4855-9176-a978adbef5af" 00:20:13.738 } 00:20:13.738 ] 00:20:13.738 }, 00:20:13.738 { 00:20:13.738 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:20:13.738 "subtype": "NVMe", 00:20:13.738 "listen_addresses": [ 00:20:13.738 { 00:20:13.738 "trtype": "VFIOUSER", 00:20:13.738 "adrfam": "IPv4", 00:20:13.738 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:20:13.738 "trsvcid": "0" 00:20:13.738 } 00:20:13.738 ], 00:20:13.738 "allow_any_host": true, 00:20:13.738 "hosts": [], 00:20:13.738 "serial_number": "SPDK2", 00:20:13.738 "model_number": "SPDK bdev Controller", 00:20:13.738 "max_namespaces": 32, 00:20:13.738 "min_cntlid": 1, 00:20:13.738 "max_cntlid": 65519, 00:20:13.738 "namespaces": [ 00:20:13.738 { 00:20:13.738 "nsid": 1, 00:20:13.738 "bdev_name": "Malloc2", 00:20:13.738 "name": "Malloc2", 00:20:13.738 "nguid": "8E010C25D6D74DA085DCB698CE76A1BA", 00:20:13.738 "uuid": "8e010c25-d6d7-4da0-85dc-b698ce76a1ba" 00:20:13.738 }, 00:20:13.738 { 00:20:13.738 "nsid": 2, 00:20:13.738 "bdev_name": "Malloc4", 00:20:13.738 "name": "Malloc4", 00:20:13.738 "nguid": "AE4278DB44CE40E9BD1E082087343E50", 00:20:13.738 "uuid": "ae4278db-44ce-40e9-bd1e-082087343e50" 00:20:13.738 } 00:20:13.738 ] 00:20:13.738 } 00:20:13.738 ] 00:20:13.738 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 352275 00:20:13.738 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:20:13.738 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 344894 00:20:13.738 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 344894 ']' 00:20:13.738 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 344894 00:20:13.739 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:20:13.739 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:13.739 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 344894 00:20:13.739 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:13.998 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:13.998 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 344894' 00:20:13.998 killing process with pid 344894 00:20:13.998 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 344894 00:20:13.998 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 344894 00:20:13.998 12:41:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:20:14.258 12:41:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:20:14.258 12:41:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:20:14.258 12:41:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:20:14.258 12:41:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:20:14.258 12:41:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=352509 00:20:14.258 12:41:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 352509' 00:20:14.258 Process pid: 352509 00:20:14.258 12:41:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:20:14.258 12:41:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:20:14.258 12:41:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 352509 00:20:14.258 12:41:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 352509 ']' 00:20:14.258 12:41:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:14.258 12:41:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:14.258 12:41:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:14.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:14.258 12:41:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:14.258 12:41:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:20:14.258 [2024-12-16 12:41:40.115887] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:20:14.258 [2024-12-16 12:41:40.116735] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:20:14.258 [2024-12-16 12:41:40.116771] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:14.258 [2024-12-16 12:41:40.184251] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:14.258 [2024-12-16 12:41:40.223227] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:14.258 [2024-12-16 12:41:40.223265] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:14.258 [2024-12-16 12:41:40.223273] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:14.258 [2024-12-16 12:41:40.223279] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:14.258 [2024-12-16 12:41:40.223284] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:14.258 [2024-12-16 12:41:40.223396] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:14.258 [2024-12-16 12:41:40.223502] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:20:14.258 [2024-12-16 12:41:40.223609] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.258 [2024-12-16 12:41:40.223611] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:20:14.258 [2024-12-16 12:41:40.300915] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:20:14.258 [2024-12-16 12:41:40.301348] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:20:14.258 [2024-12-16 12:41:40.301628] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:20:14.258 [2024-12-16 12:41:40.301860] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:20:14.258 [2024-12-16 12:41:40.302402] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
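The interrupt-mode pass being set up here differs from the earlier run only in how the target and transport are created. Condensed from the trace, the bring-up is roughly the following sketch; waitforlisten/pid bookkeeping is elided:

# Start the target with reactors 0-3 in interrupt mode, then create the
# VFIOUSER transport, passing the extra args (-M -I) through exactly as
# the script does.
build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I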
00:20:14.517 12:41:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:14.517 12:41:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:20:14.517 12:41:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:20:15.456 12:41:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:20:15.716 12:41:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:20:15.716 12:41:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:20:15.716 12:41:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:20:15.716 12:41:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:20:15.716 12:41:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:15.716 Malloc1 00:20:15.716 12:41:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:20:15.975 12:41:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:20:16.234 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:20:16.494 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:20:16.494 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:20:16.494 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:20:16.753 Malloc2 00:20:16.753 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:20:16.753 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:20:17.011 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:20:17.271 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:20:17.271 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 352509 00:20:17.271 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@950 -- # '[' -z 352509 ']' 00:20:17.271 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 352509 00:20:17.271 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:20:17.271 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:17.271 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 352509 00:20:17.271 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:17.271 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:17.271 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 352509' 00:20:17.271 killing process with pid 352509 00:20:17.271 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 352509 00:20:17.271 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 352509 00:20:17.530 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:20:17.530 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:20:17.530 00:20:17.530 real 0m50.981s 00:20:17.530 user 3m17.107s 00:20:17.530 sys 0m3.270s 00:20:17.530 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:17.530 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:20:17.530 ************************************ 00:20:17.530 END TEST nvmf_vfio_user 00:20:17.530 ************************************ 00:20:17.530 12:41:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:20:17.530 12:41:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:17.530 12:41:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:17.530 12:41:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:17.530 ************************************ 00:20:17.530 START TEST nvmf_vfio_user_nvme_compliance 00:20:17.530 ************************************ 00:20:17.530 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:20:17.530 * Looking for test storage... 
00:20:17.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:20:17.530 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:17.530 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lcov --version 00:20:17.530 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:17.791 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:17.791 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:17.791 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:17.791 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:17.791 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:20:17.791 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:20:17.791 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:20:17.791 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:20:17.791 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:20:17.791 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:20:17.791 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:20:17.791 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:17.791 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:20:17.791 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:20:17.791 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:17.791 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:17.791 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:20:17.791 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:20:17.791 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:17.791 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:20:17.791 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:20:17.791 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:20:17.791 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:20:17.791 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:17.791 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:20:17.791 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:20:17.791 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:17.791 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:17.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:17.792 --rc genhtml_branch_coverage=1 00:20:17.792 --rc genhtml_function_coverage=1 00:20:17.792 --rc genhtml_legend=1 00:20:17.792 --rc geninfo_all_blocks=1 00:20:17.792 --rc geninfo_unexecuted_blocks=1 00:20:17.792 00:20:17.792 ' 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:17.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:17.792 --rc genhtml_branch_coverage=1 00:20:17.792 --rc genhtml_function_coverage=1 00:20:17.792 --rc genhtml_legend=1 00:20:17.792 --rc geninfo_all_blocks=1 00:20:17.792 --rc geninfo_unexecuted_blocks=1 00:20:17.792 00:20:17.792 ' 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:17.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:17.792 --rc genhtml_branch_coverage=1 00:20:17.792 --rc genhtml_function_coverage=1 00:20:17.792 --rc genhtml_legend=1 00:20:17.792 --rc geninfo_all_blocks=1 00:20:17.792 --rc geninfo_unexecuted_blocks=1 00:20:17.792 00:20:17.792 ' 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:17.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:17.792 --rc genhtml_branch_coverage=1 00:20:17.792 --rc genhtml_function_coverage=1 00:20:17.792 --rc genhtml_legend=1 00:20:17.792 --rc geninfo_all_blocks=1 00:20:17.792 --rc 
geninfo_unexecuted_blocks=1 00:20:17.792 00:20:17.792 ' 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:17.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=353188 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 353188' 00:20:17.792 Process pid: 353188 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 353188 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 353188 ']' 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:17.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:17.792 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:17.792 [2024-12-16 12:41:43.739075] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
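A note on the nvmf_tgt flags traced above: -m 0x7 is a hexadecimal core mask (binary 111) pinning the target to cores 0-2, which the three "Reactor started on core" notices that follow confirm. A quick bash check of the mask arithmetic (an illustrative sketch only, not part of the captured run):

    # decode a hex core mask into the list of cores it selects
    mask=0x7
    printf 'cores:'
    for i in 0 1 2 3; do (( (mask >> i) & 1 )) && printf ' %d' "$i"; done
    echo    # -> cores: 0 1 2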
00:20:17.792 [2024-12-16 12:41:43.739134] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:17.792 [2024-12-16 12:41:43.805551] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:17.793 [2024-12-16 12:41:43.844918] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:17.793 [2024-12-16 12:41:43.844959] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:17.793 [2024-12-16 12:41:43.844966] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:17.793 [2024-12-16 12:41:43.844972] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:17.793 [2024-12-16 12:41:43.844976] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:17.793 [2024-12-16 12:41:43.845029] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:17.793 [2024-12-16 12:41:43.845118] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.793 [2024-12-16 12:41:43.845112] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:20:18.053 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:18.053 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:20:18.053 12:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:20:18.991 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:20:18.991 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:20:18.991 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:20:18.991 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.991 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:18.991 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.991 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:20:18.991 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:20:18.991 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.991 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:18.991 malloc0 00:20:18.991 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.991 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:20:18.991 12:41:44 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.991 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:18.991 12:41:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.992 12:41:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:20:18.992 12:41:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.992 12:41:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:18.992 12:41:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.992 12:41:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:20:18.992 12:41:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.992 12:41:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:18.992 12:41:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.992 12:41:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:20:19.250 00:20:19.250 00:20:19.250 CUnit - A unit testing framework for C - Version 2.1-3 00:20:19.250 http://cunit.sourceforge.net/ 00:20:19.250 00:20:19.250 00:20:19.250 Suite: nvme_compliance 00:20:19.250 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-16 12:41:45.169561] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:19.250 [2024-12-16 12:41:45.170888] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:20:19.250 [2024-12-16 12:41:45.170903] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:20:19.250 [2024-12-16 12:41:45.170909] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:20:19.250 [2024-12-16 12:41:45.173591] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:19.250 passed 00:20:19.250 Test: admin_identify_ctrlr_verify_fused ...[2024-12-16 12:41:45.253165] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:19.250 [2024-12-16 12:41:45.256183] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:19.250 passed 00:20:19.509 Test: admin_identify_ns ...[2024-12-16 12:41:45.335494] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:19.509 [2024-12-16 12:41:45.396128] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:20:19.509 [2024-12-16 12:41:45.404120] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:20:19.509 [2024-12-16 12:41:45.425211] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:20:19.509 passed 00:20:19.509 Test: admin_get_features_mandatory_features ...[2024-12-16 12:41:45.497783] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:19.509 [2024-12-16 12:41:45.502810] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:19.509 passed 00:20:19.768 Test: admin_get_features_optional_features ...[2024-12-16 12:41:45.576295] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:19.768 [2024-12-16 12:41:45.579318] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:19.768 passed 00:20:19.768 Test: admin_set_features_number_of_queues ...[2024-12-16 12:41:45.655343] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:19.768 [2024-12-16 12:41:45.764203] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:19.768 passed 00:20:20.027 Test: admin_get_log_page_mandatory_logs ...[2024-12-16 12:41:45.837391] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:20.027 [2024-12-16 12:41:45.840412] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:20.027 passed 00:20:20.027 Test: admin_get_log_page_with_lpo ...[2024-12-16 12:41:45.915020] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:20.027 [2024-12-16 12:41:45.984134] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:20:20.027 [2024-12-16 12:41:45.997181] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:20.027 passed 00:20:20.027 Test: fabric_property_get ...[2024-12-16 12:41:46.071662] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:20.027 [2024-12-16 12:41:46.072896] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:20:20.027 [2024-12-16 12:41:46.074684] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:20.286 passed 00:20:20.286 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-16 12:41:46.150193] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:20.286 [2024-12-16 12:41:46.151425] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:20:20.286 [2024-12-16 12:41:46.153216] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:20.286 passed 00:20:20.286 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-16 12:41:46.230868] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:20.286 [2024-12-16 12:41:46.315130] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:20:20.286 [2024-12-16 12:41:46.331121] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:20:20.286 [2024-12-16 12:41:46.336198] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:20.546 passed 00:20:20.546 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-16 12:41:46.409961] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:20.546 [2024-12-16 12:41:46.411202] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:20:20.546 [2024-12-16 12:41:46.412985] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:20:20.546 passed 00:20:20.546 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-16 12:41:46.492317] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:20.546 [2024-12-16 12:41:46.569126] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:20:20.546 [2024-12-16 12:41:46.593121] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:20:20.546 [2024-12-16 12:41:46.598210] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:20.805 passed 00:20:20.805 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-16 12:41:46.669942] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:20.805 [2024-12-16 12:41:46.671178] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:20:20.805 [2024-12-16 12:41:46.671202] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:20:20.805 [2024-12-16 12:41:46.675985] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:20.805 passed 00:20:20.805 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-16 12:41:46.748561] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:20.805 [2024-12-16 12:41:46.841121] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:20:20.805 [2024-12-16 12:41:46.849117] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:20:20.805 [2024-12-16 12:41:46.857128] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:20:20.805 [2024-12-16 12:41:46.865127] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:20:21.064 [2024-12-16 12:41:46.894206] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:21.064 passed 00:20:21.064 Test: admin_create_io_sq_verify_pc ...[2024-12-16 12:41:46.967715] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:21.064 [2024-12-16 12:41:46.979128] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:20:21.064 [2024-12-16 12:41:46.996884] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:21.064 passed 00:20:21.064 Test: admin_create_io_qp_max_qps ...[2024-12-16 12:41:47.070370] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:22.444 [2024-12-16 12:41:48.167124] nvme_ctrlr.c:5504:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:20:22.703 [2024-12-16 12:41:48.546996] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:22.703 passed 00:20:22.703 Test: admin_create_io_sq_shared_cq ...[2024-12-16 12:41:48.623756] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:22.703 [2024-12-16 12:41:48.755119] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:20:22.962 [2024-12-16 12:41:48.792175] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:22.962 passed 00:20:22.962 00:20:22.962 Run Summary: Type Total Ran Passed Failed Inactive 00:20:22.962 suites 1 1 n/a 0 0 00:20:22.962 tests 18 18 18 0 0 00:20:22.962 asserts 360 
360 360 0 n/a 00:20:22.962 00:20:22.962 Elapsed time = 1.486 seconds 00:20:22.962 12:41:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 353188 00:20:22.962 12:41:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 353188 ']' 00:20:22.962 12:41:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 353188 00:20:22.962 12:41:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:20:22.962 12:41:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:22.962 12:41:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 353188 00:20:22.962 12:41:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:22.962 12:41:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:22.962 12:41:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 353188' 00:20:22.962 killing process with pid 353188 00:20:22.962 12:41:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 353188 00:20:22.962 12:41:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 353188 00:20:23.222 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:20:23.222 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:20:23.222 00:20:23.222 real 0m5.597s 00:20:23.222 user 0m15.631s 00:20:23.222 sys 0m0.510s 00:20:23.222 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:23.222 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:23.222 ************************************ 00:20:23.222 END TEST nvmf_vfio_user_nvme_compliance 00:20:23.222 ************************************ 00:20:23.222 12:41:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:20:23.222 12:41:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:23.222 12:41:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:23.222 12:41:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:23.222 ************************************ 00:20:23.222 START TEST nvmf_vfio_user_fuzz 00:20:23.222 ************************************ 00:20:23.222 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:20:23.222 * Looking for test storage... 
00:20:23.222 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:23.222 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:23.222 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:20:23.222 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:23.222 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:23.222 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:23.222 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:23.222 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:23.222 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:20:23.222 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:20:23.222 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:20:23.222 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:20:23.222 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:20:23.222 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:20:23.222 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:20:23.222 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:23.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.483 --rc genhtml_branch_coverage=1 00:20:23.483 --rc genhtml_function_coverage=1 00:20:23.483 --rc genhtml_legend=1 00:20:23.483 --rc geninfo_all_blocks=1 00:20:23.483 --rc geninfo_unexecuted_blocks=1 00:20:23.483 00:20:23.483 ' 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:23.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.483 --rc genhtml_branch_coverage=1 00:20:23.483 --rc genhtml_function_coverage=1 00:20:23.483 --rc genhtml_legend=1 00:20:23.483 --rc geninfo_all_blocks=1 00:20:23.483 --rc geninfo_unexecuted_blocks=1 00:20:23.483 00:20:23.483 ' 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:23.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.483 --rc genhtml_branch_coverage=1 00:20:23.483 --rc genhtml_function_coverage=1 00:20:23.483 --rc genhtml_legend=1 00:20:23.483 --rc geninfo_all_blocks=1 00:20:23.483 --rc geninfo_unexecuted_blocks=1 00:20:23.483 00:20:23.483 ' 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:23.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.483 --rc genhtml_branch_coverage=1 00:20:23.483 --rc genhtml_function_coverage=1 00:20:23.483 --rc genhtml_legend=1 00:20:23.483 --rc geninfo_all_blocks=1 00:20:23.483 --rc geninfo_unexecuted_blocks=1 00:20:23.483 00:20:23.483 ' 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:20:23.483 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:23.483 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:23.484 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:23.484 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:20:23.484 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:20:23.484 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:20:23.484 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:20:23.484 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:20:23.484 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=354175 00:20:23.484 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 354175' 00:20:23.484 Process pid: 354175 00:20:23.484 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:20:23.484 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:23.484 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 354175 00:20:23.484 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 354175 ']' 00:20:23.484 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:23.484 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:23.484 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:23.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
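The waitforlisten step above follows the usual SPDK launch-and-poll pattern: start nvmf_tgt in the background, then poll the RPC socket until the target answers. A minimal sketch of that pattern, assuming an SPDK checkout with the in-tree scripts/rpc.py client (the real helper is the waitforlisten function in common/autotest_common.sh):

    # launch the target with the same flags as in the trace above
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # poll the default RPC socket until the target responds
    until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.2
    done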
00:20:23.484 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:23.484 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:23.743 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:23.743 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:20:23.743 12:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:20:24.680 12:41:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:20:24.680 12:41:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.680 12:41:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:24.680 12:41:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.680 12:41:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:20:24.680 12:41:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:20:24.680 12:41:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.680 12:41:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:24.680 malloc0 00:20:24.680 12:41:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.680 12:41:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:20:24.680 12:41:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.680 12:41:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:24.680 12:41:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.680 12:41:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:20:24.680 12:41:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.680 12:41:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:24.680 12:41:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.680 12:41:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:20:24.680 12:41:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.680 12:41:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:24.680 12:41:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.680 12:41:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
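The setup traced above builds the vfio-user fuzz target with five RPCs before the nvme_fuzz run that follows. Restated as direct scripts/rpc.py calls (a sketch; rpc_cmd in the trace is the test harness wrapper around this client, with all arguments taken verbatim from the trace):

    ./scripts/rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    # 64 MiB malloc bdev with 512-byte blocks, per MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE
    ./scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
        -t VFIOUSER -a /var/run/vfio-user -s 0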
00:20:24.680 12:41:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:20:56.769 Fuzzing completed. Shutting down the fuzz application 00:20:56.769 00:20:56.769 Dumping successful admin opcodes: 00:20:56.769 8, 9, 10, 24, 00:20:56.769 Dumping successful io opcodes: 00:20:56.769 0, 00:20:56.769 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1138187, total successful commands: 4483, random_seed: 3879266048 00:20:56.769 NS: 0x200003a1ef00 admin qp, Total commands completed: 279337, total successful commands: 2249, random_seed: 275145088 00:20:56.769 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:20:56.769 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.769 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 354175 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 354175 ']' 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 354175 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 354175 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 354175' 00:20:56.770 killing process with pid 354175 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 354175 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 354175 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:20:56.770 00:20:56.770 real 0m32.252s 00:20:56.770 user 0m34.807s 00:20:56.770 sys 0m26.334s 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:56.770 ************************************ 
00:20:56.770 END TEST nvmf_vfio_user_fuzz 00:20:56.770 ************************************ 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:56.770 ************************************ 00:20:56.770 START TEST nvmf_auth_target 00:20:56.770 ************************************ 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:56.770 * Looking for test storage... 00:20:56.770 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lcov --version 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:56.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.770 --rc genhtml_branch_coverage=1 00:20:56.770 --rc genhtml_function_coverage=1 00:20:56.770 --rc genhtml_legend=1 00:20:56.770 --rc geninfo_all_blocks=1 00:20:56.770 --rc geninfo_unexecuted_blocks=1 00:20:56.770 00:20:56.770 ' 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:56.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.770 --rc genhtml_branch_coverage=1 00:20:56.770 --rc genhtml_function_coverage=1 00:20:56.770 --rc genhtml_legend=1 00:20:56.770 --rc geninfo_all_blocks=1 00:20:56.770 --rc geninfo_unexecuted_blocks=1 00:20:56.770 00:20:56.770 ' 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:56.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.770 --rc genhtml_branch_coverage=1 00:20:56.770 --rc genhtml_function_coverage=1 00:20:56.770 --rc genhtml_legend=1 00:20:56.770 --rc geninfo_all_blocks=1 00:20:56.770 --rc geninfo_unexecuted_blocks=1 00:20:56.770 00:20:56.770 ' 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:56.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.770 --rc genhtml_branch_coverage=1 00:20:56.770 --rc genhtml_function_coverage=1 00:20:56.770 --rc genhtml_legend=1 00:20:56.770 --rc geninfo_all_blocks=1 00:20:56.770 --rc geninfo_unexecuted_blocks=1 00:20:56.770 00:20:56.770 ' 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:56.770 12:42:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:56.770 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.771 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.771 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.771 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:20:56.771 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.771 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:20:56.771 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:56.771 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:56.771 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:56.771 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:56.771 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:56.771 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:56.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:56.771 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:56.771 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:56.771 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:56.771 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:56.771 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:56.771 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:20:56.771 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:56.771 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:20:56.771 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:20:56.771 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:20:56.771 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:20:56.771 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:20:56.771 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:56.771 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:20:56.771 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:20:56.771 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:20:56.771 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:56.771 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:56.771 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:56.771 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:20:56.771 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:20:56.771 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:20:56.771 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.050 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:02.050 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:21:02.050 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:02.050 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:02.050 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:02.050 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:02.050 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:02.050 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:21:02.050 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:02.050 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:21:02.050 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:21:02.050 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:21:02.050 
12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:21:02.050 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:21:02.050 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:21:02.050 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:02.050 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:02.050 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:02.050 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:02.050 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:02.050 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:02.050 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:02.050 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:02.050 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:02.050 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:02.050 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:02.050 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:21:02.050 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:21:02.050 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:21:02.050 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:21:02.050 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:21:02.050 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:21:02.050 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:21:02.050 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:02.050 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:02.050 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:21:02.050 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:21:02.050 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:02.050 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:02.050 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:21:02.050 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:21:02.051 12:42:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:02.051 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:02.051 Found net devices under 0000:af:00.0: cvl_0_0 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:02.051 Found net devices under 0000:af:00.1: cvl_0_1 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 
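The device walk above resolves each supported PCI function to its kernel net interface by globbing sysfs, which is how the trace arrives at cvl_0_0 and cvl_0_1. A minimal standalone sketch of that lookup — the address is one of the two e810 ports found above, but the variable names are illustrative rather than the exact helper in nvmf/common.sh:

pci=0000:af:00.0                                  # example: first e810 port from the log
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # glob sysfs for netdevs behind this PCI function
pci_net_devs=("${pci_net_devs[@]##*/}")           # strip the sysfs path, keep interface names only
(( ${#pci_net_devs[@]} )) && echo "Found net devices under $pci: ${pci_net_devs[*]}"

Binding the lookup to the PCI address rather than an interface name is what lets the same script run on e810, x722, or mlx hardware: only the device-ID tables change, not the discovery logic.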
00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # is_hw=yes 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:02.051 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:02.051 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.445 ms 00:21:02.051 00:21:02.051 --- 10.0.0.2 ping statistics --- 00:21:02.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.051 rtt min/avg/max/mdev = 0.445/0.445/0.445/0.000 ms 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:02.051 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:02.051 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:21:02.051 00:21:02.051 --- 10.0.0.1 ping statistics --- 00:21:02.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.051 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # return 0 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=362798 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 362798 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 362798 ']' 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
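The TCP init phase traced above moves one NIC port into a private network namespace to play the target while the other stays in the root namespace as the initiator, so real packets cross the wire on a single host. Condensed into runnable form from the commands the trace shows, with the interface names and 10.0.0.x addresses exactly as the log assigns them:

ip netns add cvl_0_0_ns_spdk                      # private namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP listener port
ping -c 1 10.0.0.2                                # initiator -> target reachability check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # and the reverse path

This is also why nvmf_tgt is launched under "ip netns exec cvl_0_0_ns_spdk" below: the target process must live in the namespace that owns 10.0.0.2, while host.sock and the initiator-side tooling stay in the root namespace.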
00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=362818 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=null 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=4b74b887017bf175040ceeaea94be0ecf342207da8c1ed81 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.Df1 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 4b74b887017bf175040ceeaea94be0ecf342207da8c1ed81 0 00:21:02.051 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 4b74b887017bf175040ceeaea94be0ecf342207da8c1ed81 0 00:21:02.052 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:21:02.052 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:21:02.052 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=4b74b887017bf175040ceeaea94be0ecf342207da8c1ed81 00:21:02.052 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=0 00:21:02.052 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 
00:21:02.052 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.Df1 00:21:02.052 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.Df1 00:21:02.052 12:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.Df1 00:21:02.052 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:21:02.052 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:21:02.052 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:02.052 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:21:02.052 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:21:02.052 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:21:02.052 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:02.052 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=a3e4db67681acbac1451a7694cb4c07df60775d67e07e2dafcae79ae9883537f 00:21:02.052 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:21:02.052 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.M8p 00:21:02.052 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key a3e4db67681acbac1451a7694cb4c07df60775d67e07e2dafcae79ae9883537f 3 00:21:02.052 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 a3e4db67681acbac1451a7694cb4c07df60775d67e07e2dafcae79ae9883537f 3 00:21:02.052 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:21:02.052 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:21:02.052 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=a3e4db67681acbac1451a7694cb4c07df60775d67e07e2dafcae79ae9883537f 00:21:02.052 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:21:02.052 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:21:02.052 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.M8p 00:21:02.052 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.M8p 00:21:02.052 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.M8p 00:21:02.052 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:21:02.052 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:21:02.052 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:02.052 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:21:02.052 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 
00:21:02.052 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:21:02.052 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:02.052 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=8eb25e508e7be0ae02a3814dcff527ed 00:21:02.052 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:21:02.052 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.9pO 00:21:02.052 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 8eb25e508e7be0ae02a3814dcff527ed 1 00:21:02.052 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 8eb25e508e7be0ae02a3814dcff527ed 1 00:21:02.052 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:21:02.052 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:21:02.052 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=8eb25e508e7be0ae02a3814dcff527ed 00:21:02.052 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:21:02.052 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:21:02.311 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.9pO 00:21:02.311 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.9pO 00:21:02.311 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.9pO 00:21:02.311 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:21:02.311 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:21:02.311 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:02.311 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:21:02.311 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:21:02.311 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:21:02.311 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:02.311 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=e8e208b1aea6781875bf5c76d10d5ac721671d32ace40152 00:21:02.311 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:21:02.311 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.iAd 00:21:02.311 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key e8e208b1aea6781875bf5c76d10d5ac721671d32ace40152 2 00:21:02.311 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 e8e208b1aea6781875bf5c76d10d5ac721671d32ace40152 2 00:21:02.311 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:21:02.311 12:42:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:21:02.311 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=e8e208b1aea6781875bf5c76d10d5ac721671d32ace40152 00:21:02.311 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:21:02.311 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:21:02.311 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.iAd 00:21:02.311 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.iAd 00:21:02.311 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.iAd 00:21:02.311 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:21:02.311 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:21:02.311 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:02.311 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=e3af1f7d895f11569cd8f5cf4c7bbadab0a15ee460774f8d 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.DXi 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key e3af1f7d895f11569cd8f5cf4c7bbadab0a15ee460774f8d 2 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 e3af1f7d895f11569cd8f5cf4c7bbadab0a15ee460774f8d 2 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=e3af1f7d895f11569cd8f5cf4c7bbadab0a15ee460774f8d 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.DXi 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.DXi 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.DXi 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 
00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=5f35fb01ef86ed88cc0d80f00fb36ed9 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.8hU 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 5f35fb01ef86ed88cc0d80f00fb36ed9 1 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 5f35fb01ef86ed88cc0d80f00fb36ed9 1 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=5f35fb01ef86ed88cc0d80f00fb36ed9 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.8hU 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.8hU 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.8hU 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=b7db0d868e70e0200a99e125e3a81d85813a1ec88599f60dd38f2f9f3d1840df 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.2JX 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # 
format_dhchap_key b7db0d868e70e0200a99e125e3a81d85813a1ec88599f60dd38f2f9f3d1840df 3 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 b7db0d868e70e0200a99e125e3a81d85813a1ec88599f60dd38f2f9f3d1840df 3 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=b7db0d868e70e0200a99e125e3a81d85813a1ec88599f60dd38f2f9f3d1840df 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.2JX 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.2JX 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.2JX 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 362798 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 362798 ']' 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:02.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:02.312 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.572 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:02.572 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:21:02.572 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 362818 /var/tmp/host.sock 00:21:02.572 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 362818 ']' 00:21:02.572 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:21:02.572 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:02.572 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:21:02.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
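All six secrets generated above come out of the same helper pair: gen_dhchap_key pulls the requested number of random hex characters from /dev/urandom with xxd, and format_key wraps that ASCII string into the qualified DHHC-1:<digest-id>: form that later appears on the nvme connect line. A self-contained sketch of that wrapping; the CRC-32 suffix and its little-endian byte order follow the NVMe DH-HMAC-CHAP secret representation and are assumptions here, since the trace only shows "python -" without the script body:

key=$(xxd -p -c0 -l 24 /dev/urandom)   # 48 random hex chars, as in gen_dhchap_key null 48
digest=0                               # hash qualifier: 0=null, 1=sha256, 2=sha384, 3=sha512
python3 -c '
import base64, sys, zlib
secret = sys.argv[1].encode()                   # the ASCII hex string itself is the secret
crc = zlib.crc32(secret).to_bytes(4, "little")  # assumed little-endian CRC-32 trailer
print("DHHC-1:%02x:" % int(sys.argv[2]) + base64.b64encode(secret + crc).decode() + ":")
' "$key" "$digest"

The DHHC-1:00:NGI3... value passed to nvme connect further down is consistent with this layout: its first 64 base64 characters decode to exactly the 48-character hex key generated for keys[0], followed by a four-byte trailer.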
00:21:02.572 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:02.572 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.831 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:02.831 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:21:02.831 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:21:02.831 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.831 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.831 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.831 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:02.831 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Df1 00:21:02.831 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.831 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.831 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.831 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Df1 00:21:02.831 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Df1 00:21:03.090 12:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.M8p ]] 00:21:03.090 12:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.M8p 00:21:03.090 12:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.090 12:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.090 12:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.090 12:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.M8p 00:21:03.090 12:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.M8p 00:21:03.350 12:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:03.350 12:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.9pO 00:21:03.350 12:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.350 12:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.350 12:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.350 12:42:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.9pO 00:21:03.350 12:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.9pO 00:21:03.609 12:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.iAd ]] 00:21:03.609 12:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.iAd 00:21:03.609 12:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.609 12:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.609 12:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.609 12:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.iAd 00:21:03.609 12:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.iAd 00:21:03.609 12:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:03.609 12:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.DXi 00:21:03.609 12:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.609 12:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.609 12:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.609 12:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.DXi 00:21:03.609 12:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.DXi 00:21:03.868 12:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.8hU ]] 00:21:03.868 12:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.8hU 00:21:03.868 12:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.868 12:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.868 12:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.868 12:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.8hU 00:21:03.868 12:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.8hU 00:21:04.128 12:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:04.128 12:42:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.2JX 00:21:04.128 12:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.128 12:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.128 12:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.128 12:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.2JX 00:21:04.128 12:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.2JX 00:21:04.387 12:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:21:04.387 12:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:04.387 12:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:04.387 12:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:04.387 12:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:04.387 12:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:04.387 12:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:21:04.387 12:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:04.387 12:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:04.387 12:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:04.387 12:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:04.387 12:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.387 12:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.387 12:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.387 12:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.387 12:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.387 12:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.387 12:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.387 
12:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.647 00:21:04.647 12:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:04.647 12:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.647 12:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:04.913 12:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.913 12:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.913 12:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.913 12:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.913 12:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.913 12:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:04.913 { 00:21:04.913 "cntlid": 1, 00:21:04.913 "qid": 0, 00:21:04.913 "state": "enabled", 00:21:04.913 "thread": "nvmf_tgt_poll_group_000", 00:21:04.913 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:21:04.913 "listen_address": { 00:21:04.914 "trtype": "TCP", 00:21:04.914 "adrfam": "IPv4", 00:21:04.914 "traddr": "10.0.0.2", 00:21:04.914 "trsvcid": "4420" 00:21:04.914 }, 00:21:04.914 "peer_address": { 00:21:04.914 "trtype": "TCP", 00:21:04.914 "adrfam": "IPv4", 00:21:04.914 "traddr": "10.0.0.1", 00:21:04.914 "trsvcid": "36390" 00:21:04.914 }, 00:21:04.914 "auth": { 00:21:04.914 "state": "completed", 00:21:04.914 "digest": "sha256", 00:21:04.914 "dhgroup": "null" 00:21:04.914 } 00:21:04.914 } 00:21:04.914 ]' 00:21:04.914 12:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:04.914 12:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:04.914 12:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:05.174 12:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:05.174 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:05.174 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.174 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.174 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.433 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:NGI3NGI4ODcwMTdiZjE3NTA0MGNlZWFlYTk0YmUwZWNmMzQyMjA3ZGE4YzFlZDgx1hOX6A==: --dhchap-ctrl-secret DHHC-1:03:YTNlNGRiNjc2ODFhY2JhYzE0NTFhNzY5NGNiNGMwN2RmNjA3NzVkNjdlMDdlMmRhZmNhZTc5YWU5ODgzNTM3Zlh5EVM=: 00:21:05.433 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGI3NGI4ODcwMTdiZjE3NTA0MGNlZWFlYTk0YmUwZWNmMzQyMjA3ZGE4YzFlZDgx1hOX6A==: --dhchap-ctrl-secret DHHC-1:03:YTNlNGRiNjc2ODFhY2JhYzE0NTFhNzY5NGNiNGMwN2RmNjA3NzVkNjdlMDdlMmRhZmNhZTc5YWU5ODgzNTM3Zlh5EVM=: 00:21:08.724 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.724 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:08.724 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.724 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.724 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.724 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:08.724 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:08.724 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:08.724 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:21:08.724 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:08.724 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:08.724 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:08.724 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:08.724 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.724 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:08.724 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.724 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.724 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.724 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:08.724 12:42:34 
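The secrets handed to nvme connect above use the in-band authentication container format DHHC-1:tt:<base64>:, where tt names the transform applied to the configured secret (00 cleartext, 01 SHA-256, 02 SHA-384, 03 SHA-512, a hedged reading of the NVMe auth spec, not something this log states) and the base64 payload is the key material followed by a CRC-32. A hedged decode of key0's host secret from the trace:

    # Hedged decode of the DHHC-1 container; verify the format reading against
    # your nvme-cli documentation before relying on it.
    secret='DHHC-1:00:NGI3NGI4ODcwMTdiZjE3NTA0MGNlZWFlYTk0YmUwZWNmMzQyMjA3ZGE4YzFlZDgx1hOX6A==:'
    b64=${secret#DHHC-1:??:}   # strip the "DHHC-1:" tag and 2-digit transform id
    b64=${b64%:}               # strip the trailing delimiter
    printf '%s' "$b64" | base64 -d | wc -c   # 52 here: a 48-byte key plus 4-byte CRC-32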
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:08.724 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:08.983 00:21:08.983 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:08.983 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:08.984 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.244 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.245 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.245 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.245 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.245 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.245 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:09.245 { 00:21:09.245 "cntlid": 3, 00:21:09.245 "qid": 0, 00:21:09.245 "state": "enabled", 00:21:09.245 "thread": "nvmf_tgt_poll_group_000", 00:21:09.245 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:21:09.245 "listen_address": { 00:21:09.245 "trtype": "TCP", 00:21:09.245 "adrfam": "IPv4", 00:21:09.245 "traddr": "10.0.0.2", 00:21:09.245 "trsvcid": "4420" 00:21:09.245 }, 00:21:09.245 "peer_address": { 00:21:09.245 "trtype": "TCP", 00:21:09.245 "adrfam": "IPv4", 00:21:09.245 "traddr": "10.0.0.1", 00:21:09.245 "trsvcid": "36418" 00:21:09.245 }, 00:21:09.245 "auth": { 00:21:09.245 "state": "completed", 00:21:09.245 "digest": "sha256", 00:21:09.245 "dhgroup": "null" 00:21:09.245 } 00:21:09.245 } 00:21:09.245 ]' 00:21:09.245 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:09.245 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:09.245 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:09.505 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:09.505 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:09.505 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.505 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.505 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
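After each attach, connect_authenticate verifies the result on both ends, as the @73-77 trace lines show: the host must report a controller named nvme0, and the target-side qpair must carry the negotiated digest, DH group, and a completed auth state. The pattern, lifted from the trace ($digest/$dhgroup are the loop variables of the current pass):

    # Checks from target/auth.sh@73-77 as traced above.
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]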
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.764 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGViMjVlNTA4ZTdiZTBhZTAyYTM4MTRkY2ZmNTI3ZWTrNpHz: --dhchap-ctrl-secret DHHC-1:02:ZThlMjA4YjFhZWE2NzgxODc1YmY1Yzc2ZDEwZDVhYzcyMTY3MWQzMmFjZTQwMTUy9S132A==: 00:21:09.764 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGViMjVlNTA4ZTdiZTBhZTAyYTM4MTRkY2ZmNTI3ZWTrNpHz: --dhchap-ctrl-secret DHHC-1:02:ZThlMjA4YjFhZWE2NzgxODc1YmY1Yzc2ZDEwZDVhYzcyMTY3MWQzMmFjZTQwMTUy9S132A==: 00:21:10.333 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.333 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.333 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:10.333 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.333 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.333 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.333 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:10.333 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:10.333 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:10.334 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:21:10.334 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:10.334 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:10.334 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:10.334 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:10.334 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.334 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.334 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.334 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.334 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.334 12:42:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.334 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.334 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.593 00:21:10.593 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:10.593 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:10.593 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.852 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.852 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.852 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.852 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.852 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.852 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:10.852 { 00:21:10.852 "cntlid": 5, 00:21:10.852 "qid": 0, 00:21:10.852 "state": "enabled", 00:21:10.852 "thread": "nvmf_tgt_poll_group_000", 00:21:10.852 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:21:10.852 "listen_address": { 00:21:10.852 "trtype": "TCP", 00:21:10.852 "adrfam": "IPv4", 00:21:10.853 "traddr": "10.0.0.2", 00:21:10.853 "trsvcid": "4420" 00:21:10.853 }, 00:21:10.853 "peer_address": { 00:21:10.853 "trtype": "TCP", 00:21:10.853 "adrfam": "IPv4", 00:21:10.853 "traddr": "10.0.0.1", 00:21:10.853 "trsvcid": "36442" 00:21:10.853 }, 00:21:10.853 "auth": { 00:21:10.853 "state": "completed", 00:21:10.853 "digest": "sha256", 00:21:10.853 "dhgroup": "null" 00:21:10.853 } 00:21:10.853 } 00:21:10.853 ]' 00:21:10.853 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:10.853 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:10.853 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:10.853 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:10.853 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:11.112 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.112 12:42:36 
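Each pass exercises two independent initiators against the same subsystem, visible in the trace as hostrpc calls versus bare nvme invocations: first SPDK's own bdev_nvme host driven over /var/tmp/host.sock, then the Linux kernel host through nvme-cli. A condensed sketch of the pair, with NQNs and secrets abbreviated to placeholder variables:

    # Userspace path: SPDK bdev_nvme over the host.sock RPC server (as traced).
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key "key$keyid" "${ckey[@]}"
    hostrpc bdev_nvme_detach_controller nvme0
    # Kernel path: nvme-cli with the DHHC-1 containers for the same key index.
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid "$hostid" -l 0 --dhchap-secret "$secret" --dhchap-ctrl-secret "$csecret"
    nvme disconnect -n "$subnqn"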
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.113 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.113 12:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTNhZjFmN2Q4OTVmMTE1NjljZDhmNWNmNGM3YmJhZGFiMGExNWVlNDYwNzc0Zjhkjnv50w==: --dhchap-ctrl-secret DHHC-1:01:NWYzNWZiMDFlZjg2ZWQ4OGNjMGQ4MGYwMGZiMzZlZDnMCX9k: 00:21:11.113 12:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTNhZjFmN2Q4OTVmMTE1NjljZDhmNWNmNGM3YmJhZGFiMGExNWVlNDYwNzc0Zjhkjnv50w==: --dhchap-ctrl-secret DHHC-1:01:NWYzNWZiMDFlZjg2ZWQ4OGNjMGQ4MGYwMGZiMzZlZDnMCX9k: 00:21:11.682 12:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.682 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.682 12:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:11.682 12:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.682 12:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.682 12:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.682 12:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:11.682 12:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:11.682 12:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:11.942 12:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:21:11.942 12:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:11.942 12:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:11.942 12:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:11.942 12:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:11.943 12:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.943 12:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:21:11.943 12:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.943 12:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:11.943 12:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.943 12:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:11.943 12:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:11.943 12:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:12.202 00:21:12.202 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:12.202 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.202 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:12.462 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.462 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.462 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.462 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.462 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.462 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:12.462 { 00:21:12.462 "cntlid": 7, 00:21:12.462 "qid": 0, 00:21:12.462 "state": "enabled", 00:21:12.462 "thread": "nvmf_tgt_poll_group_000", 00:21:12.462 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:21:12.462 "listen_address": { 00:21:12.462 "trtype": "TCP", 00:21:12.462 "adrfam": "IPv4", 00:21:12.462 "traddr": "10.0.0.2", 00:21:12.462 "trsvcid": "4420" 00:21:12.462 }, 00:21:12.462 "peer_address": { 00:21:12.462 "trtype": "TCP", 00:21:12.462 "adrfam": "IPv4", 00:21:12.462 "traddr": "10.0.0.1", 00:21:12.462 "trsvcid": "40956" 00:21:12.462 }, 00:21:12.462 "auth": { 00:21:12.462 "state": "completed", 00:21:12.462 "digest": "sha256", 00:21:12.462 "dhgroup": "null" 00:21:12.462 } 00:21:12.462 } 00:21:12.462 ]' 00:21:12.462 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:12.462 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:12.462 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:12.462 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:12.462 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:12.722 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
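The key3 pass above differs from the first three: its nvmf_subsystem_add_host and attach lines carry no --dhchap-ctrlr-key. That comes from the @68 expansion visible in the trace, which only emits the controller-key arguments when a ckey exists for the index, so key3 exercises unidirectional, host-only authentication ($subnqn/$hostnqn stand in for the NQNs from the trace):

    # From target/auth.sh@68 as traced: the array expands to zero words when
    # ckeys[$3] is unset or empty, silently dropping bidirectional auth for key3.
    ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$3" "${ckey[@]}"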
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.722 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.722 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.722 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjdkYjBkODY4ZTcwZTAyMDBhOTllMTI1ZTNhODFkODU4MTNhMWVjODg1OTlmNjBkZDM4ZjJmOWYzZDE4NDBkZl0k4bM=: 00:21:12.722 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjdkYjBkODY4ZTcwZTAyMDBhOTllMTI1ZTNhODFkODU4MTNhMWVjODg1OTlmNjBkZDM4ZjJmOWYzZDE4NDBkZl0k4bM=: 00:21:13.290 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.291 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.291 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:13.291 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.291 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.291 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.291 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:13.291 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:13.291 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:13.291 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:13.550 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:21:13.550 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:13.550 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:13.550 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:13.550 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:13.550 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.550 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.550 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
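With the null group exhausted, the @119 loop moves to ffdhe2048, one of the fixed RFC 7919 groups: the CHAP challenge/response is now mixed with an ephemeral Diffie-Hellman exchange, so the negotiated session no longer depends on the shared key alone (a hedged reading of DH-HMAC-CHAP, not something this log states). The qpair dump makes the switch observable:

    # Same probe as in the trace; after this pass it should report the FFDHE
    # group where the earlier dumps said "dhgroup": "null".
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth.dhgroup'    # expect: ffdhe2048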
common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.550 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.550 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.550 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.550 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.550 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.810 00:21:13.810 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:13.810 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.810 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:14.070 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.070 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.070 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.070 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.070 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.070 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:14.070 { 00:21:14.070 "cntlid": 9, 00:21:14.070 "qid": 0, 00:21:14.070 "state": "enabled", 00:21:14.070 "thread": "nvmf_tgt_poll_group_000", 00:21:14.070 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:21:14.070 "listen_address": { 00:21:14.070 "trtype": "TCP", 00:21:14.070 "adrfam": "IPv4", 00:21:14.070 "traddr": "10.0.0.2", 00:21:14.070 "trsvcid": "4420" 00:21:14.070 }, 00:21:14.070 "peer_address": { 00:21:14.070 "trtype": "TCP", 00:21:14.070 "adrfam": "IPv4", 00:21:14.070 "traddr": "10.0.0.1", 00:21:14.070 "trsvcid": "40990" 00:21:14.070 }, 00:21:14.070 "auth": { 00:21:14.070 "state": "completed", 00:21:14.070 "digest": "sha256", 00:21:14.070 "dhgroup": "ffdhe2048" 00:21:14.070 } 00:21:14.070 } 00:21:14.070 ]' 00:21:14.070 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:14.070 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:14.070 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:14.070 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:21:14.070 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:14.329 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.329 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.329 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.329 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGI3NGI4ODcwMTdiZjE3NTA0MGNlZWFlYTk0YmUwZWNmMzQyMjA3ZGE4YzFlZDgx1hOX6A==: --dhchap-ctrl-secret DHHC-1:03:YTNlNGRiNjc2ODFhY2JhYzE0NTFhNzY5NGNiNGMwN2RmNjA3NzVkNjdlMDdlMmRhZmNhZTc5YWU5ODgzNTM3Zlh5EVM=: 00:21:14.329 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGI3NGI4ODcwMTdiZjE3NTA0MGNlZWFlYTk0YmUwZWNmMzQyMjA3ZGE4YzFlZDgx1hOX6A==: --dhchap-ctrl-secret DHHC-1:03:YTNlNGRiNjc2ODFhY2JhYzE0NTFhNzY5NGNiNGMwN2RmNjA3NzVkNjdlMDdlMmRhZmNhZTc5YWU5ODgzNTM3Zlh5EVM=: 00:21:14.898 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.898 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.898 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:14.898 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.898 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.898 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.898 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:14.898 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:14.898 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:15.158 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:21:15.158 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:15.158 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:15.158 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:15.158 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:15.158 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.158 12:42:41 
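The kernel-side connects traced above all share one shape; a hedged annotation of the flags follows (readings per nvme-cli, worth verifying against nvme connect --help):

    #   -t/-a          TCP transport to the listener; trsvcid defaults to 4420 when -s is omitted
    #   -n / -q        subsystem NQN / host NQN that nvmf_subsystem_add_host authorized
    #   -i 1 / -l 0    a single I/O queue; give up immediately on controller loss
    #   --dhchap-secret / --dhchap-ctrl-secret   host key and the key the controller must prove
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret "$secret" --dhchap-ctrl-secret "$csecret"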
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.158 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.158 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.158 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.158 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.158 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.158 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.418 00:21:15.418 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:15.418 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:15.418 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.677 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.677 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.678 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.678 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.678 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.678 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:15.678 { 00:21:15.678 "cntlid": 11, 00:21:15.678 "qid": 0, 00:21:15.678 "state": "enabled", 00:21:15.678 "thread": "nvmf_tgt_poll_group_000", 00:21:15.678 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:21:15.678 "listen_address": { 00:21:15.678 "trtype": "TCP", 00:21:15.678 "adrfam": "IPv4", 00:21:15.678 "traddr": "10.0.0.2", 00:21:15.678 "trsvcid": "4420" 00:21:15.678 }, 00:21:15.678 "peer_address": { 00:21:15.678 "trtype": "TCP", 00:21:15.678 "adrfam": "IPv4", 00:21:15.678 "traddr": "10.0.0.1", 00:21:15.678 "trsvcid": "41020" 00:21:15.678 }, 00:21:15.678 "auth": { 00:21:15.678 "state": "completed", 00:21:15.678 "digest": "sha256", 00:21:15.678 "dhgroup": "ffdhe2048" 00:21:15.678 } 00:21:15.678 } 00:21:15.678 ]' 00:21:15.678 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:15.678 12:42:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:15.678 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:15.678 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:15.678 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:15.678 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.678 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.678 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.938 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGViMjVlNTA4ZTdiZTBhZTAyYTM4MTRkY2ZmNTI3ZWTrNpHz: --dhchap-ctrl-secret DHHC-1:02:ZThlMjA4YjFhZWE2NzgxODc1YmY1Yzc2ZDEwZDVhYzcyMTY3MWQzMmFjZTQwMTUy9S132A==: 00:21:15.938 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGViMjVlNTA4ZTdiZTBhZTAyYTM4MTRkY2ZmNTI3ZWTrNpHz: --dhchap-ctrl-secret DHHC-1:02:ZThlMjA4YjFhZWE2NzgxODc1YmY1Yzc2ZDEwZDVhYzcyMTY3MWQzMmFjZTQwMTUy9S132A==: 00:21:16.506 12:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.506 12:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:16.506 12:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.506 12:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.506 12:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.506 12:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:16.506 12:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:16.506 12:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:16.766 12:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:21:16.766 12:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:16.766 12:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:16.766 12:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:16.766 12:42:42 
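Note the qpair dumps as the passes accumulate: listen_address stays pinned to 10.0.0.2:4420 while the peer trsvcid climbs (36390, 36418, 36442, 40956, 41020, ...), each connect being a fresh ephemeral TCP source port, and cntlid advances 1, 3, 5, ... as every attach allocates a new controller. A one-line probe for both:

    # Summarize the current qpair: allocated controller id plus the peer socket.
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0] | "cntlid=\(.cntlid) peer=\(.peer_address.traddr):\(.peer_address.trsvcid)"'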
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:16.766 12:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.766 12:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.766 12:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.766 12:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.766 12:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.766 12:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.766 12:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.766 12:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.025 00:21:17.025 12:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:17.025 12:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:17.025 12:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.285 12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.285 12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.285 12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.285 12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.285 12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.285 12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:17.285 { 00:21:17.285 "cntlid": 13, 00:21:17.285 "qid": 0, 00:21:17.285 "state": "enabled", 00:21:17.285 "thread": "nvmf_tgt_poll_group_000", 00:21:17.285 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:21:17.285 "listen_address": { 00:21:17.285 "trtype": "TCP", 00:21:17.285 "adrfam": "IPv4", 00:21:17.285 "traddr": "10.0.0.2", 00:21:17.285 "trsvcid": "4420" 00:21:17.285 }, 00:21:17.285 "peer_address": { 00:21:17.285 "trtype": "TCP", 00:21:17.285 "adrfam": "IPv4", 00:21:17.285 "traddr": "10.0.0.1", 00:21:17.285 "trsvcid": "41050" 00:21:17.285 }, 00:21:17.285 "auth": { 00:21:17.285 "state": "completed", 00:21:17.285 "digest": 
"sha256", 00:21:17.285 "dhgroup": "ffdhe2048" 00:21:17.285 } 00:21:17.285 } 00:21:17.285 ]' 00:21:17.285 12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:17.285 12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:17.285 12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:17.285 12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:17.285 12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:17.285 12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.285 12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.285 12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.545 12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTNhZjFmN2Q4OTVmMTE1NjljZDhmNWNmNGM3YmJhZGFiMGExNWVlNDYwNzc0Zjhkjnv50w==: --dhchap-ctrl-secret DHHC-1:01:NWYzNWZiMDFlZjg2ZWQ4OGNjMGQ4MGYwMGZiMzZlZDnMCX9k: 00:21:17.545 12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTNhZjFmN2Q4OTVmMTE1NjljZDhmNWNmNGM3YmJhZGFiMGExNWVlNDYwNzc0Zjhkjnv50w==: --dhchap-ctrl-secret DHHC-1:01:NWYzNWZiMDFlZjg2ZWQ4OGNjMGQ4MGYwMGZiMzZlZDnMCX9k: 00:21:18.114 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.114 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:18.114 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.114 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.114 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.114 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:18.114 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:18.115 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:18.374 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:21:18.374 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:18.374 12:42:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:18.374 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:18.374 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:18.374 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.374 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:21:18.374 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.374 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.374 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.374 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:18.374 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:18.374 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:18.634 00:21:18.634 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:18.634 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:18.634 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.894 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.894 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.894 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.894 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.894 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.894 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:18.894 { 00:21:18.894 "cntlid": 15, 00:21:18.894 "qid": 0, 00:21:18.894 "state": "enabled", 00:21:18.894 "thread": "nvmf_tgt_poll_group_000", 00:21:18.894 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:21:18.894 "listen_address": { 00:21:18.894 "trtype": "TCP", 00:21:18.894 "adrfam": "IPv4", 00:21:18.894 "traddr": "10.0.0.2", 00:21:18.894 "trsvcid": "4420" 00:21:18.894 }, 00:21:18.894 "peer_address": { 00:21:18.894 "trtype": "TCP", 00:21:18.894 "adrfam": "IPv4", 00:21:18.894 "traddr": "10.0.0.1", 00:21:18.894 
"trsvcid": "41090" 00:21:18.894 }, 00:21:18.894 "auth": { 00:21:18.894 "state": "completed", 00:21:18.894 "digest": "sha256", 00:21:18.894 "dhgroup": "ffdhe2048" 00:21:18.894 } 00:21:18.894 } 00:21:18.894 ]' 00:21:18.894 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:18.894 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:18.894 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:18.894 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:18.894 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:18.894 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.894 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.894 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.156 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjdkYjBkODY4ZTcwZTAyMDBhOTllMTI1ZTNhODFkODU4MTNhMWVjODg1OTlmNjBkZDM4ZjJmOWYzZDE4NDBkZl0k4bM=: 00:21:19.156 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjdkYjBkODY4ZTcwZTAyMDBhOTllMTI1ZTNhODFkODU4MTNhMWVjODg1OTlmNjBkZDM4ZjJmOWYzZDE4NDBkZl0k4bM=: 00:21:19.726 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.726 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.726 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:19.726 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.726 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.726 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.726 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:19.726 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:19.726 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:19.726 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:19.986 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:21:19.986 12:42:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:19.986 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:19.986 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:19.986 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:19.986 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.986 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.986 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.986 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.986 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.986 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.986 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.986 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.245 00:21:20.245 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:20.245 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:20.245 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.505 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.505 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.505 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.505 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.505 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.505 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:20.505 { 00:21:20.505 "cntlid": 17, 00:21:20.505 "qid": 0, 00:21:20.505 "state": "enabled", 00:21:20.505 "thread": "nvmf_tgt_poll_group_000", 00:21:20.505 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:21:20.505 "listen_address": { 00:21:20.505 "trtype": "TCP", 00:21:20.505 "adrfam": "IPv4", 
00:21:20.505 "traddr": "10.0.0.2", 00:21:20.505 "trsvcid": "4420" 00:21:20.505 }, 00:21:20.505 "peer_address": { 00:21:20.505 "trtype": "TCP", 00:21:20.505 "adrfam": "IPv4", 00:21:20.505 "traddr": "10.0.0.1", 00:21:20.505 "trsvcid": "41098" 00:21:20.505 }, 00:21:20.505 "auth": { 00:21:20.505 "state": "completed", 00:21:20.505 "digest": "sha256", 00:21:20.505 "dhgroup": "ffdhe3072" 00:21:20.505 } 00:21:20.505 } 00:21:20.505 ]' 00:21:20.505 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:20.505 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:20.505 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:20.505 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:20.505 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:20.505 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.505 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.505 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.764 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGI3NGI4ODcwMTdiZjE3NTA0MGNlZWFlYTk0YmUwZWNmMzQyMjA3ZGE4YzFlZDgx1hOX6A==: --dhchap-ctrl-secret DHHC-1:03:YTNlNGRiNjc2ODFhY2JhYzE0NTFhNzY5NGNiNGMwN2RmNjA3NzVkNjdlMDdlMmRhZmNhZTc5YWU5ODgzNTM3Zlh5EVM=: 00:21:20.764 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGI3NGI4ODcwMTdiZjE3NTA0MGNlZWFlYTk0YmUwZWNmMzQyMjA3ZGE4YzFlZDgx1hOX6A==: --dhchap-ctrl-secret DHHC-1:03:YTNlNGRiNjc2ODFhY2JhYzE0NTFhNzY5NGNiNGMwN2RmNjA3NzVkNjdlMDdlMmRhZmNhZTc5YWU5ODgzNTM3Zlh5EVM=: 00:21:21.334 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.334 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.334 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:21.334 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.334 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.334 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.334 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:21.334 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:21.334 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:21.593 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:21:21.593 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:21.593 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:21.593 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:21.593 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:21.593 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.593 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.593 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.593 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.593 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.593 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.593 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.593 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.853 00:21:21.853 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:21.853 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:21.853 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.853 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.853 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.853 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.853 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.853 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.853 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:21.853 { 
00:21:21.853 "cntlid": 19, 00:21:21.853 "qid": 0, 00:21:21.853 "state": "enabled", 00:21:21.853 "thread": "nvmf_tgt_poll_group_000", 00:21:21.853 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:21:21.853 "listen_address": { 00:21:21.853 "trtype": "TCP", 00:21:21.853 "adrfam": "IPv4", 00:21:21.853 "traddr": "10.0.0.2", 00:21:21.853 "trsvcid": "4420" 00:21:21.853 }, 00:21:21.853 "peer_address": { 00:21:21.853 "trtype": "TCP", 00:21:21.853 "adrfam": "IPv4", 00:21:21.853 "traddr": "10.0.0.1", 00:21:21.853 "trsvcid": "51754" 00:21:21.853 }, 00:21:21.853 "auth": { 00:21:21.853 "state": "completed", 00:21:21.853 "digest": "sha256", 00:21:21.853 "dhgroup": "ffdhe3072" 00:21:21.853 } 00:21:21.853 } 00:21:21.853 ]' 00:21:21.853 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:22.115 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:22.115 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:22.115 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:22.115 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:22.115 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.115 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.115 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.377 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGViMjVlNTA4ZTdiZTBhZTAyYTM4MTRkY2ZmNTI3ZWTrNpHz: --dhchap-ctrl-secret DHHC-1:02:ZThlMjA4YjFhZWE2NzgxODc1YmY1Yzc2ZDEwZDVhYzcyMTY3MWQzMmFjZTQwMTUy9S132A==: 00:21:22.377 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGViMjVlNTA4ZTdiZTBhZTAyYTM4MTRkY2ZmNTI3ZWTrNpHz: --dhchap-ctrl-secret DHHC-1:02:ZThlMjA4YjFhZWE2NzgxODc1YmY1Yzc2ZDEwZDVhYzcyMTY3MWQzMmFjZTQwMTUy9S132A==: 00:21:22.946 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.946 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.946 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:22.946 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.946 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.946 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.946 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:22.946 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:22.946 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:22.946 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:21:22.946 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:22.946 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:22.946 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:22.946 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:22.946 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.946 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.946 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.946 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.946 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.946 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.946 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.946 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.205 00:21:23.205 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:23.205 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:23.205 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.464 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.464 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.464 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.464 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.464 12:42:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.464 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:23.464 { 00:21:23.464 "cntlid": 21, 00:21:23.464 "qid": 0, 00:21:23.464 "state": "enabled", 00:21:23.464 "thread": "nvmf_tgt_poll_group_000", 00:21:23.464 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:21:23.464 "listen_address": { 00:21:23.464 "trtype": "TCP", 00:21:23.464 "adrfam": "IPv4", 00:21:23.464 "traddr": "10.0.0.2", 00:21:23.464 "trsvcid": "4420" 00:21:23.464 }, 00:21:23.464 "peer_address": { 00:21:23.464 "trtype": "TCP", 00:21:23.464 "adrfam": "IPv4", 00:21:23.464 "traddr": "10.0.0.1", 00:21:23.464 "trsvcid": "51780" 00:21:23.464 }, 00:21:23.464 "auth": { 00:21:23.464 "state": "completed", 00:21:23.464 "digest": "sha256", 00:21:23.464 "dhgroup": "ffdhe3072" 00:21:23.464 } 00:21:23.464 } 00:21:23.464 ]' 00:21:23.464 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:23.464 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:23.464 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:23.724 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:23.724 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:23.724 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.724 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.724 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.724 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTNhZjFmN2Q4OTVmMTE1NjljZDhmNWNmNGM3YmJhZGFiMGExNWVlNDYwNzc0Zjhkjnv50w==: --dhchap-ctrl-secret DHHC-1:01:NWYzNWZiMDFlZjg2ZWQ4OGNjMGQ4MGYwMGZiMzZlZDnMCX9k: 00:21:23.724 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTNhZjFmN2Q4OTVmMTE1NjljZDhmNWNmNGM3YmJhZGFiMGExNWVlNDYwNzc0Zjhkjnv50w==: --dhchap-ctrl-secret DHHC-1:01:NWYzNWZiMDFlZjg2ZWQ4OGNjMGQ4MGYwMGZiMzZlZDnMCX9k: 00:21:24.293 12:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.552 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.552 12:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:24.552 12:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.552 12:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.552 12:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:21:24.552 12:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:24.552 12:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:24.552 12:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:24.552 12:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:21:24.552 12:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:24.552 12:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:24.552 12:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:24.552 12:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:24.552 12:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.552 12:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:21:24.552 12:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.553 12:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.553 12:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.553 12:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:24.553 12:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:24.553 12:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:24.811 00:21:24.811 12:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:24.812 12:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:24.812 12:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.071 12:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.071 12:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.071 12:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.071 12:42:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.071 12:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.071 12:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:25.071 { 00:21:25.071 "cntlid": 23, 00:21:25.071 "qid": 0, 00:21:25.071 "state": "enabled", 00:21:25.071 "thread": "nvmf_tgt_poll_group_000", 00:21:25.071 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:21:25.071 "listen_address": { 00:21:25.071 "trtype": "TCP", 00:21:25.071 "adrfam": "IPv4", 00:21:25.071 "traddr": "10.0.0.2", 00:21:25.071 "trsvcid": "4420" 00:21:25.071 }, 00:21:25.071 "peer_address": { 00:21:25.071 "trtype": "TCP", 00:21:25.071 "adrfam": "IPv4", 00:21:25.071 "traddr": "10.0.0.1", 00:21:25.071 "trsvcid": "51794" 00:21:25.071 }, 00:21:25.071 "auth": { 00:21:25.071 "state": "completed", 00:21:25.071 "digest": "sha256", 00:21:25.071 "dhgroup": "ffdhe3072" 00:21:25.071 } 00:21:25.071 } 00:21:25.071 ]' 00:21:25.071 12:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:25.071 12:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:25.071 12:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:25.071 12:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:25.071 12:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:25.330 12:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.330 12:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.330 12:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.330 12:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjdkYjBkODY4ZTcwZTAyMDBhOTllMTI1ZTNhODFkODU4MTNhMWVjODg1OTlmNjBkZDM4ZjJmOWYzZDE4NDBkZl0k4bM=: 00:21:25.330 12:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjdkYjBkODY4ZTcwZTAyMDBhOTllMTI1ZTNhODFkODU4MTNhMWVjODg1OTlmNjBkZDM4ZjJmOWYzZDE4NDBkZl0k4bM=: 00:21:25.897 12:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.898 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.898 12:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:25.898 12:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.898 12:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.898 12:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:21:25.898 12:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:25.898 12:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:25.898 12:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:25.898 12:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:26.157 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:21:26.157 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:26.157 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:26.157 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:26.157 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:26.157 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.157 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.157 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.157 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.157 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.157 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.157 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.157 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.417 00:21:26.417 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:26.417 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:26.417 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.677 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.677 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.677 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.677 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.677 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.677 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:26.677 { 00:21:26.677 "cntlid": 25, 00:21:26.677 "qid": 0, 00:21:26.677 "state": "enabled", 00:21:26.677 "thread": "nvmf_tgt_poll_group_000", 00:21:26.677 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:21:26.677 "listen_address": { 00:21:26.677 "trtype": "TCP", 00:21:26.677 "adrfam": "IPv4", 00:21:26.677 "traddr": "10.0.0.2", 00:21:26.677 "trsvcid": "4420" 00:21:26.677 }, 00:21:26.677 "peer_address": { 00:21:26.677 "trtype": "TCP", 00:21:26.677 "adrfam": "IPv4", 00:21:26.677 "traddr": "10.0.0.1", 00:21:26.677 "trsvcid": "51824" 00:21:26.677 }, 00:21:26.677 "auth": { 00:21:26.677 "state": "completed", 00:21:26.677 "digest": "sha256", 00:21:26.677 "dhgroup": "ffdhe4096" 00:21:26.677 } 00:21:26.677 } 00:21:26.677 ]' 00:21:26.677 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:26.677 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:26.677 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:26.677 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:26.677 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:26.936 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.936 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.936 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.936 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGI3NGI4ODcwMTdiZjE3NTA0MGNlZWFlYTk0YmUwZWNmMzQyMjA3ZGE4YzFlZDgx1hOX6A==: --dhchap-ctrl-secret DHHC-1:03:YTNlNGRiNjc2ODFhY2JhYzE0NTFhNzY5NGNiNGMwN2RmNjA3NzVkNjdlMDdlMmRhZmNhZTc5YWU5ODgzNTM3Zlh5EVM=: 00:21:26.937 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGI3NGI4ODcwMTdiZjE3NTA0MGNlZWFlYTk0YmUwZWNmMzQyMjA3ZGE4YzFlZDgx1hOX6A==: --dhchap-ctrl-secret DHHC-1:03:YTNlNGRiNjc2ODFhY2JhYzE0NTFhNzY5NGNiNGMwN2RmNjA3NzVkNjdlMDdlMmRhZmNhZTc5YWU5ODgzNTM3Zlh5EVM=: 00:21:27.505 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.506 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:27.506 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.506 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.506 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.506 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:27.506 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:27.506 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:27.765 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:21:27.765 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:27.765 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:27.765 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:27.765 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:27.765 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:27.765 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:27.765 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.765 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.765 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.765 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:27.765 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:27.765 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.025 00:21:28.025 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:28.025 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:28.025 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.284 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.284 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:28.284 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.284 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.284 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.284 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:28.284 { 00:21:28.284 "cntlid": 27, 00:21:28.284 "qid": 0, 00:21:28.284 "state": "enabled", 00:21:28.284 "thread": "nvmf_tgt_poll_group_000", 00:21:28.284 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:21:28.284 "listen_address": { 00:21:28.284 "trtype": "TCP", 00:21:28.284 "adrfam": "IPv4", 00:21:28.284 "traddr": "10.0.0.2", 00:21:28.284 "trsvcid": "4420" 00:21:28.284 }, 00:21:28.284 "peer_address": { 00:21:28.284 "trtype": "TCP", 00:21:28.284 "adrfam": "IPv4", 00:21:28.284 "traddr": "10.0.0.1", 00:21:28.284 "trsvcid": "51850" 00:21:28.284 }, 00:21:28.284 "auth": { 00:21:28.284 "state": "completed", 00:21:28.284 "digest": "sha256", 00:21:28.284 "dhgroup": "ffdhe4096" 00:21:28.284 } 00:21:28.284 } 00:21:28.284 ]' 00:21:28.284 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:28.284 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:28.284 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:28.284 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:28.284 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:28.544 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:28.544 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:28.544 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.544 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGViMjVlNTA4ZTdiZTBhZTAyYTM4MTRkY2ZmNTI3ZWTrNpHz: --dhchap-ctrl-secret DHHC-1:02:ZThlMjA4YjFhZWE2NzgxODc1YmY1Yzc2ZDEwZDVhYzcyMTY3MWQzMmFjZTQwMTUy9S132A==: 00:21:28.544 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGViMjVlNTA4ZTdiZTBhZTAyYTM4MTRkY2ZmNTI3ZWTrNpHz: --dhchap-ctrl-secret DHHC-1:02:ZThlMjA4YjFhZWE2NzgxODc1YmY1Yzc2ZDEwZDVhYzcyMTY3MWQzMmFjZTQwMTUy9S132A==: 00:21:29.112 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:21:29.113 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:29.113 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:29.113 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.113 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.113 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.113 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:29.113 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:29.113 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:29.372 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:21:29.372 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:29.372 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:29.372 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:29.372 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:29.372 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:29.372 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:29.372 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.372 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.372 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.372 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:29.372 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:29.372 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:29.631 00:21:29.631 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
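
The loop traced here has the same shape for every digest, DH group, and key index: the host's allowed DH-HMAC-CHAP parameters are narrowed with bdev_nvme_set_options, the host NQN is added to the subsystem with a key pair, a controller is attached through the host bdev layer, and nvmf_subsystem_get_qpairs plus jq confirm that the qpair's auth block reports the expected state, digest, and dhgroup before everything is torn down for the next combination. A minimal sketch of one iteration, reusing the subsystem NQN, host NQN, sockets, and key names visible in the trace (paths are shortened, and the key2/ckey2 keyring entries are assumed to be loaded already; this illustrates the RPC sequence, it is not the auth.sh script itself):

subnqn="nqn.2024-03.io.spdk:cnode0"
hostnqn="nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562"

# Host side: restrict negotiation to the digest/DH group under test.
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

# Target side: admit the host with key2 (host auth) and ckey2 (controller auth).
scripts/rpc.py nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Host side: attach a controller over TCP, authenticating with the same key pair.
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
    -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Target side: the qpair's auth block should read "completed sha256 ffdhe4096".
scripts/rpc.py nvmf_subsystem_get_qpairs "$subnqn" \
    | jq -r '.[0].auth | "\(.state) \(.digest) \(.dhgroup)"'

# Tear down before the next key/group combination.
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
scripts/rpc.py nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The DHHC-1 strings passed to nvme connect elsewhere in the trace are the interchange form of the same secrets: the two-digit field after the DHHC-1 prefix encodes how the configured secret is transformed (00 for none, 01/02/03 for SHA-256/384/512), which is why the :03: values in the trace are the longest.
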
00:21:29.631 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:29.631 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.891 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.891 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.891 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.891 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.891 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.891 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:29.891 { 00:21:29.891 "cntlid": 29, 00:21:29.891 "qid": 0, 00:21:29.891 "state": "enabled", 00:21:29.891 "thread": "nvmf_tgt_poll_group_000", 00:21:29.891 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:21:29.891 "listen_address": { 00:21:29.891 "trtype": "TCP", 00:21:29.891 "adrfam": "IPv4", 00:21:29.891 "traddr": "10.0.0.2", 00:21:29.891 "trsvcid": "4420" 00:21:29.891 }, 00:21:29.891 "peer_address": { 00:21:29.891 "trtype": "TCP", 00:21:29.891 "adrfam": "IPv4", 00:21:29.891 "traddr": "10.0.0.1", 00:21:29.891 "trsvcid": "51874" 00:21:29.891 }, 00:21:29.891 "auth": { 00:21:29.891 "state": "completed", 00:21:29.891 "digest": "sha256", 00:21:29.891 "dhgroup": "ffdhe4096" 00:21:29.891 } 00:21:29.891 } 00:21:29.891 ]' 00:21:29.891 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:29.891 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:29.891 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:29.891 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:29.891 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:30.150 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.150 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.150 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.150 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTNhZjFmN2Q4OTVmMTE1NjljZDhmNWNmNGM3YmJhZGFiMGExNWVlNDYwNzc0Zjhkjnv50w==: --dhchap-ctrl-secret DHHC-1:01:NWYzNWZiMDFlZjg2ZWQ4OGNjMGQ4MGYwMGZiMzZlZDnMCX9k: 00:21:30.150 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTNhZjFmN2Q4OTVmMTE1NjljZDhmNWNmNGM3YmJhZGFiMGExNWVlNDYwNzc0Zjhkjnv50w==: 
--dhchap-ctrl-secret DHHC-1:01:NWYzNWZiMDFlZjg2ZWQ4OGNjMGQ4MGYwMGZiMzZlZDnMCX9k: 00:21:30.720 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.720 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.720 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:30.720 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.720 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.720 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.720 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:30.720 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:30.720 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:30.979 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:21:30.979 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:30.979 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:30.979 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:30.979 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:30.979 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.979 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:21:30.979 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.980 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.980 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.980 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:30.980 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:30.980 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:31.239 00:21:31.239 12:42:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:31.239 12:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:31.239 12:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.498 12:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.499 12:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.499 12:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.499 12:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.499 12:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.499 12:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:31.499 { 00:21:31.499 "cntlid": 31, 00:21:31.499 "qid": 0, 00:21:31.499 "state": "enabled", 00:21:31.499 "thread": "nvmf_tgt_poll_group_000", 00:21:31.499 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:21:31.499 "listen_address": { 00:21:31.499 "trtype": "TCP", 00:21:31.499 "adrfam": "IPv4", 00:21:31.499 "traddr": "10.0.0.2", 00:21:31.499 "trsvcid": "4420" 00:21:31.499 }, 00:21:31.499 "peer_address": { 00:21:31.499 "trtype": "TCP", 00:21:31.499 "adrfam": "IPv4", 00:21:31.499 "traddr": "10.0.0.1", 00:21:31.499 "trsvcid": "37270" 00:21:31.499 }, 00:21:31.499 "auth": { 00:21:31.499 "state": "completed", 00:21:31.499 "digest": "sha256", 00:21:31.499 "dhgroup": "ffdhe4096" 00:21:31.499 } 00:21:31.499 } 00:21:31.499 ]' 00:21:31.499 12:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:31.499 12:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:31.499 12:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:31.499 12:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:31.499 12:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:31.499 12:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.499 12:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.499 12:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.758 12:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjdkYjBkODY4ZTcwZTAyMDBhOTllMTI1ZTNhODFkODU4MTNhMWVjODg1OTlmNjBkZDM4ZjJmOWYzZDE4NDBkZl0k4bM=: 00:21:31.758 12:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:03:YjdkYjBkODY4ZTcwZTAyMDBhOTllMTI1ZTNhODFkODU4MTNhMWVjODg1OTlmNjBkZDM4ZjJmOWYzZDE4NDBkZl0k4bM=: 00:21:32.327 12:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.327 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.327 12:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:32.327 12:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.327 12:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.327 12:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.327 12:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:32.327 12:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:32.327 12:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:32.327 12:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:32.587 12:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:21:32.587 12:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:32.587 12:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:32.587 12:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:32.587 12:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:32.587 12:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.587 12:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:32.587 12:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.587 12:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.587 12:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.587 12:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:32.587 12:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:32.587 12:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:32.846 00:21:32.846 12:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:32.846 12:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:32.846 12:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.106 12:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.106 12:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.106 12:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.106 12:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.106 12:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.106 12:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:33.106 { 00:21:33.106 "cntlid": 33, 00:21:33.106 "qid": 0, 00:21:33.106 "state": "enabled", 00:21:33.106 "thread": "nvmf_tgt_poll_group_000", 00:21:33.106 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:21:33.106 "listen_address": { 00:21:33.106 "trtype": "TCP", 00:21:33.106 "adrfam": "IPv4", 00:21:33.106 "traddr": "10.0.0.2", 00:21:33.106 "trsvcid": "4420" 00:21:33.106 }, 00:21:33.106 "peer_address": { 00:21:33.106 "trtype": "TCP", 00:21:33.106 "adrfam": "IPv4", 00:21:33.106 "traddr": "10.0.0.1", 00:21:33.106 "trsvcid": "37300" 00:21:33.106 }, 00:21:33.106 "auth": { 00:21:33.106 "state": "completed", 00:21:33.106 "digest": "sha256", 00:21:33.106 "dhgroup": "ffdhe6144" 00:21:33.106 } 00:21:33.106 } 00:21:33.106 ]' 00:21:33.106 12:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:33.106 12:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:33.106 12:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:33.366 12:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:33.366 12:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:33.366 12:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.366 12:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.366 12:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.366 12:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGI3NGI4ODcwMTdiZjE3NTA0MGNlZWFlYTk0YmUwZWNmMzQyMjA3ZGE4YzFlZDgx1hOX6A==: --dhchap-ctrl-secret 
DHHC-1:03:YTNlNGRiNjc2ODFhY2JhYzE0NTFhNzY5NGNiNGMwN2RmNjA3NzVkNjdlMDdlMmRhZmNhZTc5YWU5ODgzNTM3Zlh5EVM=: 00:21:33.366 12:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGI3NGI4ODcwMTdiZjE3NTA0MGNlZWFlYTk0YmUwZWNmMzQyMjA3ZGE4YzFlZDgx1hOX6A==: --dhchap-ctrl-secret DHHC-1:03:YTNlNGRiNjc2ODFhY2JhYzE0NTFhNzY5NGNiNGMwN2RmNjA3NzVkNjdlMDdlMmRhZmNhZTc5YWU5ODgzNTM3Zlh5EVM=: 00:21:33.934 12:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.193 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.193 12:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:34.193 12:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.194 12:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.194 12:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.194 12:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:34.194 12:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:34.194 12:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:34.194 12:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:21:34.194 12:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:34.194 12:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:34.194 12:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:34.194 12:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:34.194 12:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.194 12:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:34.194 12:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.194 12:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.194 12:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.194 12:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:34.194 12:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:34.194 12:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:34.763 00:21:34.763 12:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:34.763 12:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:34.763 12:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.763 12:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.763 12:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.763 12:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.763 12:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.763 12:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.763 12:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:34.763 { 00:21:34.763 "cntlid": 35, 00:21:34.763 "qid": 0, 00:21:34.763 "state": "enabled", 00:21:34.763 "thread": "nvmf_tgt_poll_group_000", 00:21:34.763 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:21:34.763 "listen_address": { 00:21:34.763 "trtype": "TCP", 00:21:34.763 "adrfam": "IPv4", 00:21:34.763 "traddr": "10.0.0.2", 00:21:34.763 "trsvcid": "4420" 00:21:34.763 }, 00:21:34.763 "peer_address": { 00:21:34.763 "trtype": "TCP", 00:21:34.763 "adrfam": "IPv4", 00:21:34.763 "traddr": "10.0.0.1", 00:21:34.763 "trsvcid": "37320" 00:21:34.763 }, 00:21:34.763 "auth": { 00:21:34.763 "state": "completed", 00:21:34.763 "digest": "sha256", 00:21:34.763 "dhgroup": "ffdhe6144" 00:21:34.763 } 00:21:34.763 } 00:21:34.763 ]' 00:21:34.763 12:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:34.763 12:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:34.763 12:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:35.023 12:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:35.023 12:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:35.023 12:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.023 12:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.023 12:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.282 12:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGViMjVlNTA4ZTdiZTBhZTAyYTM4MTRkY2ZmNTI3ZWTrNpHz: --dhchap-ctrl-secret DHHC-1:02:ZThlMjA4YjFhZWE2NzgxODc1YmY1Yzc2ZDEwZDVhYzcyMTY3MWQzMmFjZTQwMTUy9S132A==: 00:21:35.282 12:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGViMjVlNTA4ZTdiZTBhZTAyYTM4MTRkY2ZmNTI3ZWTrNpHz: --dhchap-ctrl-secret DHHC-1:02:ZThlMjA4YjFhZWE2NzgxODc1YmY1Yzc2ZDEwZDVhYzcyMTY3MWQzMmFjZTQwMTUy9S132A==: 00:21:35.851 12:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.851 12:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:35.851 12:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.851 12:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.851 12:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.851 12:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:35.851 12:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:35.851 12:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:35.851 12:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:21:35.851 12:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:35.851 12:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:35.851 12:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:35.851 12:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:35.851 12:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.851 12:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:35.851 12:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.851 12:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.851 12:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.851 12:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:35.851 12:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:35.851 12:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:36.419 00:21:36.419 12:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:36.419 12:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:36.419 12:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.419 12:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.419 12:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.419 12:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.419 12:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.419 12:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.419 12:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:36.419 { 00:21:36.419 "cntlid": 37, 00:21:36.420 "qid": 0, 00:21:36.420 "state": "enabled", 00:21:36.420 "thread": "nvmf_tgt_poll_group_000", 00:21:36.420 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:21:36.420 "listen_address": { 00:21:36.420 "trtype": "TCP", 00:21:36.420 "adrfam": "IPv4", 00:21:36.420 "traddr": "10.0.0.2", 00:21:36.420 "trsvcid": "4420" 00:21:36.420 }, 00:21:36.420 "peer_address": { 00:21:36.420 "trtype": "TCP", 00:21:36.420 "adrfam": "IPv4", 00:21:36.420 "traddr": "10.0.0.1", 00:21:36.420 "trsvcid": "37344" 00:21:36.420 }, 00:21:36.420 "auth": { 00:21:36.420 "state": "completed", 00:21:36.420 "digest": "sha256", 00:21:36.420 "dhgroup": "ffdhe6144" 00:21:36.420 } 00:21:36.420 } 00:21:36.420 ]' 00:21:36.420 12:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:36.420 12:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:36.420 12:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:36.679 12:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:36.679 12:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:36.679 12:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.679 12:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:21:36.679 12:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.679 12:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTNhZjFmN2Q4OTVmMTE1NjljZDhmNWNmNGM3YmJhZGFiMGExNWVlNDYwNzc0Zjhkjnv50w==: --dhchap-ctrl-secret DHHC-1:01:NWYzNWZiMDFlZjg2ZWQ4OGNjMGQ4MGYwMGZiMzZlZDnMCX9k: 00:21:36.679 12:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTNhZjFmN2Q4OTVmMTE1NjljZDhmNWNmNGM3YmJhZGFiMGExNWVlNDYwNzc0Zjhkjnv50w==: --dhchap-ctrl-secret DHHC-1:01:NWYzNWZiMDFlZjg2ZWQ4OGNjMGQ4MGYwMGZiMzZlZDnMCX9k: 00:21:37.247 12:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.247 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.247 12:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:37.247 12:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.247 12:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.247 12:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.247 12:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:37.247 12:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:37.247 12:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:37.506 12:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:21:37.506 12:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:37.506 12:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:37.506 12:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:37.506 12:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:37.506 12:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.507 12:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:21:37.507 12:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.507 12:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.507 12:43:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.507 12:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:37.507 12:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:37.507 12:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:37.767 00:21:38.026 12:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:38.026 12:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:38.026 12:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.026 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.026 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.027 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.027 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.027 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.027 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:38.027 { 00:21:38.027 "cntlid": 39, 00:21:38.027 "qid": 0, 00:21:38.027 "state": "enabled", 00:21:38.027 "thread": "nvmf_tgt_poll_group_000", 00:21:38.027 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:21:38.027 "listen_address": { 00:21:38.027 "trtype": "TCP", 00:21:38.027 "adrfam": "IPv4", 00:21:38.027 "traddr": "10.0.0.2", 00:21:38.027 "trsvcid": "4420" 00:21:38.027 }, 00:21:38.027 "peer_address": { 00:21:38.027 "trtype": "TCP", 00:21:38.027 "adrfam": "IPv4", 00:21:38.027 "traddr": "10.0.0.1", 00:21:38.027 "trsvcid": "37368" 00:21:38.027 }, 00:21:38.027 "auth": { 00:21:38.027 "state": "completed", 00:21:38.027 "digest": "sha256", 00:21:38.027 "dhgroup": "ffdhe6144" 00:21:38.027 } 00:21:38.027 } 00:21:38.027 ]' 00:21:38.027 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:38.027 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:38.027 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:38.286 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:38.286 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:38.286 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:21:38.286 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.286 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.546 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjdkYjBkODY4ZTcwZTAyMDBhOTllMTI1ZTNhODFkODU4MTNhMWVjODg1OTlmNjBkZDM4ZjJmOWYzZDE4NDBkZl0k4bM=: 00:21:38.546 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjdkYjBkODY4ZTcwZTAyMDBhOTllMTI1ZTNhODFkODU4MTNhMWVjODg1OTlmNjBkZDM4ZjJmOWYzZDE4NDBkZl0k4bM=: 00:21:39.114 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.115 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:39.115 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.115 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.115 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.115 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:39.115 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:39.115 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:39.115 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:39.115 12:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:21:39.115 12:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:39.115 12:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:39.115 12:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:39.115 12:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:39.115 12:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.115 12:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:39.115 12:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
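Each pass above is one iteration of the same DH-HMAC-CHAP round trip: the host-side bdev layer is restricted to a single digest/dhgroup pair, the host NQN is registered on the subsystem with a key (and, where one exists, a controller key for bidirectional authentication), and a TCP controller is attached through the host RPC socket. A minimal sketch of that round trip, assuming the same SPDK checkout and /var/tmp/host.sock host RPC server as this run, with $hostnqn standing in for the uuid-based host NQN used throughout, and key0/ckey0 being key names registered earlier in the run:

  # Host side: accept exactly one digest/dhgroup combination.
  ./spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

  # Target side (default RPC socket): register the host with its key;
  # passing a ctrlr key as well makes the authentication bidirectional.
  ./spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Attach over TCP; the controller only comes up if DH-HMAC-CHAP
  # completes with the digest/dhgroup allowed above.
  ./spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
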
00:21:39.115 12:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.115 12:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.115 12:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:39.115 12:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:39.115 12:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:39.683 00:21:39.683 12:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:39.683 12:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.683 12:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:39.942 12:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.942 12:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.942 12:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.942 12:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.942 12:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.942 12:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:39.942 { 00:21:39.942 "cntlid": 41, 00:21:39.942 "qid": 0, 00:21:39.942 "state": "enabled", 00:21:39.942 "thread": "nvmf_tgt_poll_group_000", 00:21:39.942 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:21:39.942 "listen_address": { 00:21:39.942 "trtype": "TCP", 00:21:39.942 "adrfam": "IPv4", 00:21:39.942 "traddr": "10.0.0.2", 00:21:39.942 "trsvcid": "4420" 00:21:39.942 }, 00:21:39.942 "peer_address": { 00:21:39.942 "trtype": "TCP", 00:21:39.942 "adrfam": "IPv4", 00:21:39.942 "traddr": "10.0.0.1", 00:21:39.942 "trsvcid": "37398" 00:21:39.942 }, 00:21:39.942 "auth": { 00:21:39.942 "state": "completed", 00:21:39.942 "digest": "sha256", 00:21:39.942 "dhgroup": "ffdhe8192" 00:21:39.942 } 00:21:39.942 } 00:21:39.942 ]' 00:21:39.943 12:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:39.943 12:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:39.943 12:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:39.943 12:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:39.943 12:43:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:39.943 12:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.943 12:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.943 12:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.202 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGI3NGI4ODcwMTdiZjE3NTA0MGNlZWFlYTk0YmUwZWNmMzQyMjA3ZGE4YzFlZDgx1hOX6A==: --dhchap-ctrl-secret DHHC-1:03:YTNlNGRiNjc2ODFhY2JhYzE0NTFhNzY5NGNiNGMwN2RmNjA3NzVkNjdlMDdlMmRhZmNhZTc5YWU5ODgzNTM3Zlh5EVM=: 00:21:40.202 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGI3NGI4ODcwMTdiZjE3NTA0MGNlZWFlYTk0YmUwZWNmMzQyMjA3ZGE4YzFlZDgx1hOX6A==: --dhchap-ctrl-secret DHHC-1:03:YTNlNGRiNjc2ODFhY2JhYzE0NTFhNzY5NGNiNGMwN2RmNjA3NzVkNjdlMDdlMmRhZmNhZTc5YWU5ODgzNTM3Zlh5EVM=: 00:21:40.772 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.772 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.772 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:40.772 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.772 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.772 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.772 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:40.772 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:40.772 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:41.031 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:21:41.031 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:41.031 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:41.031 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:41.031 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:41.031 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.032 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:41.032 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.032 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.032 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.032 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:41.032 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:41.032 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:41.601 00:21:41.601 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:41.601 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:41.601 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.601 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.601 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.601 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.601 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.601 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.601 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:41.601 { 00:21:41.601 "cntlid": 43, 00:21:41.601 "qid": 0, 00:21:41.601 "state": "enabled", 00:21:41.601 "thread": "nvmf_tgt_poll_group_000", 00:21:41.601 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:21:41.601 "listen_address": { 00:21:41.601 "trtype": "TCP", 00:21:41.601 "adrfam": "IPv4", 00:21:41.601 "traddr": "10.0.0.2", 00:21:41.601 "trsvcid": "4420" 00:21:41.601 }, 00:21:41.601 "peer_address": { 00:21:41.601 "trtype": "TCP", 00:21:41.601 "adrfam": "IPv4", 00:21:41.601 "traddr": "10.0.0.1", 00:21:41.601 "trsvcid": "38522" 00:21:41.601 }, 00:21:41.601 "auth": { 00:21:41.601 "state": "completed", 00:21:41.601 "digest": "sha256", 00:21:41.601 "dhgroup": "ffdhe8192" 00:21:41.601 } 00:21:41.601 } 00:21:41.601 ]' 00:21:41.601 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:41.861 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:21:41.861 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:41.861 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:41.861 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:41.861 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.861 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.861 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.119 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGViMjVlNTA4ZTdiZTBhZTAyYTM4MTRkY2ZmNTI3ZWTrNpHz: --dhchap-ctrl-secret DHHC-1:02:ZThlMjA4YjFhZWE2NzgxODc1YmY1Yzc2ZDEwZDVhYzcyMTY3MWQzMmFjZTQwMTUy9S132A==: 00:21:42.119 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGViMjVlNTA4ZTdiZTBhZTAyYTM4MTRkY2ZmNTI3ZWTrNpHz: --dhchap-ctrl-secret DHHC-1:02:ZThlMjA4YjFhZWE2NzgxODc1YmY1Yzc2ZDEwZDVhYzcyMTY3MWQzMmFjZTQwMTUy9S132A==: 00:21:42.688 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.688 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.688 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:42.688 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.688 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.688 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.688 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:42.688 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:42.688 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:42.688 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:21:42.688 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:42.688 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:42.688 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:42.688 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:42.688 12:43:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.688 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:42.688 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.688 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.688 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.688 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:42.688 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:42.688 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:43.258 00:21:43.258 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:43.258 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:43.258 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.517 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.517 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.517 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.517 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.517 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.517 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:43.517 { 00:21:43.517 "cntlid": 45, 00:21:43.517 "qid": 0, 00:21:43.517 "state": "enabled", 00:21:43.517 "thread": "nvmf_tgt_poll_group_000", 00:21:43.517 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:21:43.517 "listen_address": { 00:21:43.517 "trtype": "TCP", 00:21:43.517 "adrfam": "IPv4", 00:21:43.517 "traddr": "10.0.0.2", 00:21:43.517 "trsvcid": "4420" 00:21:43.517 }, 00:21:43.517 "peer_address": { 00:21:43.517 "trtype": "TCP", 00:21:43.517 "adrfam": "IPv4", 00:21:43.517 "traddr": "10.0.0.1", 00:21:43.517 "trsvcid": "38554" 00:21:43.517 }, 00:21:43.517 "auth": { 00:21:43.517 "state": "completed", 00:21:43.517 "digest": "sha256", 00:21:43.517 "dhgroup": "ffdhe8192" 00:21:43.517 } 00:21:43.517 } 00:21:43.517 ]' 00:21:43.517 
12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:43.517 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:43.517 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:43.517 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:43.517 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:43.517 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.517 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.517 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.776 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTNhZjFmN2Q4OTVmMTE1NjljZDhmNWNmNGM3YmJhZGFiMGExNWVlNDYwNzc0Zjhkjnv50w==: --dhchap-ctrl-secret DHHC-1:01:NWYzNWZiMDFlZjg2ZWQ4OGNjMGQ4MGYwMGZiMzZlZDnMCX9k: 00:21:43.776 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTNhZjFmN2Q4OTVmMTE1NjljZDhmNWNmNGM3YmJhZGFiMGExNWVlNDYwNzc0Zjhkjnv50w==: --dhchap-ctrl-secret DHHC-1:01:NWYzNWZiMDFlZjg2ZWQ4OGNjMGQ4MGYwMGZiMzZlZDnMCX9k: 00:21:44.344 12:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.344 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.344 12:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:44.344 12:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.344 12:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.344 12:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.344 12:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:44.344 12:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:44.344 12:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:44.603 12:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:21:44.603 12:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:44.603 12:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:44.603 12:43:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:44.603 12:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:44.603 12:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.603 12:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:21:44.603 12:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.603 12:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.603 12:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.603 12:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:44.603 12:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:44.603 12:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:45.171 00:21:45.171 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:45.171 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.171 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:45.171 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.171 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.171 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.171 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.171 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.171 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:45.171 { 00:21:45.171 "cntlid": 47, 00:21:45.171 "qid": 0, 00:21:45.171 "state": "enabled", 00:21:45.171 "thread": "nvmf_tgt_poll_group_000", 00:21:45.171 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:21:45.171 "listen_address": { 00:21:45.171 "trtype": "TCP", 00:21:45.171 "adrfam": "IPv4", 00:21:45.171 "traddr": "10.0.0.2", 00:21:45.171 "trsvcid": "4420" 00:21:45.171 }, 00:21:45.171 "peer_address": { 00:21:45.171 "trtype": "TCP", 00:21:45.171 "adrfam": "IPv4", 00:21:45.171 "traddr": "10.0.0.1", 00:21:45.171 "trsvcid": "38590" 00:21:45.171 }, 00:21:45.171 "auth": { 00:21:45.171 "state": "completed", 00:21:45.171 
"digest": "sha256", 00:21:45.171 "dhgroup": "ffdhe8192" 00:21:45.171 } 00:21:45.171 } 00:21:45.171 ]' 00:21:45.171 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:45.432 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:45.432 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:45.432 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:45.432 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:45.432 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.432 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.432 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.692 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjdkYjBkODY4ZTcwZTAyMDBhOTllMTI1ZTNhODFkODU4MTNhMWVjODg1OTlmNjBkZDM4ZjJmOWYzZDE4NDBkZl0k4bM=: 00:21:45.692 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjdkYjBkODY4ZTcwZTAyMDBhOTllMTI1ZTNhODFkODU4MTNhMWVjODg1OTlmNjBkZDM4ZjJmOWYzZDE4NDBkZl0k4bM=: 00:21:46.261 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.261 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.261 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:46.261 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.261 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.261 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.261 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:46.261 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:46.261 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:46.261 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:46.261 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:46.261 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:21:46.261 12:43:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:46.261 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:46.261 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:46.261 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:46.261 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.261 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:46.261 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.261 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.261 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.261 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:46.261 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:46.261 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:46.520 00:21:46.520 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:46.520 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:46.520 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.779 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.779 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.779 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.779 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.779 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.779 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:46.779 { 00:21:46.779 "cntlid": 49, 00:21:46.780 "qid": 0, 00:21:46.780 "state": "enabled", 00:21:46.780 "thread": "nvmf_tgt_poll_group_000", 00:21:46.780 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:21:46.780 "listen_address": { 00:21:46.780 "trtype": "TCP", 00:21:46.780 "adrfam": "IPv4", 
00:21:46.780 "traddr": "10.0.0.2", 00:21:46.780 "trsvcid": "4420" 00:21:46.780 }, 00:21:46.780 "peer_address": { 00:21:46.780 "trtype": "TCP", 00:21:46.780 "adrfam": "IPv4", 00:21:46.780 "traddr": "10.0.0.1", 00:21:46.780 "trsvcid": "38612" 00:21:46.780 }, 00:21:46.780 "auth": { 00:21:46.780 "state": "completed", 00:21:46.780 "digest": "sha384", 00:21:46.780 "dhgroup": "null" 00:21:46.780 } 00:21:46.780 } 00:21:46.780 ]' 00:21:46.780 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:46.780 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:46.780 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:47.038 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:47.038 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:47.038 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.038 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.039 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.299 12:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGI3NGI4ODcwMTdiZjE3NTA0MGNlZWFlYTk0YmUwZWNmMzQyMjA3ZGE4YzFlZDgx1hOX6A==: --dhchap-ctrl-secret DHHC-1:03:YTNlNGRiNjc2ODFhY2JhYzE0NTFhNzY5NGNiNGMwN2RmNjA3NzVkNjdlMDdlMmRhZmNhZTc5YWU5ODgzNTM3Zlh5EVM=: 00:21:47.299 12:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGI3NGI4ODcwMTdiZjE3NTA0MGNlZWFlYTk0YmUwZWNmMzQyMjA3ZGE4YzFlZDgx1hOX6A==: --dhchap-ctrl-secret DHHC-1:03:YTNlNGRiNjc2ODFhY2JhYzE0NTFhNzY5NGNiNGMwN2RmNjA3NzVkNjdlMDdlMmRhZmNhZTc5YWU5ODgzNTM3Zlh5EVM=: 00:21:47.868 12:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.868 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.868 12:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:47.868 12:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.868 12:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.868 12:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.868 12:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:47.868 12:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:47.868 12:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:47.868 12:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:21:47.868 12:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:47.868 12:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:47.868 12:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:47.868 12:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:47.868 12:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.868 12:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:47.868 12:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.868 12:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.868 12:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.868 12:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:47.868 12:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:47.868 12:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:48.127 00:21:48.127 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:48.127 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:48.127 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.387 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.387 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:48.387 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.387 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.387 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.387 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:48.387 { 00:21:48.387 "cntlid": 51, 00:21:48.387 "qid": 0, 00:21:48.387 "state": "enabled", 
00:21:48.387 "thread": "nvmf_tgt_poll_group_000", 00:21:48.387 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:21:48.387 "listen_address": { 00:21:48.387 "trtype": "TCP", 00:21:48.387 "adrfam": "IPv4", 00:21:48.387 "traddr": "10.0.0.2", 00:21:48.387 "trsvcid": "4420" 00:21:48.387 }, 00:21:48.387 "peer_address": { 00:21:48.387 "trtype": "TCP", 00:21:48.387 "adrfam": "IPv4", 00:21:48.387 "traddr": "10.0.0.1", 00:21:48.387 "trsvcid": "38638" 00:21:48.387 }, 00:21:48.387 "auth": { 00:21:48.387 "state": "completed", 00:21:48.387 "digest": "sha384", 00:21:48.387 "dhgroup": "null" 00:21:48.387 } 00:21:48.387 } 00:21:48.387 ]' 00:21:48.387 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:48.387 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:48.387 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:48.387 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:48.387 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:48.646 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:48.646 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:48.646 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.646 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGViMjVlNTA4ZTdiZTBhZTAyYTM4MTRkY2ZmNTI3ZWTrNpHz: --dhchap-ctrl-secret DHHC-1:02:ZThlMjA4YjFhZWE2NzgxODc1YmY1Yzc2ZDEwZDVhYzcyMTY3MWQzMmFjZTQwMTUy9S132A==: 00:21:48.646 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGViMjVlNTA4ZTdiZTBhZTAyYTM4MTRkY2ZmNTI3ZWTrNpHz: --dhchap-ctrl-secret DHHC-1:02:ZThlMjA4YjFhZWE2NzgxODc1YmY1Yzc2ZDEwZDVhYzcyMTY3MWQzMmFjZTQwMTUy9S132A==: 00:21:49.214 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.214 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.214 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:49.214 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.214 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.214 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.214 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:49.214 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:21:49.214 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:49.473 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:21:49.473 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:49.473 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:49.473 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:49.473 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:49.473 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.473 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:49.473 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.473 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.473 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.473 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:49.473 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:49.473 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:49.732 00:21:49.732 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:49.732 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:49.732 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.992 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.992 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.992 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.992 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.992 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.992 12:43:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:49.992 { 00:21:49.992 "cntlid": 53, 00:21:49.992 "qid": 0, 00:21:49.992 "state": "enabled", 00:21:49.992 "thread": "nvmf_tgt_poll_group_000", 00:21:49.992 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:21:49.992 "listen_address": { 00:21:49.992 "trtype": "TCP", 00:21:49.992 "adrfam": "IPv4", 00:21:49.992 "traddr": "10.0.0.2", 00:21:49.992 "trsvcid": "4420" 00:21:49.992 }, 00:21:49.992 "peer_address": { 00:21:49.992 "trtype": "TCP", 00:21:49.992 "adrfam": "IPv4", 00:21:49.992 "traddr": "10.0.0.1", 00:21:49.992 "trsvcid": "38660" 00:21:49.992 }, 00:21:49.992 "auth": { 00:21:49.992 "state": "completed", 00:21:49.992 "digest": "sha384", 00:21:49.992 "dhgroup": "null" 00:21:49.992 } 00:21:49.992 } 00:21:49.992 ]' 00:21:49.992 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:49.992 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:49.992 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:49.992 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:49.992 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:49.992 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.992 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.992 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.251 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTNhZjFmN2Q4OTVmMTE1NjljZDhmNWNmNGM3YmJhZGFiMGExNWVlNDYwNzc0Zjhkjnv50w==: --dhchap-ctrl-secret DHHC-1:01:NWYzNWZiMDFlZjg2ZWQ4OGNjMGQ4MGYwMGZiMzZlZDnMCX9k: 00:21:50.251 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTNhZjFmN2Q4OTVmMTE1NjljZDhmNWNmNGM3YmJhZGFiMGExNWVlNDYwNzc0Zjhkjnv50w==: --dhchap-ctrl-secret DHHC-1:01:NWYzNWZiMDFlZjg2ZWQ4OGNjMGQ4MGYwMGZiMzZlZDnMCX9k: 00:21:50.828 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.828 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:50.828 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.828 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.828 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.828 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:21:50.828 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:50.828 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:51.087 12:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:21:51.087 12:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:51.087 12:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:51.087 12:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:51.087 12:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:51.087 12:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.087 12:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:21:51.087 12:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.087 12:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.087 12:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.087 12:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:51.087 12:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:51.087 12:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:51.347 00:21:51.347 12:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:51.347 12:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:51.347 12:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.607 12:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.607 12:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:51.607 12:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.607 12:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.607 12:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.607 12:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:51.607 { 00:21:51.607 "cntlid": 55, 00:21:51.607 "qid": 0, 00:21:51.607 "state": "enabled", 00:21:51.607 "thread": "nvmf_tgt_poll_group_000", 00:21:51.607 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:21:51.607 "listen_address": { 00:21:51.607 "trtype": "TCP", 00:21:51.607 "adrfam": "IPv4", 00:21:51.607 "traddr": "10.0.0.2", 00:21:51.607 "trsvcid": "4420" 00:21:51.607 }, 00:21:51.607 "peer_address": { 00:21:51.607 "trtype": "TCP", 00:21:51.607 "adrfam": "IPv4", 00:21:51.607 "traddr": "10.0.0.1", 00:21:51.607 "trsvcid": "41800" 00:21:51.607 }, 00:21:51.607 "auth": { 00:21:51.607 "state": "completed", 00:21:51.607 "digest": "sha384", 00:21:51.607 "dhgroup": "null" 00:21:51.607 } 00:21:51.607 } 00:21:51.607 ]' 00:21:51.607 12:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:51.607 12:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:51.607 12:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:51.607 12:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:51.607 12:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:51.607 12:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:51.607 12:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.607 12:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.866 12:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjdkYjBkODY4ZTcwZTAyMDBhOTllMTI1ZTNhODFkODU4MTNhMWVjODg1OTlmNjBkZDM4ZjJmOWYzZDE4NDBkZl0k4bM=: 00:21:51.866 12:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjdkYjBkODY4ZTcwZTAyMDBhOTllMTI1ZTNhODFkODU4MTNhMWVjODg1OTlmNjBkZDM4ZjJmOWYzZDE4NDBkZl0k4bM=: 00:21:52.435 12:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:52.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:52.435 12:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:52.435 12:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.435 12:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.435 12:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.435 12:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:52.435 12:43:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:52.435 12:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:52.435 12:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:52.695 12:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:21:52.695 12:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:52.695 12:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:52.695 12:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:52.695 12:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:52.695 12:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:52.695 12:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:52.695 12:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.695 12:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.695 12:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.695 12:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:52.695 12:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:52.695 12:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:52.954 00:21:52.954 12:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:52.954 12:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.954 12:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:53.214 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.214 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:53.214 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:53.214 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.214 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.214 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:53.214 { 00:21:53.214 "cntlid": 57, 00:21:53.214 "qid": 0, 00:21:53.214 "state": "enabled", 00:21:53.214 "thread": "nvmf_tgt_poll_group_000", 00:21:53.214 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:21:53.214 "listen_address": { 00:21:53.214 "trtype": "TCP", 00:21:53.214 "adrfam": "IPv4", 00:21:53.214 "traddr": "10.0.0.2", 00:21:53.214 "trsvcid": "4420" 00:21:53.214 }, 00:21:53.214 "peer_address": { 00:21:53.214 "trtype": "TCP", 00:21:53.214 "adrfam": "IPv4", 00:21:53.214 "traddr": "10.0.0.1", 00:21:53.214 "trsvcid": "41832" 00:21:53.214 }, 00:21:53.214 "auth": { 00:21:53.214 "state": "completed", 00:21:53.214 "digest": "sha384", 00:21:53.214 "dhgroup": "ffdhe2048" 00:21:53.214 } 00:21:53.214 } 00:21:53.214 ]' 00:21:53.214 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:53.214 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:53.214 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:53.214 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:53.214 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:53.214 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:53.214 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:53.214 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.473 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGI3NGI4ODcwMTdiZjE3NTA0MGNlZWFlYTk0YmUwZWNmMzQyMjA3ZGE4YzFlZDgx1hOX6A==: --dhchap-ctrl-secret DHHC-1:03:YTNlNGRiNjc2ODFhY2JhYzE0NTFhNzY5NGNiNGMwN2RmNjA3NzVkNjdlMDdlMmRhZmNhZTc5YWU5ODgzNTM3Zlh5EVM=: 00:21:53.473 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGI3NGI4ODcwMTdiZjE3NTA0MGNlZWFlYTk0YmUwZWNmMzQyMjA3ZGE4YzFlZDgx1hOX6A==: --dhchap-ctrl-secret DHHC-1:03:YTNlNGRiNjc2ODFhY2JhYzE0NTFhNzY5NGNiNGMwN2RmNjA3NzVkNjdlMDdlMmRhZmNhZTc5YWU5ODgzNTM3Zlh5EVM=: 00:21:54.042 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.042 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.042 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:54.042 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.042 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.042 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.042 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:54.042 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:54.042 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:54.302 12:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:21:54.302 12:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:54.302 12:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:54.302 12:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:54.302 12:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:54.302 12:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.302 12:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:54.302 12:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.302 12:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.302 12:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.302 12:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:54.302 12:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:54.302 12:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:54.562 00:21:54.562 12:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:54.562 12:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.562 12:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:54.562 12:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.562 12:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.562 12:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.562 12:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.822 12:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.822 12:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:54.822 { 00:21:54.822 "cntlid": 59, 00:21:54.822 "qid": 0, 00:21:54.822 "state": "enabled", 00:21:54.822 "thread": "nvmf_tgt_poll_group_000", 00:21:54.822 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:21:54.822 "listen_address": { 00:21:54.822 "trtype": "TCP", 00:21:54.822 "adrfam": "IPv4", 00:21:54.822 "traddr": "10.0.0.2", 00:21:54.822 "trsvcid": "4420" 00:21:54.822 }, 00:21:54.822 "peer_address": { 00:21:54.822 "trtype": "TCP", 00:21:54.822 "adrfam": "IPv4", 00:21:54.822 "traddr": "10.0.0.1", 00:21:54.822 "trsvcid": "41856" 00:21:54.822 }, 00:21:54.822 "auth": { 00:21:54.822 "state": "completed", 00:21:54.822 "digest": "sha384", 00:21:54.822 "dhgroup": "ffdhe2048" 00:21:54.822 } 00:21:54.822 } 00:21:54.822 ]' 00:21:54.822 12:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:54.822 12:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:54.822 12:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:54.822 12:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:54.822 12:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:54.822 12:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.822 12:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.822 12:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.081 12:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGViMjVlNTA4ZTdiZTBhZTAyYTM4MTRkY2ZmNTI3ZWTrNpHz: --dhchap-ctrl-secret DHHC-1:02:ZThlMjA4YjFhZWE2NzgxODc1YmY1Yzc2ZDEwZDVhYzcyMTY3MWQzMmFjZTQwMTUy9S132A==: 00:21:55.081 12:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGViMjVlNTA4ZTdiZTBhZTAyYTM4MTRkY2ZmNTI3ZWTrNpHz: --dhchap-ctrl-secret DHHC-1:02:ZThlMjA4YjFhZWE2NzgxODc1YmY1Yzc2ZDEwZDVhYzcyMTY3MWQzMmFjZTQwMTUy9S132A==: 00:21:55.651 12:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.651 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.651 12:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:55.651 12:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.651 12:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.651 12:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.651 12:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:55.651 12:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:55.651 12:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:55.910 12:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:21:55.910 12:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:55.910 12:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:55.911 12:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:55.911 12:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:55.911 12:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:55.911 12:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:55.911 12:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.911 12:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.911 12:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.911 12:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:55.911 12:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:55.911 12:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:55.911 00:21:56.170 12:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:56.170 12:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:56.170 12:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.170 12:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.170 12:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.170 12:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.170 12:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.170 12:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.170 12:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:56.170 { 00:21:56.170 "cntlid": 61, 00:21:56.170 "qid": 0, 00:21:56.170 "state": "enabled", 00:21:56.170 "thread": "nvmf_tgt_poll_group_000", 00:21:56.170 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:21:56.170 "listen_address": { 00:21:56.170 "trtype": "TCP", 00:21:56.170 "adrfam": "IPv4", 00:21:56.170 "traddr": "10.0.0.2", 00:21:56.170 "trsvcid": "4420" 00:21:56.170 }, 00:21:56.170 "peer_address": { 00:21:56.170 "trtype": "TCP", 00:21:56.170 "adrfam": "IPv4", 00:21:56.170 "traddr": "10.0.0.1", 00:21:56.170 "trsvcid": "41872" 00:21:56.170 }, 00:21:56.170 "auth": { 00:21:56.170 "state": "completed", 00:21:56.170 "digest": "sha384", 00:21:56.170 "dhgroup": "ffdhe2048" 00:21:56.170 } 00:21:56.170 } 00:21:56.170 ]' 00:21:56.170 12:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:56.170 12:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:56.170 12:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:56.429 12:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:56.429 12:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:56.429 12:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:56.429 12:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:56.429 12:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.688 12:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTNhZjFmN2Q4OTVmMTE1NjljZDhmNWNmNGM3YmJhZGFiMGExNWVlNDYwNzc0Zjhkjnv50w==: --dhchap-ctrl-secret DHHC-1:01:NWYzNWZiMDFlZjg2ZWQ4OGNjMGQ4MGYwMGZiMzZlZDnMCX9k: 00:21:56.688 12:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTNhZjFmN2Q4OTVmMTE1NjljZDhmNWNmNGM3YmJhZGFiMGExNWVlNDYwNzc0Zjhkjnv50w==: --dhchap-ctrl-secret DHHC-1:01:NWYzNWZiMDFlZjg2ZWQ4OGNjMGQ4MGYwMGZiMzZlZDnMCX9k: 00:21:57.258 12:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:57.258 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:57.258 12:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:57.258 12:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.258 12:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.258 12:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.258 12:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:57.258 12:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:57.258 12:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:57.258 12:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:21:57.258 12:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:57.258 12:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:57.258 12:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:57.258 12:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:57.258 12:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.258 12:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:21:57.258 12:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.258 12:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.258 12:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.258 12:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:57.258 12:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:57.258 12:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:57.517 00:21:57.517 12:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:57.517 12:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:21:57.517 12:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.776 12:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.776 12:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.776 12:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.776 12:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.776 12:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.776 12:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:57.776 { 00:21:57.776 "cntlid": 63, 00:21:57.776 "qid": 0, 00:21:57.776 "state": "enabled", 00:21:57.776 "thread": "nvmf_tgt_poll_group_000", 00:21:57.776 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:21:57.776 "listen_address": { 00:21:57.776 "trtype": "TCP", 00:21:57.776 "adrfam": "IPv4", 00:21:57.776 "traddr": "10.0.0.2", 00:21:57.776 "trsvcid": "4420" 00:21:57.776 }, 00:21:57.776 "peer_address": { 00:21:57.776 "trtype": "TCP", 00:21:57.776 "adrfam": "IPv4", 00:21:57.776 "traddr": "10.0.0.1", 00:21:57.776 "trsvcid": "41908" 00:21:57.776 }, 00:21:57.776 "auth": { 00:21:57.776 "state": "completed", 00:21:57.776 "digest": "sha384", 00:21:57.776 "dhgroup": "ffdhe2048" 00:21:57.776 } 00:21:57.776 } 00:21:57.776 ]' 00:21:57.776 12:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:57.776 12:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:57.777 12:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:57.777 12:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:57.777 12:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:58.036 12:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:58.036 12:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:58.036 12:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.036 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjdkYjBkODY4ZTcwZTAyMDBhOTllMTI1ZTNhODFkODU4MTNhMWVjODg1OTlmNjBkZDM4ZjJmOWYzZDE4NDBkZl0k4bM=: 00:21:58.036 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjdkYjBkODY4ZTcwZTAyMDBhOTllMTI1ZTNhODFkODU4MTNhMWVjODg1OTlmNjBkZDM4ZjJmOWYzZDE4NDBkZl0k4bM=: 00:21:58.605 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:21:58.605 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.605 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:58.605 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.605 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.605 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.605 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:58.605 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:58.605 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:58.605 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:58.865 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:21:58.865 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:58.865 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:58.865 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:58.865 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:58.865 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:58.865 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:58.865 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.865 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.865 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.865 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:58.865 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:58.865 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.124 
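Every attach in this section is validated the same way before the detach: nvmf_subsystem_get_qpairs is dumped as JSON and the auth block of the first qpair is compared against the expected digest, dhgroup, and "completed" state. The jq expressions below are taken directly from the trace; wrapping them in verify_auth is an illustrative choice (the script itself inlines the checks), and rpc_cmd is the same assumed wrapper as in the earlier sketch:

rpc_cmd() { "$SPDK/scripts/rpc.py" "$@"; }  # as in the earlier sketch

# assert the negotiated auth parameters on the first qpair of the subsystem
verify_auth() {
    local digest=$1 dhgroup=$2 qpairs
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<<"$qpairs") == "$digest"  ]] &&
    [[ $(jq -r '.[0].auth.dhgroup' <<<"$qpairs") == "$dhgroup" ]] &&
    [[ $(jq -r '.[0].auth.state'   <<<"$qpairs") == completed  ]]
}
verify_auth sha384 ffdhe3072   # matches the ffdhe3072 passes that follow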
00:21:59.124 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:59.124 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:59.124 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.383 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.383 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.383 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.383 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.383 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.383 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:59.383 { 00:21:59.383 "cntlid": 65, 00:21:59.383 "qid": 0, 00:21:59.383 "state": "enabled", 00:21:59.383 "thread": "nvmf_tgt_poll_group_000", 00:21:59.383 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:21:59.383 "listen_address": { 00:21:59.383 "trtype": "TCP", 00:21:59.383 "adrfam": "IPv4", 00:21:59.383 "traddr": "10.0.0.2", 00:21:59.383 "trsvcid": "4420" 00:21:59.383 }, 00:21:59.383 "peer_address": { 00:21:59.383 "trtype": "TCP", 00:21:59.383 "adrfam": "IPv4", 00:21:59.383 "traddr": "10.0.0.1", 00:21:59.383 "trsvcid": "41930" 00:21:59.383 }, 00:21:59.383 "auth": { 00:21:59.383 "state": "completed", 00:21:59.383 "digest": "sha384", 00:21:59.383 "dhgroup": "ffdhe3072" 00:21:59.383 } 00:21:59.383 } 00:21:59.383 ]' 00:21:59.383 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:59.383 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:59.383 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:59.383 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:59.383 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:59.383 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.383 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.383 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:59.642 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGI3NGI4ODcwMTdiZjE3NTA0MGNlZWFlYTk0YmUwZWNmMzQyMjA3ZGE4YzFlZDgx1hOX6A==: --dhchap-ctrl-secret DHHC-1:03:YTNlNGRiNjc2ODFhY2JhYzE0NTFhNzY5NGNiNGMwN2RmNjA3NzVkNjdlMDdlMmRhZmNhZTc5YWU5ODgzNTM3Zlh5EVM=: 00:21:59.642 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGI3NGI4ODcwMTdiZjE3NTA0MGNlZWFlYTk0YmUwZWNmMzQyMjA3ZGE4YzFlZDgx1hOX6A==: --dhchap-ctrl-secret DHHC-1:03:YTNlNGRiNjc2ODFhY2JhYzE0NTFhNzY5NGNiNGMwN2RmNjA3NzVkNjdlMDdlMmRhZmNhZTc5YWU5ODgzNTM3Zlh5EVM=: 00:22:00.211 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:00.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:00.211 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:00.211 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.211 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.211 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.211 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:00.211 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:00.211 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:00.470 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:22:00.471 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:00.471 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:00.471 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:00.471 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:00.471 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:00.471 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:00.471 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.471 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.471 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.471 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:00.471 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:00.471 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:00.730 00:22:00.730 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:00.730 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:00.730 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:00.989 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.989 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:00.989 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.989 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.989 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.989 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:00.989 { 00:22:00.989 "cntlid": 67, 00:22:00.989 "qid": 0, 00:22:00.989 "state": "enabled", 00:22:00.989 "thread": "nvmf_tgt_poll_group_000", 00:22:00.989 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:22:00.989 "listen_address": { 00:22:00.989 "trtype": "TCP", 00:22:00.989 "adrfam": "IPv4", 00:22:00.989 "traddr": "10.0.0.2", 00:22:00.989 "trsvcid": "4420" 00:22:00.989 }, 00:22:00.989 "peer_address": { 00:22:00.989 "trtype": "TCP", 00:22:00.989 "adrfam": "IPv4", 00:22:00.989 "traddr": "10.0.0.1", 00:22:00.989 "trsvcid": "41956" 00:22:00.989 }, 00:22:00.989 "auth": { 00:22:00.989 "state": "completed", 00:22:00.989 "digest": "sha384", 00:22:00.989 "dhgroup": "ffdhe3072" 00:22:00.989 } 00:22:00.989 } 00:22:00.989 ]' 00:22:00.989 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:00.989 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:00.989 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:00.989 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:00.989 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:00.989 12:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:00.989 12:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:00.989 12:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.248 12:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGViMjVlNTA4ZTdiZTBhZTAyYTM4MTRkY2ZmNTI3ZWTrNpHz: --dhchap-ctrl-secret 
DHHC-1:02:ZThlMjA4YjFhZWE2NzgxODc1YmY1Yzc2ZDEwZDVhYzcyMTY3MWQzMmFjZTQwMTUy9S132A==: 00:22:01.248 12:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGViMjVlNTA4ZTdiZTBhZTAyYTM4MTRkY2ZmNTI3ZWTrNpHz: --dhchap-ctrl-secret DHHC-1:02:ZThlMjA4YjFhZWE2NzgxODc1YmY1Yzc2ZDEwZDVhYzcyMTY3MWQzMmFjZTQwMTUy9S132A==: 00:22:01.816 12:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.816 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.816 12:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:01.816 12:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.816 12:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.816 12:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.816 12:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:01.816 12:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:01.816 12:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:02.076 12:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:22:02.076 12:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:02.076 12:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:02.076 12:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:02.076 12:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:02.076 12:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.076 12:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:02.076 12:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.076 12:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.076 12:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.076 12:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:02.076 12:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:02.076 12:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:02.335 00:22:02.335 12:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:02.335 12:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:02.335 12:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:02.594 12:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.594 12:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:02.594 12:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.594 12:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.594 12:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.594 12:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:02.594 { 00:22:02.594 "cntlid": 69, 00:22:02.594 "qid": 0, 00:22:02.594 "state": "enabled", 00:22:02.594 "thread": "nvmf_tgt_poll_group_000", 00:22:02.594 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:22:02.594 "listen_address": { 00:22:02.594 "trtype": "TCP", 00:22:02.594 "adrfam": "IPv4", 00:22:02.594 "traddr": "10.0.0.2", 00:22:02.594 "trsvcid": "4420" 00:22:02.594 }, 00:22:02.594 "peer_address": { 00:22:02.594 "trtype": "TCP", 00:22:02.594 "adrfam": "IPv4", 00:22:02.594 "traddr": "10.0.0.1", 00:22:02.594 "trsvcid": "55530" 00:22:02.594 }, 00:22:02.594 "auth": { 00:22:02.594 "state": "completed", 00:22:02.594 "digest": "sha384", 00:22:02.594 "dhgroup": "ffdhe3072" 00:22:02.594 } 00:22:02.594 } 00:22:02.594 ]' 00:22:02.594 12:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:02.594 12:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:02.594 12:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:02.594 12:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:02.594 12:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:02.594 12:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:02.594 12:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.594 12:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:22:02.853 12:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTNhZjFmN2Q4OTVmMTE1NjljZDhmNWNmNGM3YmJhZGFiMGExNWVlNDYwNzc0Zjhkjnv50w==: --dhchap-ctrl-secret DHHC-1:01:NWYzNWZiMDFlZjg2ZWQ4OGNjMGQ4MGYwMGZiMzZlZDnMCX9k: 00:22:02.853 12:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTNhZjFmN2Q4OTVmMTE1NjljZDhmNWNmNGM3YmJhZGFiMGExNWVlNDYwNzc0Zjhkjnv50w==: --dhchap-ctrl-secret DHHC-1:01:NWYzNWZiMDFlZjg2ZWQ4OGNjMGQ4MGYwMGZiMzZlZDnMCX9k: 00:22:03.421 12:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:03.421 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:03.421 12:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:03.421 12:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.421 12:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.421 12:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.421 12:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:03.421 12:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:03.421 12:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:03.681 12:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:22:03.681 12:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:03.681 12:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:03.681 12:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:03.681 12:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:03.681 12:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:03.681 12:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:22:03.681 12:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.681 12:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.681 12:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.681 12:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
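The cycle traced above repeats for each key index: pin the host's allowed digest and DH group, register the host NQN on the subsystem with the matching DH-HMAC-CHAP key pair, attach a controller (which runs the handshake), check the negotiated parameters, then detach and deregister. The following is a minimal standalone sketch of one such iteration, assuming the subsystem, TCP listener, and named keys (key1/ckey1) are already loaded as earlier in this run; the paths, NQNs, and flags are copied from the trace.

#!/usr/bin/env bash
# Sketch of one connect_authenticate iteration (sha384 / ffdhe3072), as driven
# by target/auth.sh in the trace above. Assumes the nvmf target subsystem, its
# TCP listener, and the DH-HMAC-CHAP keys key1/ckey1 are already registered.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
host_sock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562

# Host side (initiator app on /var/tmp/host.sock): restrict the initiator to
# a single digest and a single DH group for this pass.
"$rpc" -s "$host_sock" bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

# Target side (default RPC socket): allow the host NQN with its key pair.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Host side: attach a controller; this performs the DH-HMAC-CHAP handshake.
"$rpc" -s "$host_sock" bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Verify the attach and the negotiated auth parameters, then tear down.
"$rpc" -s "$host_sock" bdev_nvme_get_controllers | jq -r '.[].name'
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'
"$rpc" -s "$host_sock" bdev_nvme_detach_controller nvme0
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

Note the split the trace relies on: bdev_nvme_* calls go to the host application's socket (hostrpc, -s /var/tmp/host.sock) while nvmf_* calls go to the target's default socket (rpc_cmd), so the same rpc.py drives both ends of the handshake.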
00:22:03.681 12:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:03.681 12:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:03.941 00:22:03.941 12:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:03.941 12:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:03.941 12:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:04.201 12:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.201 12:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:04.201 12:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.201 12:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.201 12:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.201 12:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:04.201 { 00:22:04.201 "cntlid": 71, 00:22:04.201 "qid": 0, 00:22:04.201 "state": "enabled", 00:22:04.201 "thread": "nvmf_tgt_poll_group_000", 00:22:04.201 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:22:04.201 "listen_address": { 00:22:04.201 "trtype": "TCP", 00:22:04.201 "adrfam": "IPv4", 00:22:04.201 "traddr": "10.0.0.2", 00:22:04.201 "trsvcid": "4420" 00:22:04.201 }, 00:22:04.201 "peer_address": { 00:22:04.201 "trtype": "TCP", 00:22:04.201 "adrfam": "IPv4", 00:22:04.201 "traddr": "10.0.0.1", 00:22:04.201 "trsvcid": "55566" 00:22:04.201 }, 00:22:04.201 "auth": { 00:22:04.201 "state": "completed", 00:22:04.201 "digest": "sha384", 00:22:04.201 "dhgroup": "ffdhe3072" 00:22:04.201 } 00:22:04.201 } 00:22:04.201 ]' 00:22:04.201 12:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:04.201 12:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:04.201 12:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:04.201 12:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:04.201 12:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:04.202 12:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:04.202 12:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:04.202 12:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.461 12:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjdkYjBkODY4ZTcwZTAyMDBhOTllMTI1ZTNhODFkODU4MTNhMWVjODg1OTlmNjBkZDM4ZjJmOWYzZDE4NDBkZl0k4bM=: 00:22:04.461 12:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjdkYjBkODY4ZTcwZTAyMDBhOTllMTI1ZTNhODFkODU4MTNhMWVjODg1OTlmNjBkZDM4ZjJmOWYzZDE4NDBkZl0k4bM=: 00:22:05.030 12:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.030 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.030 12:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:05.030 12:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.030 12:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.030 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.030 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:05.030 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:05.030 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:05.031 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:05.290 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:22:05.290 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:05.290 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:05.290 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:05.290 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:05.290 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:05.290 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:05.290 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.290 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.290 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
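Alongside that SPDK-initiator path, each key pair is also exercised through the kernel initiator: the nvme_connect/nvme disconnect steps in the trace invoke nvme-cli with the DHHC-1 secrets passed directly on the command line. A minimal sketch of that step follows, with the trace's long base64 secrets shortened into shell variables for readability.

# Sketch of the kernel-initiator check done by nvme_connect / nvme disconnect
# above. The DHHC-1:xx:...: strings are the base64 secrets printed in the
# trace (truncated here); -i 1 limits the connection to one I/O queue and
# -l 0 disables controller-loss retries so an auth failure surfaces at once.
subnqn=nqn.2024-03.io.spdk:cnode0
hostid=801347e8-3fd0-e911-906e-0017a4403562
key='DHHC-1:00:NGI3NGI4...'      # host secret (key0 in the trace, shortened)
ctrl_key='DHHC-1:03:YTNlNGRi...' # controller secret (ckey0, shortened)

nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
    -q "nqn.2014-08.org.nvmexpress:uuid:${hostid}" --hostid "$hostid" -l 0 \
    --dhchap-secret "$key" --dhchap-ctrl-secret "$ctrl_key"

nvme disconnect -n "$subnqn"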
00:22:05.290 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:05.290 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:05.290 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:05.550 00:22:05.550 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:05.550 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:05.550 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:05.810 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.810 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:05.810 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.810 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.810 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.810 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:05.810 { 00:22:05.810 "cntlid": 73, 00:22:05.810 "qid": 0, 00:22:05.810 "state": "enabled", 00:22:05.810 "thread": "nvmf_tgt_poll_group_000", 00:22:05.810 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:22:05.810 "listen_address": { 00:22:05.810 "trtype": "TCP", 00:22:05.810 "adrfam": "IPv4", 00:22:05.810 "traddr": "10.0.0.2", 00:22:05.810 "trsvcid": "4420" 00:22:05.810 }, 00:22:05.810 "peer_address": { 00:22:05.810 "trtype": "TCP", 00:22:05.810 "adrfam": "IPv4", 00:22:05.810 "traddr": "10.0.0.1", 00:22:05.810 "trsvcid": "55586" 00:22:05.810 }, 00:22:05.810 "auth": { 00:22:05.810 "state": "completed", 00:22:05.810 "digest": "sha384", 00:22:05.810 "dhgroup": "ffdhe4096" 00:22:05.810 } 00:22:05.810 } 00:22:05.810 ]' 00:22:05.810 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:05.810 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:05.810 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:05.810 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:05.810 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:05.810 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:05.810 
12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:05.810 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.070 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGI3NGI4ODcwMTdiZjE3NTA0MGNlZWFlYTk0YmUwZWNmMzQyMjA3ZGE4YzFlZDgx1hOX6A==: --dhchap-ctrl-secret DHHC-1:03:YTNlNGRiNjc2ODFhY2JhYzE0NTFhNzY5NGNiNGMwN2RmNjA3NzVkNjdlMDdlMmRhZmNhZTc5YWU5ODgzNTM3Zlh5EVM=: 00:22:06.070 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGI3NGI4ODcwMTdiZjE3NTA0MGNlZWFlYTk0YmUwZWNmMzQyMjA3ZGE4YzFlZDgx1hOX6A==: --dhchap-ctrl-secret DHHC-1:03:YTNlNGRiNjc2ODFhY2JhYzE0NTFhNzY5NGNiNGMwN2RmNjA3NzVkNjdlMDdlMmRhZmNhZTc5YWU5ODgzNTM3Zlh5EVM=: 00:22:06.639 12:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:06.639 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:06.639 12:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:06.639 12:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.639 12:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.639 12:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.639 12:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:06.639 12:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:06.639 12:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:06.898 12:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:22:06.898 12:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:06.898 12:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:06.898 12:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:06.898 12:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:06.899 12:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:06.899 12:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.899 12:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.899 12:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.899 12:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.899 12:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.899 12:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.899 12:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:07.158 00:22:07.158 12:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:07.158 12:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:07.158 12:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.418 12:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.418 12:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:07.418 12:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.418 12:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.418 12:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.418 12:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:07.418 { 00:22:07.418 "cntlid": 75, 00:22:07.418 "qid": 0, 00:22:07.418 "state": "enabled", 00:22:07.418 "thread": "nvmf_tgt_poll_group_000", 00:22:07.418 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:22:07.418 "listen_address": { 00:22:07.418 "trtype": "TCP", 00:22:07.418 "adrfam": "IPv4", 00:22:07.418 "traddr": "10.0.0.2", 00:22:07.418 "trsvcid": "4420" 00:22:07.418 }, 00:22:07.418 "peer_address": { 00:22:07.418 "trtype": "TCP", 00:22:07.418 "adrfam": "IPv4", 00:22:07.418 "traddr": "10.0.0.1", 00:22:07.418 "trsvcid": "55594" 00:22:07.418 }, 00:22:07.418 "auth": { 00:22:07.418 "state": "completed", 00:22:07.418 "digest": "sha384", 00:22:07.418 "dhgroup": "ffdhe4096" 00:22:07.418 } 00:22:07.418 } 00:22:07.418 ]' 00:22:07.418 12:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:07.418 12:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:07.418 12:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:07.418 12:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:22:07.418 12:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:07.418 12:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:07.418 12:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:07.418 12:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:07.677 12:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGViMjVlNTA4ZTdiZTBhZTAyYTM4MTRkY2ZmNTI3ZWTrNpHz: --dhchap-ctrl-secret DHHC-1:02:ZThlMjA4YjFhZWE2NzgxODc1YmY1Yzc2ZDEwZDVhYzcyMTY3MWQzMmFjZTQwMTUy9S132A==: 00:22:07.677 12:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGViMjVlNTA4ZTdiZTBhZTAyYTM4MTRkY2ZmNTI3ZWTrNpHz: --dhchap-ctrl-secret DHHC-1:02:ZThlMjA4YjFhZWE2NzgxODc1YmY1Yzc2ZDEwZDVhYzcyMTY3MWQzMmFjZTQwMTUy9S132A==: 00:22:08.247 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:08.247 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:08.247 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:08.247 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.247 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.247 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.247 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:08.247 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:08.247 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:08.507 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:22:08.507 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:08.507 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:08.507 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:08.507 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:08.507 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:08.507 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:08.507 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.507 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.507 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.507 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:08.507 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:08.507 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:08.767 00:22:08.767 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:08.767 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:08.767 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:09.026 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.026 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:09.026 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.026 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.026 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.026 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:09.026 { 00:22:09.026 "cntlid": 77, 00:22:09.026 "qid": 0, 00:22:09.026 "state": "enabled", 00:22:09.026 "thread": "nvmf_tgt_poll_group_000", 00:22:09.027 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:22:09.027 "listen_address": { 00:22:09.027 "trtype": "TCP", 00:22:09.027 "adrfam": "IPv4", 00:22:09.027 "traddr": "10.0.0.2", 00:22:09.027 "trsvcid": "4420" 00:22:09.027 }, 00:22:09.027 "peer_address": { 00:22:09.027 "trtype": "TCP", 00:22:09.027 "adrfam": "IPv4", 00:22:09.027 "traddr": "10.0.0.1", 00:22:09.027 "trsvcid": "55618" 00:22:09.027 }, 00:22:09.027 "auth": { 00:22:09.027 "state": "completed", 00:22:09.027 "digest": "sha384", 00:22:09.027 "dhgroup": "ffdhe4096" 00:22:09.027 } 00:22:09.027 } 00:22:09.027 ]' 00:22:09.027 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:09.027 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:09.027 12:43:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:09.027 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:09.027 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:09.027 12:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:09.027 12:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:09.027 12:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:09.285 12:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTNhZjFmN2Q4OTVmMTE1NjljZDhmNWNmNGM3YmJhZGFiMGExNWVlNDYwNzc0Zjhkjnv50w==: --dhchap-ctrl-secret DHHC-1:01:NWYzNWZiMDFlZjg2ZWQ4OGNjMGQ4MGYwMGZiMzZlZDnMCX9k: 00:22:09.285 12:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTNhZjFmN2Q4OTVmMTE1NjljZDhmNWNmNGM3YmJhZGFiMGExNWVlNDYwNzc0Zjhkjnv50w==: --dhchap-ctrl-secret DHHC-1:01:NWYzNWZiMDFlZjg2ZWQ4OGNjMGQ4MGYwMGZiMzZlZDnMCX9k: 00:22:09.853 12:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:09.853 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:09.853 12:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:09.853 12:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.853 12:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.853 12:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.853 12:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:09.853 12:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:09.853 12:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:10.113 12:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:22:10.113 12:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:10.113 12:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:10.113 12:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:10.113 12:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:10.113 12:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:10.113 12:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:22:10.113 12:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.113 12:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.113 12:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.113 12:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:10.113 12:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:10.113 12:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:10.373 00:22:10.373 12:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:10.373 12:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:10.373 12:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:10.633 12:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.633 12:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:10.633 12:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.633 12:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.633 12:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.633 12:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:10.633 { 00:22:10.633 "cntlid": 79, 00:22:10.633 "qid": 0, 00:22:10.633 "state": "enabled", 00:22:10.633 "thread": "nvmf_tgt_poll_group_000", 00:22:10.633 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:22:10.633 "listen_address": { 00:22:10.633 "trtype": "TCP", 00:22:10.633 "adrfam": "IPv4", 00:22:10.633 "traddr": "10.0.0.2", 00:22:10.633 "trsvcid": "4420" 00:22:10.633 }, 00:22:10.633 "peer_address": { 00:22:10.633 "trtype": "TCP", 00:22:10.633 "adrfam": "IPv4", 00:22:10.633 "traddr": "10.0.0.1", 00:22:10.633 "trsvcid": "55648" 00:22:10.633 }, 00:22:10.633 "auth": { 00:22:10.633 "state": "completed", 00:22:10.633 "digest": "sha384", 00:22:10.633 "dhgroup": "ffdhe4096" 00:22:10.633 } 00:22:10.633 } 00:22:10.633 ]' 00:22:10.633 12:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:10.633 12:43:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:10.633 12:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:10.633 12:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:10.633 12:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:10.633 12:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:10.633 12:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.633 12:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:10.892 12:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjdkYjBkODY4ZTcwZTAyMDBhOTllMTI1ZTNhODFkODU4MTNhMWVjODg1OTlmNjBkZDM4ZjJmOWYzZDE4NDBkZl0k4bM=: 00:22:10.892 12:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjdkYjBkODY4ZTcwZTAyMDBhOTllMTI1ZTNhODFkODU4MTNhMWVjODg1OTlmNjBkZDM4ZjJmOWYzZDE4NDBkZl0k4bM=: 00:22:11.461 12:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:11.461 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:11.461 12:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:11.461 12:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.461 12:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.461 12:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.461 12:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:11.461 12:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:11.461 12:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:11.461 12:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:11.721 12:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:22:11.721 12:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:11.721 12:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:11.721 12:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:11.721 12:43:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:11.721 12:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:11.721 12:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:11.721 12:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.721 12:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.721 12:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.721 12:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:11.721 12:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:11.721 12:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:11.981 00:22:11.981 12:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:11.981 12:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:11.981 12:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:12.240 12:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.241 12:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:12.241 12:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.241 12:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.241 12:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.241 12:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:12.241 { 00:22:12.241 "cntlid": 81, 00:22:12.241 "qid": 0, 00:22:12.241 "state": "enabled", 00:22:12.241 "thread": "nvmf_tgt_poll_group_000", 00:22:12.241 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:22:12.241 "listen_address": { 00:22:12.241 "trtype": "TCP", 00:22:12.241 "adrfam": "IPv4", 00:22:12.241 "traddr": "10.0.0.2", 00:22:12.241 "trsvcid": "4420" 00:22:12.241 }, 00:22:12.241 "peer_address": { 00:22:12.241 "trtype": "TCP", 00:22:12.241 "adrfam": "IPv4", 00:22:12.241 "traddr": "10.0.0.1", 00:22:12.241 "trsvcid": "42908" 00:22:12.241 }, 00:22:12.241 "auth": { 00:22:12.241 "state": "completed", 00:22:12.241 "digest": 
"sha384", 00:22:12.241 "dhgroup": "ffdhe6144" 00:22:12.241 } 00:22:12.241 } 00:22:12.241 ]' 00:22:12.241 12:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:12.241 12:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:12.241 12:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:12.241 12:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:12.241 12:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:12.241 12:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:12.241 12:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:12.241 12:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:12.500 12:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGI3NGI4ODcwMTdiZjE3NTA0MGNlZWFlYTk0YmUwZWNmMzQyMjA3ZGE4YzFlZDgx1hOX6A==: --dhchap-ctrl-secret DHHC-1:03:YTNlNGRiNjc2ODFhY2JhYzE0NTFhNzY5NGNiNGMwN2RmNjA3NzVkNjdlMDdlMmRhZmNhZTc5YWU5ODgzNTM3Zlh5EVM=: 00:22:12.500 12:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGI3NGI4ODcwMTdiZjE3NTA0MGNlZWFlYTk0YmUwZWNmMzQyMjA3ZGE4YzFlZDgx1hOX6A==: --dhchap-ctrl-secret DHHC-1:03:YTNlNGRiNjc2ODFhY2JhYzE0NTFhNzY5NGNiNGMwN2RmNjA3NzVkNjdlMDdlMmRhZmNhZTc5YWU5ODgzNTM3Zlh5EVM=: 00:22:13.069 12:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:13.069 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:13.069 12:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:13.069 12:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.069 12:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.069 12:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.069 12:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:13.069 12:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:13.069 12:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:13.328 12:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:22:13.328 12:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:13.328 12:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:13.328 12:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:13.328 12:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:13.328 12:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:13.328 12:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:13.328 12:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.328 12:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.328 12:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.328 12:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:13.328 12:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:13.328 12:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:13.588 00:22:13.588 12:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:13.588 12:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.588 12:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:13.847 12:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.847 12:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:13.848 12:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.848 12:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.848 12:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.848 12:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:13.848 { 00:22:13.848 "cntlid": 83, 00:22:13.848 "qid": 0, 00:22:13.848 "state": "enabled", 00:22:13.848 "thread": "nvmf_tgt_poll_group_000", 00:22:13.848 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:22:13.848 "listen_address": { 00:22:13.848 "trtype": "TCP", 00:22:13.848 "adrfam": "IPv4", 00:22:13.848 "traddr": "10.0.0.2", 00:22:13.848 
"trsvcid": "4420" 00:22:13.848 }, 00:22:13.848 "peer_address": { 00:22:13.848 "trtype": "TCP", 00:22:13.848 "adrfam": "IPv4", 00:22:13.848 "traddr": "10.0.0.1", 00:22:13.848 "trsvcid": "42938" 00:22:13.848 }, 00:22:13.848 "auth": { 00:22:13.848 "state": "completed", 00:22:13.848 "digest": "sha384", 00:22:13.848 "dhgroup": "ffdhe6144" 00:22:13.848 } 00:22:13.848 } 00:22:13.848 ]' 00:22:13.848 12:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:13.848 12:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:13.848 12:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:13.848 12:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:13.848 12:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:13.848 12:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:13.848 12:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:13.848 12:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:14.107 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGViMjVlNTA4ZTdiZTBhZTAyYTM4MTRkY2ZmNTI3ZWTrNpHz: --dhchap-ctrl-secret DHHC-1:02:ZThlMjA4YjFhZWE2NzgxODc1YmY1Yzc2ZDEwZDVhYzcyMTY3MWQzMmFjZTQwMTUy9S132A==: 00:22:14.107 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGViMjVlNTA4ZTdiZTBhZTAyYTM4MTRkY2ZmNTI3ZWTrNpHz: --dhchap-ctrl-secret DHHC-1:02:ZThlMjA4YjFhZWE2NzgxODc1YmY1Yzc2ZDEwZDVhYzcyMTY3MWQzMmFjZTQwMTUy9S132A==: 00:22:14.676 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.676 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.676 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:14.676 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.676 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.676 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.676 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:14.676 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:14.676 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:14.936 
12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:22:14.936 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:14.936 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:14.936 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:14.936 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:14.936 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:14.936 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.936 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.936 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.936 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.936 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.936 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.936 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:15.196 00:22:15.455 12:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:15.455 12:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:15.455 12:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:15.455 12:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.455 12:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:15.456 12:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.456 12:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.456 12:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.456 12:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:15.456 { 00:22:15.456 "cntlid": 85, 00:22:15.456 "qid": 0, 00:22:15.456 "state": "enabled", 00:22:15.456 "thread": "nvmf_tgt_poll_group_000", 00:22:15.456 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:22:15.456 "listen_address": { 00:22:15.456 "trtype": "TCP", 00:22:15.456 "adrfam": "IPv4", 00:22:15.456 "traddr": "10.0.0.2", 00:22:15.456 "trsvcid": "4420" 00:22:15.456 }, 00:22:15.456 "peer_address": { 00:22:15.456 "trtype": "TCP", 00:22:15.456 "adrfam": "IPv4", 00:22:15.456 "traddr": "10.0.0.1", 00:22:15.456 "trsvcid": "42964" 00:22:15.456 }, 00:22:15.456 "auth": { 00:22:15.456 "state": "completed", 00:22:15.456 "digest": "sha384", 00:22:15.456 "dhgroup": "ffdhe6144" 00:22:15.456 } 00:22:15.456 } 00:22:15.456 ]' 00:22:15.456 12:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:15.456 12:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:15.456 12:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:15.715 12:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:15.715 12:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:15.715 12:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:15.715 12:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:15.715 12:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:15.974 12:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTNhZjFmN2Q4OTVmMTE1NjljZDhmNWNmNGM3YmJhZGFiMGExNWVlNDYwNzc0Zjhkjnv50w==: --dhchap-ctrl-secret DHHC-1:01:NWYzNWZiMDFlZjg2ZWQ4OGNjMGQ4MGYwMGZiMzZlZDnMCX9k: 00:22:15.974 12:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTNhZjFmN2Q4OTVmMTE1NjljZDhmNWNmNGM3YmJhZGFiMGExNWVlNDYwNzc0Zjhkjnv50w==: --dhchap-ctrl-secret DHHC-1:01:NWYzNWZiMDFlZjg2ZWQ4OGNjMGQ4MGYwMGZiMzZlZDnMCX9k: 00:22:16.543 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:16.543 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:16.543 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:16.543 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.543 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.543 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.543 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:16.543 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:16.543 12:43:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:16.543 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:22:16.543 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:16.543 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:16.543 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:16.543 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:16.543 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:16.543 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:22:16.543 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.543 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.802 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.802 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:16.802 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:16.803 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:17.062 00:22:17.062 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:17.062 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:17.062 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:17.322 12:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.322 12:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:17.322 12:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.322 12:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.322 12:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.322 12:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:17.322 { 00:22:17.322 "cntlid": 87, 
00:22:17.322 "qid": 0, 00:22:17.322 "state": "enabled", 00:22:17.322 "thread": "nvmf_tgt_poll_group_000", 00:22:17.322 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:22:17.322 "listen_address": { 00:22:17.322 "trtype": "TCP", 00:22:17.322 "adrfam": "IPv4", 00:22:17.322 "traddr": "10.0.0.2", 00:22:17.322 "trsvcid": "4420" 00:22:17.322 }, 00:22:17.322 "peer_address": { 00:22:17.322 "trtype": "TCP", 00:22:17.322 "adrfam": "IPv4", 00:22:17.322 "traddr": "10.0.0.1", 00:22:17.322 "trsvcid": "42994" 00:22:17.322 }, 00:22:17.322 "auth": { 00:22:17.322 "state": "completed", 00:22:17.322 "digest": "sha384", 00:22:17.322 "dhgroup": "ffdhe6144" 00:22:17.322 } 00:22:17.322 } 00:22:17.322 ]' 00:22:17.322 12:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:17.322 12:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:17.322 12:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:17.322 12:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:17.322 12:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:17.322 12:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:17.322 12:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:17.322 12:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:17.582 12:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjdkYjBkODY4ZTcwZTAyMDBhOTllMTI1ZTNhODFkODU4MTNhMWVjODg1OTlmNjBkZDM4ZjJmOWYzZDE4NDBkZl0k4bM=: 00:22:17.582 12:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjdkYjBkODY4ZTcwZTAyMDBhOTllMTI1ZTNhODFkODU4MTNhMWVjODg1OTlmNjBkZDM4ZjJmOWYzZDE4NDBkZl0k4bM=: 00:22:18.151 12:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:18.151 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:18.151 12:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:18.151 12:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.151 12:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.151 12:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.151 12:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:18.151 12:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:18.151 12:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:18.151 12:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:18.411 12:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:22:18.411 12:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:18.411 12:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:18.411 12:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:18.411 12:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:18.411 12:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:18.411 12:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:18.411 12:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.411 12:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.411 12:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.411 12:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:18.411 12:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:18.411 12:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:18.979 00:22:18.979 12:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:18.979 12:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:18.979 12:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.979 12:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.979 12:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:18.979 12:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.979 12:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.979 12:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.239 12:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:19.239 { 00:22:19.239 "cntlid": 89, 00:22:19.239 "qid": 0, 00:22:19.239 "state": "enabled", 00:22:19.239 "thread": "nvmf_tgt_poll_group_000", 00:22:19.239 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:22:19.239 "listen_address": { 00:22:19.239 "trtype": "TCP", 00:22:19.239 "adrfam": "IPv4", 00:22:19.239 "traddr": "10.0.0.2", 00:22:19.239 "trsvcid": "4420" 00:22:19.239 }, 00:22:19.239 "peer_address": { 00:22:19.239 "trtype": "TCP", 00:22:19.239 "adrfam": "IPv4", 00:22:19.239 "traddr": "10.0.0.1", 00:22:19.239 "trsvcid": "43024" 00:22:19.239 }, 00:22:19.239 "auth": { 00:22:19.239 "state": "completed", 00:22:19.239 "digest": "sha384", 00:22:19.239 "dhgroup": "ffdhe8192" 00:22:19.239 } 00:22:19.239 } 00:22:19.239 ]' 00:22:19.239 12:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:19.239 12:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:19.239 12:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:19.239 12:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:19.239 12:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:19.239 12:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:19.239 12:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:19.239 12:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:19.498 12:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGI3NGI4ODcwMTdiZjE3NTA0MGNlZWFlYTk0YmUwZWNmMzQyMjA3ZGE4YzFlZDgx1hOX6A==: --dhchap-ctrl-secret DHHC-1:03:YTNlNGRiNjc2ODFhY2JhYzE0NTFhNzY5NGNiNGMwN2RmNjA3NzVkNjdlMDdlMmRhZmNhZTc5YWU5ODgzNTM3Zlh5EVM=: 00:22:19.498 12:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGI3NGI4ODcwMTdiZjE3NTA0MGNlZWFlYTk0YmUwZWNmMzQyMjA3ZGE4YzFlZDgx1hOX6A==: --dhchap-ctrl-secret DHHC-1:03:YTNlNGRiNjc2ODFhY2JhYzE0NTFhNzY5NGNiNGMwN2RmNjA3NzVkNjdlMDdlMmRhZmNhZTc5YWU5ODgzNTM3Zlh5EVM=: 00:22:20.067 12:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:20.067 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:20.067 12:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:20.067 12:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.067 12:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.067 12:43:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.067 12:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:20.067 12:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:20.067 12:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:20.327 12:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:22:20.327 12:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:20.327 12:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:20.327 12:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:20.327 12:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:20.327 12:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:20.327 12:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.327 12:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.327 12:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.327 12:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.327 12:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.327 12:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.327 12:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.586 00:22:20.845 12:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:20.845 12:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:20.845 12:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:20.845 12:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.845 12:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:22:20.845 12:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.845 12:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.845 12:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.845 12:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:20.845 { 00:22:20.845 "cntlid": 91, 00:22:20.845 "qid": 0, 00:22:20.845 "state": "enabled", 00:22:20.845 "thread": "nvmf_tgt_poll_group_000", 00:22:20.845 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:22:20.845 "listen_address": { 00:22:20.845 "trtype": "TCP", 00:22:20.845 "adrfam": "IPv4", 00:22:20.845 "traddr": "10.0.0.2", 00:22:20.845 "trsvcid": "4420" 00:22:20.845 }, 00:22:20.845 "peer_address": { 00:22:20.845 "trtype": "TCP", 00:22:20.845 "adrfam": "IPv4", 00:22:20.845 "traddr": "10.0.0.1", 00:22:20.845 "trsvcid": "43050" 00:22:20.845 }, 00:22:20.845 "auth": { 00:22:20.845 "state": "completed", 00:22:20.845 "digest": "sha384", 00:22:20.845 "dhgroup": "ffdhe8192" 00:22:20.845 } 00:22:20.845 } 00:22:20.845 ]' 00:22:20.846 12:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:21.104 12:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:21.104 12:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:21.104 12:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:21.105 12:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:21.105 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:21.105 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:21.105 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:21.364 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGViMjVlNTA4ZTdiZTBhZTAyYTM4MTRkY2ZmNTI3ZWTrNpHz: --dhchap-ctrl-secret DHHC-1:02:ZThlMjA4YjFhZWE2NzgxODc1YmY1Yzc2ZDEwZDVhYzcyMTY3MWQzMmFjZTQwMTUy9S132A==: 00:22:21.364 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGViMjVlNTA4ZTdiZTBhZTAyYTM4MTRkY2ZmNTI3ZWTrNpHz: --dhchap-ctrl-secret DHHC-1:02:ZThlMjA4YjFhZWE2NzgxODc1YmY1Yzc2ZDEwZDVhYzcyMTY3MWQzMmFjZTQwMTUy9S132A==: 00:22:21.931 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:21.931 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:21.931 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:21.931 12:43:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.931 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.931 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.931 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:21.931 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:21.931 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:22.190 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:22:22.190 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:22.190 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:22.190 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:22.190 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:22.190 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:22.190 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:22.190 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.190 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.190 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.190 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:22.190 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:22.190 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:22.449 00:22:22.709 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:22.709 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:22.709 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:22.709 12:43:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.709 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:22.709 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.709 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.709 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.709 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:22.709 { 00:22:22.709 "cntlid": 93, 00:22:22.709 "qid": 0, 00:22:22.709 "state": "enabled", 00:22:22.709 "thread": "nvmf_tgt_poll_group_000", 00:22:22.709 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:22:22.709 "listen_address": { 00:22:22.709 "trtype": "TCP", 00:22:22.709 "adrfam": "IPv4", 00:22:22.709 "traddr": "10.0.0.2", 00:22:22.709 "trsvcid": "4420" 00:22:22.709 }, 00:22:22.709 "peer_address": { 00:22:22.709 "trtype": "TCP", 00:22:22.709 "adrfam": "IPv4", 00:22:22.709 "traddr": "10.0.0.1", 00:22:22.709 "trsvcid": "52520" 00:22:22.709 }, 00:22:22.709 "auth": { 00:22:22.709 "state": "completed", 00:22:22.709 "digest": "sha384", 00:22:22.709 "dhgroup": "ffdhe8192" 00:22:22.709 } 00:22:22.709 } 00:22:22.709 ]' 00:22:22.709 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:22.709 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:22.709 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:22.969 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:22.969 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:22.969 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:22.969 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:22.969 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:23.228 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTNhZjFmN2Q4OTVmMTE1NjljZDhmNWNmNGM3YmJhZGFiMGExNWVlNDYwNzc0Zjhkjnv50w==: --dhchap-ctrl-secret DHHC-1:01:NWYzNWZiMDFlZjg2ZWQ4OGNjMGQ4MGYwMGZiMzZlZDnMCX9k: 00:22:23.228 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTNhZjFmN2Q4OTVmMTE1NjljZDhmNWNmNGM3YmJhZGFiMGExNWVlNDYwNzc0Zjhkjnv50w==: --dhchap-ctrl-secret DHHC-1:01:NWYzNWZiMDFlZjg2ZWQ4OGNjMGQ4MGYwMGZiMzZlZDnMCX9k: 00:22:23.796 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:23.796 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:23.796 12:43:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:23.796 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.796 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.796 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.796 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:23.796 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:23.796 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:23.796 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:22:23.796 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:23.796 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:23.796 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:23.796 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:23.796 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:23.797 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:22:23.797 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.797 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.797 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.797 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:23.797 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:23.797 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:24.365 00:22:24.365 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:24.365 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:24.365 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:24.624 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.624 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:24.624 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.624 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.624 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.624 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:24.624 { 00:22:24.624 "cntlid": 95, 00:22:24.624 "qid": 0, 00:22:24.624 "state": "enabled", 00:22:24.624 "thread": "nvmf_tgt_poll_group_000", 00:22:24.625 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:22:24.625 "listen_address": { 00:22:24.625 "trtype": "TCP", 00:22:24.625 "adrfam": "IPv4", 00:22:24.625 "traddr": "10.0.0.2", 00:22:24.625 "trsvcid": "4420" 00:22:24.625 }, 00:22:24.625 "peer_address": { 00:22:24.625 "trtype": "TCP", 00:22:24.625 "adrfam": "IPv4", 00:22:24.625 "traddr": "10.0.0.1", 00:22:24.625 "trsvcid": "52538" 00:22:24.625 }, 00:22:24.625 "auth": { 00:22:24.625 "state": "completed", 00:22:24.625 "digest": "sha384", 00:22:24.625 "dhgroup": "ffdhe8192" 00:22:24.625 } 00:22:24.625 } 00:22:24.625 ]' 00:22:24.625 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:24.625 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:24.625 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:24.625 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:24.625 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:24.625 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:24.625 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:24.625 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:24.884 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjdkYjBkODY4ZTcwZTAyMDBhOTllMTI1ZTNhODFkODU4MTNhMWVjODg1OTlmNjBkZDM4ZjJmOWYzZDE4NDBkZl0k4bM=: 00:22:24.885 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjdkYjBkODY4ZTcwZTAyMDBhOTllMTI1ZTNhODFkODU4MTNhMWVjODg1OTlmNjBkZDM4ZjJmOWYzZDE4NDBkZl0k4bM=: 00:22:25.453 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:25.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:25.453 12:43:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:25.453 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.453 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.453 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.453 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:22:25.453 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:25.453 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:25.453 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:25.453 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:25.713 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:22:25.713 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:25.713 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:25.713 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:25.713 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:25.713 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:25.713 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:25.713 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.713 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.713 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.713 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:25.713 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:25.713 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:25.972 00:22:25.972 
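After each attach, the script does not just trust the attach return code: it reads the qpair list back from the target and asserts that the negotiated auth parameters match what this iteration configured. A sketch of those checks for the sha512/null pass above, using the same jq filters that appear in the trace:

# Confirm the host-side controller actually came up
ctrl=$(rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
[[ $ctrl == nvme0 ]]
# Ask the target what was negotiated on the admin qpair
qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]   # "null" = no FFDHE exchange
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
# Tear down before the next digest/dhgroup/key combination
rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0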
12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:25.972 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:25.972 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:26.231 12:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.231 12:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:26.231 12:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.231 12:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.231 12:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.231 12:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:26.231 { 00:22:26.231 "cntlid": 97, 00:22:26.231 "qid": 0, 00:22:26.231 "state": "enabled", 00:22:26.231 "thread": "nvmf_tgt_poll_group_000", 00:22:26.231 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:22:26.231 "listen_address": { 00:22:26.231 "trtype": "TCP", 00:22:26.231 "adrfam": "IPv4", 00:22:26.231 "traddr": "10.0.0.2", 00:22:26.231 "trsvcid": "4420" 00:22:26.231 }, 00:22:26.231 "peer_address": { 00:22:26.231 "trtype": "TCP", 00:22:26.231 "adrfam": "IPv4", 00:22:26.231 "traddr": "10.0.0.1", 00:22:26.231 "trsvcid": "52556" 00:22:26.231 }, 00:22:26.231 "auth": { 00:22:26.231 "state": "completed", 00:22:26.231 "digest": "sha512", 00:22:26.231 "dhgroup": "null" 00:22:26.231 } 00:22:26.231 } 00:22:26.231 ]' 00:22:26.231 12:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:26.231 12:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:26.231 12:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:26.231 12:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:26.231 12:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:26.231 12:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:26.231 12:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:26.231 12:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:26.490 12:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGI3NGI4ODcwMTdiZjE3NTA0MGNlZWFlYTk0YmUwZWNmMzQyMjA3ZGE4YzFlZDgx1hOX6A==: --dhchap-ctrl-secret DHHC-1:03:YTNlNGRiNjc2ODFhY2JhYzE0NTFhNzY5NGNiNGMwN2RmNjA3NzVkNjdlMDdlMmRhZmNhZTc5YWU5ODgzNTM3Zlh5EVM=: 00:22:26.490 12:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 
801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGI3NGI4ODcwMTdiZjE3NTA0MGNlZWFlYTk0YmUwZWNmMzQyMjA3ZGE4YzFlZDgx1hOX6A==: --dhchap-ctrl-secret DHHC-1:03:YTNlNGRiNjc2ODFhY2JhYzE0NTFhNzY5NGNiNGMwN2RmNjA3NzVkNjdlMDdlMmRhZmNhZTc5YWU5ODgzNTM3Zlh5EVM=: 00:22:27.065 12:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:27.065 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:27.065 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:27.065 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.065 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.065 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.065 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:27.065 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:27.065 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:27.371 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:22:27.371 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:27.371 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:27.371 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:27.371 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:27.371 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:27.371 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:27.371 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.371 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.371 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.371 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:27.371 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:27.371 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:27.650 00:22:27.650 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:27.650 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:27.650 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:27.650 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.650 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:27.650 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.650 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.650 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.650 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:27.650 { 00:22:27.650 "cntlid": 99, 00:22:27.650 "qid": 0, 00:22:27.650 "state": "enabled", 00:22:27.650 "thread": "nvmf_tgt_poll_group_000", 00:22:27.650 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:22:27.650 "listen_address": { 00:22:27.650 "trtype": "TCP", 00:22:27.650 "adrfam": "IPv4", 00:22:27.650 "traddr": "10.0.0.2", 00:22:27.650 "trsvcid": "4420" 00:22:27.650 }, 00:22:27.650 "peer_address": { 00:22:27.650 "trtype": "TCP", 00:22:27.650 "adrfam": "IPv4", 00:22:27.650 "traddr": "10.0.0.1", 00:22:27.650 "trsvcid": "52588" 00:22:27.650 }, 00:22:27.650 "auth": { 00:22:27.650 "state": "completed", 00:22:27.650 "digest": "sha512", 00:22:27.650 "dhgroup": "null" 00:22:27.650 } 00:22:27.650 } 00:22:27.650 ]' 00:22:27.650 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:27.932 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:27.932 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:27.932 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:27.932 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:27.932 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:27.932 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:27.932 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:28.216 12:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGViMjVlNTA4ZTdiZTBhZTAyYTM4MTRkY2ZmNTI3ZWTrNpHz: --dhchap-ctrl-secret DHHC-1:02:ZThlMjA4YjFhZWE2NzgxODc1YmY1Yzc2ZDEwZDVhYzcyMTY3MWQzMmFjZTQwMTUy9S132A==: 00:22:28.216 12:43:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGViMjVlNTA4ZTdiZTBhZTAyYTM4MTRkY2ZmNTI3ZWTrNpHz: --dhchap-ctrl-secret DHHC-1:02:ZThlMjA4YjFhZWE2NzgxODc1YmY1Yzc2ZDEwZDVhYzcyMTY3MWQzMmFjZTQwMTUy9S132A==: 00:22:28.827 12:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:28.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:28.827 12:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:28.827 12:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.827 12:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.827 12:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.827 12:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:28.827 12:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:28.827 12:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:28.827 12:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:22:28.827 12:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:28.827 12:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:28.827 12:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:28.827 12:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:28.827 12:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:28.827 12:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:28.827 12:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.827 12:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.827 12:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.827 12:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:28.827 12:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
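Each cycle then repeats the same handshake from the kernel initiator instead of the SPDK host stack, passing the secrets inline in the spec's DHHC-1 representation (the two-digit field after "DHHC-1:" appears to encode the secret's hash transform, 00 meaning unhashed). A sketch with the base64 secret bodies deliberately abbreviated, and $HOSTNQN/$HOSTID again standing in for the uuid identifiers from this run:

# In-band DH-HMAC-CHAP via nvme-cli (flags as used throughout this trace)
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
        --dhchap-secret 'DHHC-1:02:...' --dhchap-ctrl-secret 'DHHC-1:01:...'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
# Revoke the host so the next iteration starts from a clean subsystem
rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"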
00:22:28.827 12:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:29.112 00:22:29.112 12:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:29.112 12:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:29.112 12:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:29.395 12:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.395 12:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:29.395 12:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.395 12:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.395 12:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.395 12:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:29.395 { 00:22:29.395 "cntlid": 101, 00:22:29.395 "qid": 0, 00:22:29.395 "state": "enabled", 00:22:29.395 "thread": "nvmf_tgt_poll_group_000", 00:22:29.395 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:22:29.395 "listen_address": { 00:22:29.395 "trtype": "TCP", 00:22:29.395 "adrfam": "IPv4", 00:22:29.395 "traddr": "10.0.0.2", 00:22:29.395 "trsvcid": "4420" 00:22:29.395 }, 00:22:29.395 "peer_address": { 00:22:29.395 "trtype": "TCP", 00:22:29.395 "adrfam": "IPv4", 00:22:29.395 "traddr": "10.0.0.1", 00:22:29.395 "trsvcid": "52602" 00:22:29.395 }, 00:22:29.395 "auth": { 00:22:29.395 "state": "completed", 00:22:29.395 "digest": "sha512", 00:22:29.395 "dhgroup": "null" 00:22:29.395 } 00:22:29.395 } 00:22:29.395 ]' 00:22:29.395 12:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:29.395 12:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:29.395 12:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:29.395 12:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:29.395 12:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:29.395 12:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:29.395 12:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:29.395 12:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:29.676 12:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:ZTNhZjFmN2Q4OTVmMTE1NjljZDhmNWNmNGM3YmJhZGFiMGExNWVlNDYwNzc0Zjhkjnv50w==: --dhchap-ctrl-secret DHHC-1:01:NWYzNWZiMDFlZjg2ZWQ4OGNjMGQ4MGYwMGZiMzZlZDnMCX9k: 00:22:29.676 12:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTNhZjFmN2Q4OTVmMTE1NjljZDhmNWNmNGM3YmJhZGFiMGExNWVlNDYwNzc0Zjhkjnv50w==: --dhchap-ctrl-secret DHHC-1:01:NWYzNWZiMDFlZjg2ZWQ4OGNjMGQ4MGYwMGZiMzZlZDnMCX9k: 00:22:30.301 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:30.301 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:30.301 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:30.301 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.301 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.301 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.301 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:30.301 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:30.301 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:30.595 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:22:30.595 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:30.595 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:30.595 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:30.595 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:30.595 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:30.595 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:22:30.595 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.595 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.595 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.595 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:30.595 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:30.595 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:30.595 00:22:30.595 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:30.595 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.595 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:30.876 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.876 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:30.876 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.876 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.876 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.876 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:30.876 { 00:22:30.876 "cntlid": 103, 00:22:30.876 "qid": 0, 00:22:30.876 "state": "enabled", 00:22:30.876 "thread": "nvmf_tgt_poll_group_000", 00:22:30.876 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:22:30.876 "listen_address": { 00:22:30.876 "trtype": "TCP", 00:22:30.876 "adrfam": "IPv4", 00:22:30.876 "traddr": "10.0.0.2", 00:22:30.876 "trsvcid": "4420" 00:22:30.876 }, 00:22:30.876 "peer_address": { 00:22:30.876 "trtype": "TCP", 00:22:30.876 "adrfam": "IPv4", 00:22:30.876 "traddr": "10.0.0.1", 00:22:30.876 "trsvcid": "52640" 00:22:30.876 }, 00:22:30.876 "auth": { 00:22:30.876 "state": "completed", 00:22:30.876 "digest": "sha512", 00:22:30.876 "dhgroup": "null" 00:22:30.876 } 00:22:30.876 } 00:22:30.876 ]' 00:22:30.876 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:30.876 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:30.877 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:30.877 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:30.877 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:31.173 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:31.173 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:31.173 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:31.173 12:43:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjdkYjBkODY4ZTcwZTAyMDBhOTllMTI1ZTNhODFkODU4MTNhMWVjODg1OTlmNjBkZDM4ZjJmOWYzZDE4NDBkZl0k4bM=: 00:22:31.173 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjdkYjBkODY4ZTcwZTAyMDBhOTllMTI1ZTNhODFkODU4MTNhMWVjODg1OTlmNjBkZDM4ZjJmOWYzZDE4NDBkZl0k4bM=: 00:22:31.805 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:31.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:31.805 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:31.805 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.805 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.805 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.805 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:31.805 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:31.805 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:31.805 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:32.064 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:22:32.064 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:32.064 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:32.064 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:32.064 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:32.064 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:32.065 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:32.065 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.065 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.065 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.065 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
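
The outer loops of target/auth.sh rerun that pass once per DH group (null above, ffdhe2048 here, ffdhe3072 later) and once per key index. Each pass begins by re-restricting what the host initiator may negotiate, as in the bdev_nvme_set_options calls traced above; pinning a single digest and a single group makes the parameters asserted later fully determined. A sketch of the host-side restriction:

    # Host side: permit exactly one digest and one DH group for DH-HMAC-CHAP,
    # so this pass can only negotiate sha512 + ffdhe2048.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
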
00:22:32.065 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:32.065 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:32.324 00:22:32.324 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:32.324 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:32.324 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.583 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.583 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:32.583 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.583 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.583 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.583 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:32.583 { 00:22:32.583 "cntlid": 105, 00:22:32.583 "qid": 0, 00:22:32.583 "state": "enabled", 00:22:32.583 "thread": "nvmf_tgt_poll_group_000", 00:22:32.583 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:22:32.583 "listen_address": { 00:22:32.583 "trtype": "TCP", 00:22:32.583 "adrfam": "IPv4", 00:22:32.583 "traddr": "10.0.0.2", 00:22:32.583 "trsvcid": "4420" 00:22:32.583 }, 00:22:32.583 "peer_address": { 00:22:32.583 "trtype": "TCP", 00:22:32.583 "adrfam": "IPv4", 00:22:32.583 "traddr": "10.0.0.1", 00:22:32.583 "trsvcid": "51266" 00:22:32.583 }, 00:22:32.583 "auth": { 00:22:32.583 "state": "completed", 00:22:32.583 "digest": "sha512", 00:22:32.583 "dhgroup": "ffdhe2048" 00:22:32.583 } 00:22:32.583 } 00:22:32.583 ]' 00:22:32.583 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:32.583 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:32.583 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:32.583 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:32.583 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:32.583 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:32.583 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:32.583 12:43:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:32.843 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGI3NGI4ODcwMTdiZjE3NTA0MGNlZWFlYTk0YmUwZWNmMzQyMjA3ZGE4YzFlZDgx1hOX6A==: --dhchap-ctrl-secret DHHC-1:03:YTNlNGRiNjc2ODFhY2JhYzE0NTFhNzY5NGNiNGMwN2RmNjA3NzVkNjdlMDdlMmRhZmNhZTc5YWU5ODgzNTM3Zlh5EVM=: 00:22:32.843 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGI3NGI4ODcwMTdiZjE3NTA0MGNlZWFlYTk0YmUwZWNmMzQyMjA3ZGE4YzFlZDgx1hOX6A==: --dhchap-ctrl-secret DHHC-1:03:YTNlNGRiNjc2ODFhY2JhYzE0NTFhNzY5NGNiNGMwN2RmNjA3NzVkNjdlMDdlMmRhZmNhZTc5YWU5ODgzNTM3Zlh5EVM=: 00:22:33.411 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:33.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:33.411 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:33.411 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.411 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.411 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.411 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:33.411 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:33.411 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:33.670 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:22:33.670 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:33.670 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:33.670 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:33.670 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:33.670 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:33.671 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:33.671 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.671 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:33.671 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.671 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:33.671 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:33.671 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:33.930 00:22:33.930 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:33.930 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:33.930 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:33.930 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.930 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:33.930 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.930 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.930 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.930 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:33.930 { 00:22:33.930 "cntlid": 107, 00:22:33.930 "qid": 0, 00:22:33.930 "state": "enabled", 00:22:33.930 "thread": "nvmf_tgt_poll_group_000", 00:22:33.930 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:22:33.930 "listen_address": { 00:22:33.930 "trtype": "TCP", 00:22:33.930 "adrfam": "IPv4", 00:22:33.930 "traddr": "10.0.0.2", 00:22:33.930 "trsvcid": "4420" 00:22:33.930 }, 00:22:33.930 "peer_address": { 00:22:33.930 "trtype": "TCP", 00:22:33.930 "adrfam": "IPv4", 00:22:33.930 "traddr": "10.0.0.1", 00:22:33.930 "trsvcid": "51298" 00:22:33.930 }, 00:22:33.930 "auth": { 00:22:33.930 "state": "completed", 00:22:33.930 "digest": "sha512", 00:22:33.930 "dhgroup": "ffdhe2048" 00:22:33.930 } 00:22:33.930 } 00:22:33.930 ]' 00:22:33.930 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:34.189 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:34.189 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:34.189 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:34.189 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:22:34.189 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:34.189 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:34.189 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:34.448 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGViMjVlNTA4ZTdiZTBhZTAyYTM4MTRkY2ZmNTI3ZWTrNpHz: --dhchap-ctrl-secret DHHC-1:02:ZThlMjA4YjFhZWE2NzgxODc1YmY1Yzc2ZDEwZDVhYzcyMTY3MWQzMmFjZTQwMTUy9S132A==: 00:22:34.448 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGViMjVlNTA4ZTdiZTBhZTAyYTM4MTRkY2ZmNTI3ZWTrNpHz: --dhchap-ctrl-secret DHHC-1:02:ZThlMjA4YjFhZWE2NzgxODc1YmY1Yzc2ZDEwZDVhYzcyMTY3MWQzMmFjZTQwMTUy9S132A==: 00:22:35.016 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:35.016 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:35.016 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:35.016 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.016 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.016 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.016 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:35.016 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:35.016 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:35.016 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:22:35.016 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:35.016 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:35.016 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:35.016 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:35.016 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:35.275 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
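
Besides the SPDK host leg, each pass also exercises the kernel initiator through nvme-cli, as in the nvme connect/disconnect sequence traced above. Here the secrets are passed in their textual form rather than as keyring names (the DHHC-1:<nn>: prefix is part of the secret's standard textual encoding). A sketch using the key1/ckey1 secrets from this run, which are test-only values:

    # Kernel initiator leg: connect with explicit DH-HMAC-CHAP secrets,
    # then tear the controller down again.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 \
        --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 \
        --dhchap-secret DHHC-1:01:OGViMjVlNTA4ZTdiZTBhZTAyYTM4MTRkY2ZmNTI3ZWTrNpHz: \
        --dhchap-ctrl-secret DHHC-1:02:ZThlMjA4YjFhZWE2NzgxODc1YmY1Yzc2ZDEwZDVhYzcyMTY3MWQzMmFjZTQwMTUy9S132A==:
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
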
00:22:35.275 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.275 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.275 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.275 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:35.275 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:35.275 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:35.275 00:22:35.535 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:35.535 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:35.535 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:35.535 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.535 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:35.535 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.535 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.535 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.535 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:35.535 { 00:22:35.535 "cntlid": 109, 00:22:35.535 "qid": 0, 00:22:35.535 "state": "enabled", 00:22:35.535 "thread": "nvmf_tgt_poll_group_000", 00:22:35.535 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:22:35.535 "listen_address": { 00:22:35.535 "trtype": "TCP", 00:22:35.535 "adrfam": "IPv4", 00:22:35.535 "traddr": "10.0.0.2", 00:22:35.535 "trsvcid": "4420" 00:22:35.535 }, 00:22:35.535 "peer_address": { 00:22:35.535 "trtype": "TCP", 00:22:35.535 "adrfam": "IPv4", 00:22:35.535 "traddr": "10.0.0.1", 00:22:35.535 "trsvcid": "51326" 00:22:35.535 }, 00:22:35.535 "auth": { 00:22:35.535 "state": "completed", 00:22:35.535 "digest": "sha512", 00:22:35.535 "dhgroup": "ffdhe2048" 00:22:35.535 } 00:22:35.535 } 00:22:35.535 ]' 00:22:35.535 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:35.794 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:35.794 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:35.794 12:44:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:35.794 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:35.794 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:35.794 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:35.794 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:36.052 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTNhZjFmN2Q4OTVmMTE1NjljZDhmNWNmNGM3YmJhZGFiMGExNWVlNDYwNzc0Zjhkjnv50w==: --dhchap-ctrl-secret DHHC-1:01:NWYzNWZiMDFlZjg2ZWQ4OGNjMGQ4MGYwMGZiMzZlZDnMCX9k: 00:22:36.053 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTNhZjFmN2Q4OTVmMTE1NjljZDhmNWNmNGM3YmJhZGFiMGExNWVlNDYwNzc0Zjhkjnv50w==: --dhchap-ctrl-secret DHHC-1:01:NWYzNWZiMDFlZjg2ZWQ4OGNjMGQ4MGYwMGZiMzZlZDnMCX9k: 00:22:36.619 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:36.619 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:36.619 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:36.619 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.619 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.619 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.619 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:36.619 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:36.619 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:36.878 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:22:36.878 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:36.878 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:36.878 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:36.878 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:36.878 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:36.878 12:44:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:22:36.878 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.878 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.878 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.878 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:36.878 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:36.878 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:36.878 00:22:37.137 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:37.137 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:37.137 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:37.137 12:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.137 12:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:37.137 12:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.137 12:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.138 12:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.138 12:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:37.138 { 00:22:37.138 "cntlid": 111, 00:22:37.138 "qid": 0, 00:22:37.138 "state": "enabled", 00:22:37.138 "thread": "nvmf_tgt_poll_group_000", 00:22:37.138 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:22:37.138 "listen_address": { 00:22:37.138 "trtype": "TCP", 00:22:37.138 "adrfam": "IPv4", 00:22:37.138 "traddr": "10.0.0.2", 00:22:37.138 "trsvcid": "4420" 00:22:37.138 }, 00:22:37.138 "peer_address": { 00:22:37.138 "trtype": "TCP", 00:22:37.138 "adrfam": "IPv4", 00:22:37.138 "traddr": "10.0.0.1", 00:22:37.138 "trsvcid": "51356" 00:22:37.138 }, 00:22:37.138 "auth": { 00:22:37.138 "state": "completed", 00:22:37.138 "digest": "sha512", 00:22:37.138 "dhgroup": "ffdhe2048" 00:22:37.138 } 00:22:37.138 } 00:22:37.138 ]' 00:22:37.138 12:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:37.397 12:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:37.397 
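
Success is not just "the controller attached": after each attach the script queries the target for the live qpair and asserts the negotiated authentication parameters, as in the digest check just above and the dhgroup/state checks that follow. The verification reduces to this pattern (the qpairs variable name is illustrative):

    # Target side: fetch the qpairs for the subsystem and assert that
    # DH-HMAC-CHAP completed with the expected digest and DH group.
    qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
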
12:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:37.397 12:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:37.397 12:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:37.397 12:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:37.397 12:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:37.397 12:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:37.656 12:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjdkYjBkODY4ZTcwZTAyMDBhOTllMTI1ZTNhODFkODU4MTNhMWVjODg1OTlmNjBkZDM4ZjJmOWYzZDE4NDBkZl0k4bM=: 00:22:37.656 12:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjdkYjBkODY4ZTcwZTAyMDBhOTllMTI1ZTNhODFkODU4MTNhMWVjODg1OTlmNjBkZDM4ZjJmOWYzZDE4NDBkZl0k4bM=: 00:22:38.224 12:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:38.224 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:38.224 12:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:38.224 12:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.224 12:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.224 12:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.224 12:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:38.224 12:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:38.224 12:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:38.225 12:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:38.484 12:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:22:38.484 12:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:38.484 12:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:38.484 12:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:38.484 12:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:38.484 12:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:38.484 12:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:38.484 12:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.484 12:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.484 12:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.484 12:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:38.484 12:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:38.484 12:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:38.743 00:22:38.743 12:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:38.743 12:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:38.743 12:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:38.743 12:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.743 12:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:38.743 12:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.743 12:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.743 12:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.743 12:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:38.743 { 00:22:38.743 "cntlid": 113, 00:22:38.743 "qid": 0, 00:22:38.743 "state": "enabled", 00:22:38.743 "thread": "nvmf_tgt_poll_group_000", 00:22:38.743 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:22:38.743 "listen_address": { 00:22:38.743 "trtype": "TCP", 00:22:38.743 "adrfam": "IPv4", 00:22:38.743 "traddr": "10.0.0.2", 00:22:38.743 "trsvcid": "4420" 00:22:38.743 }, 00:22:38.743 "peer_address": { 00:22:38.743 "trtype": "TCP", 00:22:38.743 "adrfam": "IPv4", 00:22:38.743 "traddr": "10.0.0.1", 00:22:38.743 "trsvcid": "51386" 00:22:38.743 }, 00:22:38.743 "auth": { 00:22:38.743 "state": "completed", 00:22:38.743 "digest": "sha512", 00:22:38.743 "dhgroup": "ffdhe3072" 00:22:38.743 } 00:22:38.743 } 00:22:38.743 ]' 00:22:38.743 12:44:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:39.003 12:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:39.003 12:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:39.003 12:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:39.003 12:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:39.003 12:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:39.003 12:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:39.003 12:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:39.261 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGI3NGI4ODcwMTdiZjE3NTA0MGNlZWFlYTk0YmUwZWNmMzQyMjA3ZGE4YzFlZDgx1hOX6A==: --dhchap-ctrl-secret DHHC-1:03:YTNlNGRiNjc2ODFhY2JhYzE0NTFhNzY5NGNiNGMwN2RmNjA3NzVkNjdlMDdlMmRhZmNhZTc5YWU5ODgzNTM3Zlh5EVM=: 00:22:39.261 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGI3NGI4ODcwMTdiZjE3NTA0MGNlZWFlYTk0YmUwZWNmMzQyMjA3ZGE4YzFlZDgx1hOX6A==: --dhchap-ctrl-secret DHHC-1:03:YTNlNGRiNjc2ODFhY2JhYzE0NTFhNzY5NGNiNGMwN2RmNjA3NzVkNjdlMDdlMmRhZmNhZTc5YWU5ODgzNTM3Zlh5EVM=: 00:22:39.829 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:39.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:39.829 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:39.829 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.829 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.829 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.829 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:39.829 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:39.829 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:40.088 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:22:40.088 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:40.088 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:22:40.088 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:40.088 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:40.088 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:40.088 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:40.088 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.088 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.088 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.088 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:40.088 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:40.088 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:40.347 00:22:40.347 12:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:40.347 12:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:40.347 12:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:40.347 12:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.348 12:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:40.348 12:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.348 12:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.348 12:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.348 12:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:40.348 { 00:22:40.348 "cntlid": 115, 00:22:40.348 "qid": 0, 00:22:40.348 "state": "enabled", 00:22:40.348 "thread": "nvmf_tgt_poll_group_000", 00:22:40.348 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:22:40.348 "listen_address": { 00:22:40.348 "trtype": "TCP", 00:22:40.348 "adrfam": "IPv4", 00:22:40.348 "traddr": "10.0.0.2", 00:22:40.348 "trsvcid": "4420" 00:22:40.348 }, 00:22:40.348 "peer_address": { 00:22:40.348 "trtype": "TCP", 00:22:40.348 "adrfam": "IPv4", 
00:22:40.348 "traddr": "10.0.0.1", 00:22:40.348 "trsvcid": "51414" 00:22:40.348 }, 00:22:40.348 "auth": { 00:22:40.348 "state": "completed", 00:22:40.348 "digest": "sha512", 00:22:40.348 "dhgroup": "ffdhe3072" 00:22:40.348 } 00:22:40.348 } 00:22:40.348 ]' 00:22:40.348 12:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:40.607 12:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:40.607 12:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:40.607 12:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:40.607 12:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:40.607 12:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:40.607 12:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:40.607 12:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:40.866 12:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGViMjVlNTA4ZTdiZTBhZTAyYTM4MTRkY2ZmNTI3ZWTrNpHz: --dhchap-ctrl-secret DHHC-1:02:ZThlMjA4YjFhZWE2NzgxODc1YmY1Yzc2ZDEwZDVhYzcyMTY3MWQzMmFjZTQwMTUy9S132A==: 00:22:40.866 12:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGViMjVlNTA4ZTdiZTBhZTAyYTM4MTRkY2ZmNTI3ZWTrNpHz: --dhchap-ctrl-secret DHHC-1:02:ZThlMjA4YjFhZWE2NzgxODc1YmY1Yzc2ZDEwZDVhYzcyMTY3MWQzMmFjZTQwMTUy9S132A==: 00:22:41.434 12:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:41.434 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:41.434 12:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:41.434 12:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.434 12:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.434 12:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.434 12:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:41.434 12:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:41.434 12:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:41.693 12:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
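
One detail visible in the connect_authenticate expansion traced just below: the controller key is optional, and the script encodes that with bash's :+ expansion. When ckeys[$3] is empty for the given key index ($3 being the key-index argument; index 3 evidently has no controller key in this run, since the key3 passes above carry no --dhchap-ctrlr-key), the array expands to nothing and the pass tests unidirectional authentication instead:

    # Verbatim from the trace: yields "--dhchap-ctrlr-key ckey$3" only when
    # a controller key exists for this key index, else an empty array.
    ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
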
00:22:41.693 12:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:41.693 12:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:41.693 12:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:41.693 12:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:41.693 12:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:41.693 12:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:41.693 12:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.693 12:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.693 12:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.693 12:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:41.694 12:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:41.694 12:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:41.952 00:22:41.952 12:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:41.952 12:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:41.952 12:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:41.952 12:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.952 12:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:41.952 12:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.952 12:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.952 12:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.952 12:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:41.952 { 00:22:41.952 "cntlid": 117, 00:22:41.952 "qid": 0, 00:22:41.952 "state": "enabled", 00:22:41.952 "thread": "nvmf_tgt_poll_group_000", 00:22:41.952 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:22:41.952 "listen_address": { 00:22:41.952 "trtype": "TCP", 
00:22:41.952 "adrfam": "IPv4", 00:22:41.952 "traddr": "10.0.0.2", 00:22:41.952 "trsvcid": "4420" 00:22:41.952 }, 00:22:41.952 "peer_address": { 00:22:41.952 "trtype": "TCP", 00:22:41.952 "adrfam": "IPv4", 00:22:41.952 "traddr": "10.0.0.1", 00:22:41.952 "trsvcid": "42658" 00:22:41.952 }, 00:22:41.952 "auth": { 00:22:41.952 "state": "completed", 00:22:41.952 "digest": "sha512", 00:22:41.952 "dhgroup": "ffdhe3072" 00:22:41.952 } 00:22:41.952 } 00:22:41.952 ]' 00:22:41.952 12:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:42.211 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:42.211 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:42.211 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:42.211 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:42.211 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:42.211 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:42.212 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:42.471 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTNhZjFmN2Q4OTVmMTE1NjljZDhmNWNmNGM3YmJhZGFiMGExNWVlNDYwNzc0Zjhkjnv50w==: --dhchap-ctrl-secret DHHC-1:01:NWYzNWZiMDFlZjg2ZWQ4OGNjMGQ4MGYwMGZiMzZlZDnMCX9k: 00:22:42.471 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTNhZjFmN2Q4OTVmMTE1NjljZDhmNWNmNGM3YmJhZGFiMGExNWVlNDYwNzc0Zjhkjnv50w==: --dhchap-ctrl-secret DHHC-1:01:NWYzNWZiMDFlZjg2ZWQ4OGNjMGQ4MGYwMGZiMzZlZDnMCX9k: 00:22:43.039 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:43.039 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:43.039 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:43.039 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.039 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.039 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.039 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:43.039 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:43.039 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:43.039 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:22:43.039 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:43.039 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:43.039 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:43.039 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:43.039 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:43.039 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:22:43.039 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.039 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.039 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.039 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:43.039 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:43.040 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:43.299 00:22:43.557 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:43.557 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:43.557 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:43.557 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.557 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:43.557 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.557 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.557 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.557 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:43.557 { 00:22:43.557 "cntlid": 119, 00:22:43.557 "qid": 0, 00:22:43.557 "state": "enabled", 00:22:43.557 "thread": "nvmf_tgt_poll_group_000", 00:22:43.557 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:22:43.557 "listen_address": { 00:22:43.557 "trtype": "TCP", 00:22:43.557 "adrfam": "IPv4", 00:22:43.557 "traddr": "10.0.0.2", 00:22:43.557 "trsvcid": "4420" 00:22:43.557 }, 00:22:43.557 "peer_address": { 00:22:43.557 "trtype": "TCP", 00:22:43.557 "adrfam": "IPv4", 00:22:43.557 "traddr": "10.0.0.1", 00:22:43.557 "trsvcid": "42704" 00:22:43.557 }, 00:22:43.557 "auth": { 00:22:43.557 "state": "completed", 00:22:43.557 "digest": "sha512", 00:22:43.557 "dhgroup": "ffdhe3072" 00:22:43.557 } 00:22:43.557 } 00:22:43.557 ]' 00:22:43.557 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:43.557 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:43.557 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:43.815 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:43.815 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:43.815 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:43.815 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:43.815 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:44.074 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjdkYjBkODY4ZTcwZTAyMDBhOTllMTI1ZTNhODFkODU4MTNhMWVjODg1OTlmNjBkZDM4ZjJmOWYzZDE4NDBkZl0k4bM=: 00:22:44.074 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjdkYjBkODY4ZTcwZTAyMDBhOTllMTI1ZTNhODFkODU4MTNhMWVjODg1OTlmNjBkZDM4ZjJmOWYzZDE4NDBkZl0k4bM=: 00:22:44.642 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:44.642 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:44.642 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:44.642 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.642 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.642 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.642 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:44.642 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:44.642 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:44.642 12:44:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:44.642 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:22:44.642 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:44.642 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:44.642 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:44.642 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:44.642 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:44.642 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:44.643 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.643 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.643 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.643 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:44.643 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:44.643 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:44.902 00:22:45.164 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:45.165 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:45.165 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:45.165 12:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.165 12:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:45.165 12:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.165 12:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.165 12:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.165 12:44:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:45.165 { 00:22:45.165 "cntlid": 121, 00:22:45.165 "qid": 0, 00:22:45.165 "state": "enabled", 00:22:45.165 "thread": "nvmf_tgt_poll_group_000", 00:22:45.165 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:22:45.165 "listen_address": { 00:22:45.165 "trtype": "TCP", 00:22:45.165 "adrfam": "IPv4", 00:22:45.165 "traddr": "10.0.0.2", 00:22:45.165 "trsvcid": "4420" 00:22:45.165 }, 00:22:45.165 "peer_address": { 00:22:45.165 "trtype": "TCP", 00:22:45.165 "adrfam": "IPv4", 00:22:45.165 "traddr": "10.0.0.1", 00:22:45.165 "trsvcid": "42736" 00:22:45.165 }, 00:22:45.165 "auth": { 00:22:45.165 "state": "completed", 00:22:45.165 "digest": "sha512", 00:22:45.165 "dhgroup": "ffdhe4096" 00:22:45.165 } 00:22:45.165 } 00:22:45.165 ]' 00:22:45.165 12:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:45.165 12:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:45.165 12:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:45.427 12:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:45.427 12:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:45.427 12:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:45.427 12:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:45.427 12:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:45.685 12:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGI3NGI4ODcwMTdiZjE3NTA0MGNlZWFlYTk0YmUwZWNmMzQyMjA3ZGE4YzFlZDgx1hOX6A==: --dhchap-ctrl-secret DHHC-1:03:YTNlNGRiNjc2ODFhY2JhYzE0NTFhNzY5NGNiNGMwN2RmNjA3NzVkNjdlMDdlMmRhZmNhZTc5YWU5ODgzNTM3Zlh5EVM=: 00:22:45.685 12:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGI3NGI4ODcwMTdiZjE3NTA0MGNlZWFlYTk0YmUwZWNmMzQyMjA3ZGE4YzFlZDgx1hOX6A==: --dhchap-ctrl-secret DHHC-1:03:YTNlNGRiNjc2ODFhY2JhYzE0NTFhNzY5NGNiNGMwN2RmNjA3NzVkNjdlMDdlMmRhZmNhZTc5YWU5ODgzNTM3Zlh5EVM=: 00:22:46.252 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:46.252 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:46.252 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:46.252 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.252 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.252 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
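Every iteration in this part of the log has the same shape: remove the host NQN from the subsystem, narrow the host's allowed DH-HMAC-CHAP parameters with bdev_nvme_set_options, then re-register and re-attach with the next key pair. Reconstructed from the target/auth.sh line numbers visible in the trace (@119–@123), the loop being executed looks like the sketch below; the keys, ckeys, and dhgroups arrays are defined earlier in the script and only their expansions appear here, so the concrete values in the comments are read off the surrounding iterations rather than the source:

# One pass per (dhgroup, keyid) pair; hostrpc drives the host-side
# bdev_nvme RPCs over /var/tmp/host.sock, as seen throughout the trace.
for dhgroup in "${dhgroups[@]}"; do   # ffdhe3072, ffdhe4096, ffdhe6144 in this stretch
  for keyid in "${!keys[@]}"; do      # indices 0..3 in this run
    hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
    connect_authenticate sha512 "$dhgroup" "$keyid"
  done
done

connect_authenticate is the helper doing the heavy lifting: it calls nvmf_subsystem_add_host with --dhchap-key key$keyid, adds --dhchap-ctrlr-key ckey$keyid only when a controller key is defined for that index (the ${ckeys[$3]:+...} expansion in the trace, which is why the key3 passes omit it), then attaches nvme0 with the matching host-side keys and runs the qpair checks shown above.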
00:22:46.252 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:46.252 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:46.252 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:46.252 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:22:46.252 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:46.252 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:46.252 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:46.252 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:46.252 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:46.252 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:46.252 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.252 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.252 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.252 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:46.253 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:46.253 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:46.512 00:22:46.770 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:46.770 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:46.770 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:46.770 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.770 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:46.770 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.770 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.770 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.770 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:46.770 { 00:22:46.770 "cntlid": 123, 00:22:46.770 "qid": 0, 00:22:46.770 "state": "enabled", 00:22:46.770 "thread": "nvmf_tgt_poll_group_000", 00:22:46.770 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:22:46.770 "listen_address": { 00:22:46.770 "trtype": "TCP", 00:22:46.770 "adrfam": "IPv4", 00:22:46.770 "traddr": "10.0.0.2", 00:22:46.770 "trsvcid": "4420" 00:22:46.770 }, 00:22:46.770 "peer_address": { 00:22:46.770 "trtype": "TCP", 00:22:46.770 "adrfam": "IPv4", 00:22:46.770 "traddr": "10.0.0.1", 00:22:46.770 "trsvcid": "42760" 00:22:46.770 }, 00:22:46.770 "auth": { 00:22:46.770 "state": "completed", 00:22:46.770 "digest": "sha512", 00:22:46.770 "dhgroup": "ffdhe4096" 00:22:46.770 } 00:22:46.770 } 00:22:46.770 ]' 00:22:46.770 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:47.029 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:47.029 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:47.029 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:47.029 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:47.029 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:47.029 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:47.029 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:47.288 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGViMjVlNTA4ZTdiZTBhZTAyYTM4MTRkY2ZmNTI3ZWTrNpHz: --dhchap-ctrl-secret DHHC-1:02:ZThlMjA4YjFhZWE2NzgxODc1YmY1Yzc2ZDEwZDVhYzcyMTY3MWQzMmFjZTQwMTUy9S132A==: 00:22:47.288 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGViMjVlNTA4ZTdiZTBhZTAyYTM4MTRkY2ZmNTI3ZWTrNpHz: --dhchap-ctrl-secret DHHC-1:02:ZThlMjA4YjFhZWE2NzgxODc1YmY1Yzc2ZDEwZDVhYzcyMTY3MWQzMmFjZTQwMTUy9S132A==: 00:22:47.857 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:47.857 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:47.857 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:47.857 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.857 12:44:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.857 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.857 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:47.857 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:47.857 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:47.857 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:22:47.857 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:47.857 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:47.857 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:47.857 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:47.857 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:47.857 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:47.857 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.857 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.116 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.116 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:48.116 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:48.116 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:48.375 00:22:48.375 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:48.375 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:48.375 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:48.375 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.375 12:44:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:48.375 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.375 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.375 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.375 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:48.375 { 00:22:48.375 "cntlid": 125, 00:22:48.375 "qid": 0, 00:22:48.375 "state": "enabled", 00:22:48.375 "thread": "nvmf_tgt_poll_group_000", 00:22:48.375 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:22:48.375 "listen_address": { 00:22:48.375 "trtype": "TCP", 00:22:48.375 "adrfam": "IPv4", 00:22:48.375 "traddr": "10.0.0.2", 00:22:48.375 "trsvcid": "4420" 00:22:48.375 }, 00:22:48.375 "peer_address": { 00:22:48.375 "trtype": "TCP", 00:22:48.375 "adrfam": "IPv4", 00:22:48.375 "traddr": "10.0.0.1", 00:22:48.375 "trsvcid": "42800" 00:22:48.375 }, 00:22:48.375 "auth": { 00:22:48.375 "state": "completed", 00:22:48.375 "digest": "sha512", 00:22:48.375 "dhgroup": "ffdhe4096" 00:22:48.375 } 00:22:48.375 } 00:22:48.375 ]' 00:22:48.375 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:48.634 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:48.634 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:48.634 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:48.634 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:48.634 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:48.634 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:48.634 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:48.893 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTNhZjFmN2Q4OTVmMTE1NjljZDhmNWNmNGM3YmJhZGFiMGExNWVlNDYwNzc0Zjhkjnv50w==: --dhchap-ctrl-secret DHHC-1:01:NWYzNWZiMDFlZjg2ZWQ4OGNjMGQ4MGYwMGZiMzZlZDnMCX9k: 00:22:48.893 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTNhZjFmN2Q4OTVmMTE1NjljZDhmNWNmNGM3YmJhZGFiMGExNWVlNDYwNzc0Zjhkjnv50w==: --dhchap-ctrl-secret DHHC-1:01:NWYzNWZiMDFlZjg2ZWQ4OGNjMGQ4MGYwMGZiMzZlZDnMCX9k: 00:22:49.461 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:49.461 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:49.461 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:49.461 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.461 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.461 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.461 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:49.461 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:49.461 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:49.461 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:22:49.461 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:49.461 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:49.461 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:49.461 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:49.461 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:49.461 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:22:49.461 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.461 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.720 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.720 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:49.720 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:49.720 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:49.979 00:22:49.979 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:49.979 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:49.979 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:49.979 12:44:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.979 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:49.979 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.979 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.979 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.979 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:49.979 { 00:22:49.979 "cntlid": 127, 00:22:49.979 "qid": 0, 00:22:49.979 "state": "enabled", 00:22:49.979 "thread": "nvmf_tgt_poll_group_000", 00:22:49.979 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:22:49.979 "listen_address": { 00:22:49.979 "trtype": "TCP", 00:22:49.979 "adrfam": "IPv4", 00:22:49.979 "traddr": "10.0.0.2", 00:22:49.979 "trsvcid": "4420" 00:22:49.979 }, 00:22:49.979 "peer_address": { 00:22:49.979 "trtype": "TCP", 00:22:49.979 "adrfam": "IPv4", 00:22:49.979 "traddr": "10.0.0.1", 00:22:49.979 "trsvcid": "42836" 00:22:49.979 }, 00:22:49.979 "auth": { 00:22:49.979 "state": "completed", 00:22:49.979 "digest": "sha512", 00:22:49.979 "dhgroup": "ffdhe4096" 00:22:49.979 } 00:22:49.979 } 00:22:49.979 ]' 00:22:49.979 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:50.238 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:50.238 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:50.238 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:50.238 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:50.238 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:50.238 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:50.238 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:50.496 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjdkYjBkODY4ZTcwZTAyMDBhOTllMTI1ZTNhODFkODU4MTNhMWVjODg1OTlmNjBkZDM4ZjJmOWYzZDE4NDBkZl0k4bM=: 00:22:50.496 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjdkYjBkODY4ZTcwZTAyMDBhOTllMTI1ZTNhODFkODU4MTNhMWVjODg1OTlmNjBkZDM4ZjJmOWYzZDE4NDBkZl0k4bM=: 00:22:51.064 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:51.064 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:51.064 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:51.064 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.064 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.064 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.064 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:51.064 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:51.064 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:51.064 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:51.323 12:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:22:51.323 12:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:51.323 12:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:51.323 12:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:51.323 12:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:51.323 12:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:51.323 12:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:51.323 12:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.323 12:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.323 12:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.323 12:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:51.323 12:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:51.323 12:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:51.582 00:22:51.582 12:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:51.582 12:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:51.582 12:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:51.841 12:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:51.841 12:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:51.841 12:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.842 12:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.842 12:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.842 12:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:51.842 { 00:22:51.842 "cntlid": 129, 00:22:51.842 "qid": 0, 00:22:51.842 "state": "enabled", 00:22:51.842 "thread": "nvmf_tgt_poll_group_000", 00:22:51.842 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:22:51.842 "listen_address": { 00:22:51.842 "trtype": "TCP", 00:22:51.842 "adrfam": "IPv4", 00:22:51.842 "traddr": "10.0.0.2", 00:22:51.842 "trsvcid": "4420" 00:22:51.842 }, 00:22:51.842 "peer_address": { 00:22:51.842 "trtype": "TCP", 00:22:51.842 "adrfam": "IPv4", 00:22:51.842 "traddr": "10.0.0.1", 00:22:51.842 "trsvcid": "54048" 00:22:51.842 }, 00:22:51.842 "auth": { 00:22:51.842 "state": "completed", 00:22:51.842 "digest": "sha512", 00:22:51.842 "dhgroup": "ffdhe6144" 00:22:51.842 } 00:22:51.842 } 00:22:51.842 ]' 00:22:51.842 12:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:51.842 12:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:51.842 12:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:51.842 12:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:51.842 12:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:51.842 12:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:51.842 12:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:51.842 12:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:52.101 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGI3NGI4ODcwMTdiZjE3NTA0MGNlZWFlYTk0YmUwZWNmMzQyMjA3ZGE4YzFlZDgx1hOX6A==: --dhchap-ctrl-secret DHHC-1:03:YTNlNGRiNjc2ODFhY2JhYzE0NTFhNzY5NGNiNGMwN2RmNjA3NzVkNjdlMDdlMmRhZmNhZTc5YWU5ODgzNTM3Zlh5EVM=: 00:22:52.101 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGI3NGI4ODcwMTdiZjE3NTA0MGNlZWFlYTk0YmUwZWNmMzQyMjA3ZGE4YzFlZDgx1hOX6A==: --dhchap-ctrl-secret 
DHHC-1:03:YTNlNGRiNjc2ODFhY2JhYzE0NTFhNzY5NGNiNGMwN2RmNjA3NzVkNjdlMDdlMmRhZmNhZTc5YWU5ODgzNTM3Zlh5EVM=: 00:22:52.668 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:52.668 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:52.669 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:52.669 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.669 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.669 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.669 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:52.669 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:52.669 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:52.927 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:22:52.927 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:52.927 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:52.927 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:52.927 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:52.927 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:52.927 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:52.927 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.927 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.927 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.927 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:52.927 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:52.927 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:53.186 00:22:53.186 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:53.186 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:53.186 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:53.445 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.445 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:53.445 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.445 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.445 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.445 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:53.445 { 00:22:53.445 "cntlid": 131, 00:22:53.445 "qid": 0, 00:22:53.445 "state": "enabled", 00:22:53.445 "thread": "nvmf_tgt_poll_group_000", 00:22:53.445 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:22:53.445 "listen_address": { 00:22:53.445 "trtype": "TCP", 00:22:53.445 "adrfam": "IPv4", 00:22:53.445 "traddr": "10.0.0.2", 00:22:53.445 "trsvcid": "4420" 00:22:53.445 }, 00:22:53.445 "peer_address": { 00:22:53.445 "trtype": "TCP", 00:22:53.445 "adrfam": "IPv4", 00:22:53.445 "traddr": "10.0.0.1", 00:22:53.445 "trsvcid": "54074" 00:22:53.445 }, 00:22:53.445 "auth": { 00:22:53.445 "state": "completed", 00:22:53.445 "digest": "sha512", 00:22:53.445 "dhgroup": "ffdhe6144" 00:22:53.445 } 00:22:53.445 } 00:22:53.445 ]' 00:22:53.445 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:53.445 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:53.445 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:53.445 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:53.445 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:53.445 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:53.445 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:53.445 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:53.704 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGViMjVlNTA4ZTdiZTBhZTAyYTM4MTRkY2ZmNTI3ZWTrNpHz: --dhchap-ctrl-secret DHHC-1:02:ZThlMjA4YjFhZWE2NzgxODc1YmY1Yzc2ZDEwZDVhYzcyMTY3MWQzMmFjZTQwMTUy9S132A==: 00:22:53.704 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGViMjVlNTA4ZTdiZTBhZTAyYTM4MTRkY2ZmNTI3ZWTrNpHz: --dhchap-ctrl-secret DHHC-1:02:ZThlMjA4YjFhZWE2NzgxODc1YmY1Yzc2ZDEwZDVhYzcyMTY3MWQzMmFjZTQwMTUy9S132A==: 00:22:54.272 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:54.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:54.272 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:54.272 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.272 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.272 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.272 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:54.272 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:54.272 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:54.531 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:22:54.531 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:54.531 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:54.531 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:54.531 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:54.531 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:54.531 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:54.531 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.531 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.531 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.531 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:54.531 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:54.531 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:54.789 00:22:54.790 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:54.790 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:54.790 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:55.049 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.049 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:55.049 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.049 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.049 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.049 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:55.049 { 00:22:55.049 "cntlid": 133, 00:22:55.049 "qid": 0, 00:22:55.049 "state": "enabled", 00:22:55.049 "thread": "nvmf_tgt_poll_group_000", 00:22:55.049 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:22:55.049 "listen_address": { 00:22:55.049 "trtype": "TCP", 00:22:55.049 "adrfam": "IPv4", 00:22:55.049 "traddr": "10.0.0.2", 00:22:55.049 "trsvcid": "4420" 00:22:55.049 }, 00:22:55.049 "peer_address": { 00:22:55.049 "trtype": "TCP", 00:22:55.049 "adrfam": "IPv4", 00:22:55.049 "traddr": "10.0.0.1", 00:22:55.049 "trsvcid": "54110" 00:22:55.049 }, 00:22:55.049 "auth": { 00:22:55.049 "state": "completed", 00:22:55.049 "digest": "sha512", 00:22:55.049 "dhgroup": "ffdhe6144" 00:22:55.049 } 00:22:55.049 } 00:22:55.049 ]' 00:22:55.049 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:55.049 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:55.049 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:55.049 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:55.049 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:55.308 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:55.308 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:55.308 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:55.308 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTNhZjFmN2Q4OTVmMTE1NjljZDhmNWNmNGM3YmJhZGFiMGExNWVlNDYwNzc0Zjhkjnv50w==: --dhchap-ctrl-secret 
DHHC-1:01:NWYzNWZiMDFlZjg2ZWQ4OGNjMGQ4MGYwMGZiMzZlZDnMCX9k: 00:22:55.308 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTNhZjFmN2Q4OTVmMTE1NjljZDhmNWNmNGM3YmJhZGFiMGExNWVlNDYwNzc0Zjhkjnv50w==: --dhchap-ctrl-secret DHHC-1:01:NWYzNWZiMDFlZjg2ZWQ4OGNjMGQ4MGYwMGZiMzZlZDnMCX9k: 00:22:55.876 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:55.876 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:55.876 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:55.876 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.876 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.136 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.136 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:56.136 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:56.136 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:56.136 12:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:22:56.136 12:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:56.136 12:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:56.136 12:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:56.136 12:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:56.136 12:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:56.136 12:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:22:56.136 12:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.136 12:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.136 12:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.136 12:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:56.136 12:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:22:56.136 12:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:56.704 00:22:56.704 12:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:56.704 12:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:56.704 12:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:56.704 12:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:56.704 12:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:56.704 12:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.704 12:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.704 12:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.704 12:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:56.704 { 00:22:56.704 "cntlid": 135, 00:22:56.704 "qid": 0, 00:22:56.704 "state": "enabled", 00:22:56.704 "thread": "nvmf_tgt_poll_group_000", 00:22:56.704 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:22:56.704 "listen_address": { 00:22:56.704 "trtype": "TCP", 00:22:56.704 "adrfam": "IPv4", 00:22:56.704 "traddr": "10.0.0.2", 00:22:56.704 "trsvcid": "4420" 00:22:56.704 }, 00:22:56.704 "peer_address": { 00:22:56.704 "trtype": "TCP", 00:22:56.704 "adrfam": "IPv4", 00:22:56.704 "traddr": "10.0.0.1", 00:22:56.704 "trsvcid": "54146" 00:22:56.704 }, 00:22:56.704 "auth": { 00:22:56.704 "state": "completed", 00:22:56.704 "digest": "sha512", 00:22:56.704 "dhgroup": "ffdhe6144" 00:22:56.704 } 00:22:56.704 } 00:22:56.704 ]' 00:22:56.704 12:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:56.704 12:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:56.704 12:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:56.963 12:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:56.963 12:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:56.963 12:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:56.963 12:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:56.963 12:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:56.963 12:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YjdkYjBkODY4ZTcwZTAyMDBhOTllMTI1ZTNhODFkODU4MTNhMWVjODg1OTlmNjBkZDM4ZjJmOWYzZDE4NDBkZl0k4bM=: 00:22:56.963 12:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjdkYjBkODY4ZTcwZTAyMDBhOTllMTI1ZTNhODFkODU4MTNhMWVjODg1OTlmNjBkZDM4ZjJmOWYzZDE4NDBkZl0k4bM=: 00:22:57.532 12:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:57.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:57.791 12:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:57.791 12:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.791 12:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.791 12:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.791 12:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:57.791 12:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:57.791 12:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:57.791 12:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:57.791 12:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:22:57.791 12:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:57.791 12:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:57.791 12:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:57.791 12:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:57.791 12:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:57.791 12:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:57.791 12:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.791 12:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.791 12:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.791 12:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:57.791 12:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:57.791 12:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:58.359 00:22:58.359 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:58.359 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:58.359 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:58.618 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:58.618 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:58.618 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.618 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.618 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.618 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:58.618 { 00:22:58.618 "cntlid": 137, 00:22:58.618 "qid": 0, 00:22:58.618 "state": "enabled", 00:22:58.618 "thread": "nvmf_tgt_poll_group_000", 00:22:58.618 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:22:58.618 "listen_address": { 00:22:58.618 "trtype": "TCP", 00:22:58.618 "adrfam": "IPv4", 00:22:58.618 "traddr": "10.0.0.2", 00:22:58.618 "trsvcid": "4420" 00:22:58.618 }, 00:22:58.618 "peer_address": { 00:22:58.618 "trtype": "TCP", 00:22:58.618 "adrfam": "IPv4", 00:22:58.618 "traddr": "10.0.0.1", 00:22:58.618 "trsvcid": "54174" 00:22:58.618 }, 00:22:58.618 "auth": { 00:22:58.618 "state": "completed", 00:22:58.618 "digest": "sha512", 00:22:58.618 "dhgroup": "ffdhe8192" 00:22:58.618 } 00:22:58.618 } 00:22:58.618 ]' 00:22:58.618 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:58.618 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:58.618 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:58.618 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:58.618 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:58.618 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:58.618 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:58.618 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:58.877 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGI3NGI4ODcwMTdiZjE3NTA0MGNlZWFlYTk0YmUwZWNmMzQyMjA3ZGE4YzFlZDgx1hOX6A==: --dhchap-ctrl-secret DHHC-1:03:YTNlNGRiNjc2ODFhY2JhYzE0NTFhNzY5NGNiNGMwN2RmNjA3NzVkNjdlMDdlMmRhZmNhZTc5YWU5ODgzNTM3Zlh5EVM=: 00:22:58.877 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGI3NGI4ODcwMTdiZjE3NTA0MGNlZWFlYTk0YmUwZWNmMzQyMjA3ZGE4YzFlZDgx1hOX6A==: --dhchap-ctrl-secret DHHC-1:03:YTNlNGRiNjc2ODFhY2JhYzE0NTFhNzY5NGNiNGMwN2RmNjA3NzVkNjdlMDdlMmRhZmNhZTc5YWU5ODgzNTM3Zlh5EVM=: 00:22:59.444 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:59.445 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:59.445 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:59.445 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.445 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.445 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.445 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:59.445 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:59.445 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:59.703 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:22:59.703 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:59.703 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:59.703 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:59.703 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:59.703 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:59.703 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:59.703 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.703 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.703 12:44:25 
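
The iterations traced above all run the same connect_authenticate pattern from target/auth.sh, once per digest/dhgroup/key combination. A condensed sketch of one pass, using the RPC paths and NQNs copied from the trace (the rpc/hostsock variable names are illustrative shorthand, not the script's own):

# One connect_authenticate pass, reconstructed from the trace above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562

# Restrict the host (initiator) side to a single digest/dhgroup combination.
$rpc -s $hostsock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

# Authorize the host on the target with a key pair; the ctrlr key enables
# bidirectional (controller-to-host) authentication.
$rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Attach a controller from the host side; this is what drives DH-HMAC-CHAP.
$rpc -s $hostsock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
  -q $hostnqn -n $subnqn -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Verify the negotiated session, then tear down for the next combination.
$rpc nvmf_subsystem_get_qpairs $subnqn | jq -r '.[0].auth.state'   # expect "completed"
$rpc -s $hostsock bdev_nvme_detach_controller nvme0
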
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.703 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:59.703 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:59.703 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:00.271 00:23:00.271 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:00.271 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:00.271 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:00.271 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.271 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:00.271 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.271 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.531 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.531 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:00.531 { 00:23:00.531 "cntlid": 139, 00:23:00.531 "qid": 0, 00:23:00.531 "state": "enabled", 00:23:00.531 "thread": "nvmf_tgt_poll_group_000", 00:23:00.531 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:23:00.531 "listen_address": { 00:23:00.531 "trtype": "TCP", 00:23:00.531 "adrfam": "IPv4", 00:23:00.531 "traddr": "10.0.0.2", 00:23:00.531 "trsvcid": "4420" 00:23:00.531 }, 00:23:00.531 "peer_address": { 00:23:00.531 "trtype": "TCP", 00:23:00.531 "adrfam": "IPv4", 00:23:00.531 "traddr": "10.0.0.1", 00:23:00.531 "trsvcid": "54192" 00:23:00.531 }, 00:23:00.531 "auth": { 00:23:00.531 "state": "completed", 00:23:00.531 "digest": "sha512", 00:23:00.531 "dhgroup": "ffdhe8192" 00:23:00.531 } 00:23:00.531 } 00:23:00.531 ]' 00:23:00.531 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:00.531 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:00.531 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:00.531 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:00.531 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:00.531 12:44:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:00.531 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:00.531 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:00.790 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGViMjVlNTA4ZTdiZTBhZTAyYTM4MTRkY2ZmNTI3ZWTrNpHz: --dhchap-ctrl-secret DHHC-1:02:ZThlMjA4YjFhZWE2NzgxODc1YmY1Yzc2ZDEwZDVhYzcyMTY3MWQzMmFjZTQwMTUy9S132A==: 00:23:00.790 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGViMjVlNTA4ZTdiZTBhZTAyYTM4MTRkY2ZmNTI3ZWTrNpHz: --dhchap-ctrl-secret DHHC-1:02:ZThlMjA4YjFhZWE2NzgxODc1YmY1Yzc2ZDEwZDVhYzcyMTY3MWQzMmFjZTQwMTUy9S132A==: 00:23:01.358 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:01.358 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:01.358 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:23:01.358 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.358 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.358 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.358 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:01.358 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:01.358 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:01.617 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:23:01.617 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:01.617 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:01.617 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:01.617 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:01.617 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:01.617 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:01.617 12:44:27 
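
Each pass is then repeated through the kernel initiator, which is the nvme connect/disconnect pair appearing throughout the trace. A sketch of that leg, with the secrets elided (-i 1 requests a single I/O queue, -l 0 sets ctrl-loss-tmo to zero):

# Kernel-initiator leg of the same test, as in the nvme connect lines above.
# The DHHC-1 secrets are elided here; the ":01:" and ":02:" fields in the
# real values mark the SHA-256 and SHA-384 transformations of the raw key.
hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
  -q $hostnqn --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 \
  --dhchap-secret 'DHHC-1:01:<host key, elided>:' \
  --dhchap-ctrl-secret 'DHHC-1:02:<controller key, elided>:'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
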
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.617 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.617 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.617 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:01.617 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:01.617 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:01.876 00:23:02.135 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:02.135 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:02.135 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:02.135 12:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.135 12:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:02.135 12:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.135 12:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.135 12:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.135 12:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:02.135 { 00:23:02.135 "cntlid": 141, 00:23:02.135 "qid": 0, 00:23:02.135 "state": "enabled", 00:23:02.135 "thread": "nvmf_tgt_poll_group_000", 00:23:02.135 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:23:02.135 "listen_address": { 00:23:02.135 "trtype": "TCP", 00:23:02.135 "adrfam": "IPv4", 00:23:02.135 "traddr": "10.0.0.2", 00:23:02.135 "trsvcid": "4420" 00:23:02.135 }, 00:23:02.135 "peer_address": { 00:23:02.135 "trtype": "TCP", 00:23:02.135 "adrfam": "IPv4", 00:23:02.135 "traddr": "10.0.0.1", 00:23:02.135 "trsvcid": "48116" 00:23:02.135 }, 00:23:02.135 "auth": { 00:23:02.135 "state": "completed", 00:23:02.135 "digest": "sha512", 00:23:02.135 "dhgroup": "ffdhe8192" 00:23:02.135 } 00:23:02.135 } 00:23:02.135 ]' 00:23:02.135 12:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:02.394 12:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:02.394 12:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:02.394 12:44:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:02.394 12:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:02.394 12:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:02.394 12:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:02.394 12:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:02.652 12:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTNhZjFmN2Q4OTVmMTE1NjljZDhmNWNmNGM3YmJhZGFiMGExNWVlNDYwNzc0Zjhkjnv50w==: --dhchap-ctrl-secret DHHC-1:01:NWYzNWZiMDFlZjg2ZWQ4OGNjMGQ4MGYwMGZiMzZlZDnMCX9k: 00:23:02.652 12:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTNhZjFmN2Q4OTVmMTE1NjljZDhmNWNmNGM3YmJhZGFiMGExNWVlNDYwNzc0Zjhkjnv50w==: --dhchap-ctrl-secret DHHC-1:01:NWYzNWZiMDFlZjg2ZWQ4OGNjMGQ4MGYwMGZiMzZlZDnMCX9k: 00:23:03.218 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:03.218 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:03.218 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:23:03.218 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.218 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.218 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.218 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:03.218 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:03.218 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:03.218 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:23:03.218 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:03.218 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:03.218 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:03.218 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:03.218 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:03.218 12:44:29 
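
The jq checks in the trace parse the nvmf_subsystem_get_qpairs output shown earlier (cntlid, listen/peer addresses, and an auth object). Pulled out of the test harness, the verification amounts to:

# Standalone form of the qpair verification: fail if the session did not
# authenticate with the expected digest, DH group, and final state.
# ($rpc as defined in the first sketch above.)
qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]] || exit 1
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]] || exit 1
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]] || exit 1
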
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:23:03.218 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.218 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.218 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.218 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:03.219 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:03.219 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:03.786 00:23:03.786 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:03.786 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:03.786 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:04.045 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.045 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:04.045 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.045 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.045 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.045 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:04.045 { 00:23:04.045 "cntlid": 143, 00:23:04.045 "qid": 0, 00:23:04.045 "state": "enabled", 00:23:04.045 "thread": "nvmf_tgt_poll_group_000", 00:23:04.045 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:23:04.045 "listen_address": { 00:23:04.045 "trtype": "TCP", 00:23:04.045 "adrfam": "IPv4", 00:23:04.045 "traddr": "10.0.0.2", 00:23:04.045 "trsvcid": "4420" 00:23:04.045 }, 00:23:04.045 "peer_address": { 00:23:04.045 "trtype": "TCP", 00:23:04.045 "adrfam": "IPv4", 00:23:04.045 "traddr": "10.0.0.1", 00:23:04.045 "trsvcid": "48144" 00:23:04.045 }, 00:23:04.045 "auth": { 00:23:04.045 "state": "completed", 00:23:04.045 "digest": "sha512", 00:23:04.045 "dhgroup": "ffdhe8192" 00:23:04.045 } 00:23:04.045 } 00:23:04.045 ]' 00:23:04.045 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:04.045 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:04.045 
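
The secrets themselves follow the DHHC-1:<t>: format from the NVMe in-band authentication spec, where <t> names the transformation applied to the key material: 00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512 (so the :03: values in this trace are SHA-512-transformed keys). nvme-cli can generate such secrets; a sketch, with flag spellings that should be checked against nvme gen-dhchap-key --help on the installed version:

# Generate a 64-byte DH-HMAC-CHAP secret with the SHA-512 transform,
# bound to the host NQN (produces a DHHC-1:03:...: string).
nvme gen-dhchap-key --hmac=3 --key-length=64 \
  --nqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
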
12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:04.045 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:04.045 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:04.045 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:04.046 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:04.046 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:04.305 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjdkYjBkODY4ZTcwZTAyMDBhOTllMTI1ZTNhODFkODU4MTNhMWVjODg1OTlmNjBkZDM4ZjJmOWYzZDE4NDBkZl0k4bM=: 00:23:04.305 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjdkYjBkODY4ZTcwZTAyMDBhOTllMTI1ZTNhODFkODU4MTNhMWVjODg1OTlmNjBkZDM4ZjJmOWYzZDE4NDBkZl0k4bM=: 00:23:04.872 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:04.872 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:04.872 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:23:04.872 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.872 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.872 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.872 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:23:04.872 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:23:04.872 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:23:04.872 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:04.872 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:04.872 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:05.131 12:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:23:05.131 12:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:05.131 12:44:31 
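
After the per-combination loops, the trace switches to a permissive configuration: target/auth.sh joins its digest and dhgroup arrays with IFS=, (the printf %s lines above) and hands the comma-separated lists to a single bdev_nvme_set_options call, so any listed combination may be negotiated. The idiom, reconstructed:

# Join bash arrays with commas via IFS and pass the full lists at once.
# ($rpc and $hostsock as defined in the first sketch above.)
digests=(sha256 sha384 sha512)
dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
IFS=,
$rpc -s $hostsock bdev_nvme_set_options \
  --dhchap-digests "${digests[*]}" \
  --dhchap-dhgroups "${dhgroups[*]}"
unset IFS
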
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:05.131 12:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:05.132 12:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:05.132 12:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:05.132 12:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:05.132 12:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.132 12:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.132 12:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.132 12:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:05.132 12:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:05.132 12:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:05.700 00:23:05.700 12:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:05.700 12:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:05.700 12:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:05.959 12:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.959 12:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:05.959 12:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.959 12:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.959 12:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.959 12:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:05.959 { 00:23:05.959 "cntlid": 145, 00:23:05.959 "qid": 0, 00:23:05.959 "state": "enabled", 00:23:05.959 "thread": "nvmf_tgt_poll_group_000", 00:23:05.959 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:23:05.959 "listen_address": { 00:23:05.959 "trtype": "TCP", 00:23:05.959 "adrfam": "IPv4", 00:23:05.959 "traddr": "10.0.0.2", 00:23:05.959 "trsvcid": "4420" 00:23:05.959 }, 00:23:05.959 "peer_address": { 00:23:05.959 
"trtype": "TCP", 00:23:05.959 "adrfam": "IPv4", 00:23:05.959 "traddr": "10.0.0.1", 00:23:05.959 "trsvcid": "48176" 00:23:05.959 }, 00:23:05.959 "auth": { 00:23:05.959 "state": "completed", 00:23:05.959 "digest": "sha512", 00:23:05.959 "dhgroup": "ffdhe8192" 00:23:05.959 } 00:23:05.959 } 00:23:05.959 ]' 00:23:05.959 12:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:05.959 12:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:05.959 12:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:05.959 12:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:05.959 12:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:05.959 12:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:05.959 12:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:05.959 12:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:06.218 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGI3NGI4ODcwMTdiZjE3NTA0MGNlZWFlYTk0YmUwZWNmMzQyMjA3ZGE4YzFlZDgx1hOX6A==: --dhchap-ctrl-secret DHHC-1:03:YTNlNGRiNjc2ODFhY2JhYzE0NTFhNzY5NGNiNGMwN2RmNjA3NzVkNjdlMDdlMmRhZmNhZTc5YWU5ODgzNTM3Zlh5EVM=: 00:23:06.218 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGI3NGI4ODcwMTdiZjE3NTA0MGNlZWFlYTk0YmUwZWNmMzQyMjA3ZGE4YzFlZDgx1hOX6A==: --dhchap-ctrl-secret DHHC-1:03:YTNlNGRiNjc2ODFhY2JhYzE0NTFhNzY5NGNiNGMwN2RmNjA3NzVkNjdlMDdlMmRhZmNhZTc5YWU5ODgzNTM3Zlh5EVM=: 00:23:06.787 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:06.787 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:06.787 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:23:06.787 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.787 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.787 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.787 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 00:23:06.787 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.787 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.787 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.787 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:23:06.787 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:06.787 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:23:06.787 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:23:06.787 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:06.787 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:23:06.787 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:06.787 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:23:06.787 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:23:06.787 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:23:07.356 request: 00:23:07.356 { 00:23:07.356 "name": "nvme0", 00:23:07.356 "trtype": "tcp", 00:23:07.356 "traddr": "10.0.0.2", 00:23:07.356 "adrfam": "ipv4", 00:23:07.356 "trsvcid": "4420", 00:23:07.356 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:07.356 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:23:07.356 "prchk_reftag": false, 00:23:07.356 "prchk_guard": false, 00:23:07.356 "hdgst": false, 00:23:07.356 "ddgst": false, 00:23:07.356 "dhchap_key": "key2", 00:23:07.356 "allow_unrecognized_csi": false, 00:23:07.356 "method": "bdev_nvme_attach_controller", 00:23:07.356 "req_id": 1 00:23:07.356 } 00:23:07.356 Got JSON-RPC error response 00:23:07.356 response: 00:23:07.356 { 00:23:07.356 "code": -5, 00:23:07.356 "message": "Input/output error" 00:23:07.356 } 00:23:07.356 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:07.356 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:07.356 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:07.356 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:07.356 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:23:07.356 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.356 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.356 12:44:33 
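
This is the first negative case: the target was re-provisioned with key1 only, so an attach that presents key2 has to fail, and it does, with the JSON-RPC -5 (Input/output error) response captured above once the target rejects the DH-HMAC-CHAP response value. The NOT wrapper from autotest_common.sh inverts the exit status so an expected failure passes the test; a simplified equivalent:

# Simplified stand-in for autotest_common.sh's NOT(): succeed only when the
# wrapped command fails (the real helper also inspects the exit-code range,
# as the "(( es > 128 ))" lines in the trace show).
NOT() {
  if "$@"; then
    return 1   # unexpectedly succeeded
  fi
  return 0     # failed, as the test requires
}

# Host presents key2 while the target only accepts key1: attach must fail.
NOT $rpc -s $hostsock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 \
  -s 4420 -q $hostnqn -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2
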
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.356 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:07.356 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.356 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.356 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.356 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:07.356 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:07.356 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:07.356 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:23:07.356 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:07.356 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:23:07.356 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:07.356 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:07.356 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:07.356 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:07.615 request: 00:23:07.615 { 00:23:07.615 "name": "nvme0", 00:23:07.615 "trtype": "tcp", 00:23:07.615 "traddr": "10.0.0.2", 00:23:07.615 "adrfam": "ipv4", 00:23:07.615 "trsvcid": "4420", 00:23:07.615 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:07.615 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:23:07.615 "prchk_reftag": false, 00:23:07.615 "prchk_guard": false, 00:23:07.615 "hdgst": false, 00:23:07.615 "ddgst": false, 00:23:07.615 "dhchap_key": "key1", 00:23:07.615 "dhchap_ctrlr_key": "ckey2", 00:23:07.615 "allow_unrecognized_csi": false, 00:23:07.615 "method": "bdev_nvme_attach_controller", 00:23:07.615 "req_id": 1 00:23:07.615 } 00:23:07.615 Got JSON-RPC error response 00:23:07.615 response: 00:23:07.615 { 00:23:07.615 "code": -5, 00:23:07.615 "message": "Input/output error" 00:23:07.615 } 00:23:07.615 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:07.615 12:44:33 
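
The second negative case isolates the reverse direction: the host key matches (key1 on both sides), but the host validates the controller's proof against ckey2 while the target answers with ckey1, so mutual authentication still fails with the same -5 error. In sketch form:

# Unidirectional key agrees; the controller key does not. The target
# authenticates the host, but the host then rejects the controller's
# challenge response, so the attach fails just like the key2 case.
NOT $rpc -s $hostsock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 \
  -s 4420 -q $hostnqn -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
  --dhchap-key key1 --dhchap-ctrlr-key ckey2
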
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:07.615 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:07.615 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:07.615 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:23:07.615 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.615 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.875 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.875 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 00:23:07.875 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.875 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.875 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.875 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:07.875 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:07.875 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:07.875 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:23:07.875 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:07.875 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:23:07.875 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:07.875 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:07.875 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:07.875 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:08.134 request: 00:23:08.134 { 00:23:08.134 "name": "nvme0", 00:23:08.134 "trtype": "tcp", 00:23:08.134 "traddr": "10.0.0.2", 00:23:08.134 "adrfam": "ipv4", 00:23:08.134 "trsvcid": "4420", 00:23:08.134 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:08.134 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:23:08.134 "prchk_reftag": false, 00:23:08.134 "prchk_guard": false, 00:23:08.134 "hdgst": false, 00:23:08.134 "ddgst": false, 00:23:08.134 "dhchap_key": "key1", 00:23:08.134 "dhchap_ctrlr_key": "ckey1", 00:23:08.134 "allow_unrecognized_csi": false, 00:23:08.134 "method": "bdev_nvme_attach_controller", 00:23:08.134 "req_id": 1 00:23:08.134 } 00:23:08.134 Got JSON-RPC error response 00:23:08.134 response: 00:23:08.134 { 00:23:08.134 "code": -5, 00:23:08.134 "message": "Input/output error" 00:23:08.134 } 00:23:08.134 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:08.134 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:08.134 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:08.134 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:08.134 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:23:08.134 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.134 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.134 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.134 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 362798 00:23:08.134 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 362798 ']' 00:23:08.134 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 362798 00:23:08.134 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:23:08.134 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:08.134 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 362798 00:23:08.394 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:08.394 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:08.394 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 362798' 00:23:08.394 killing process with pid 362798 00:23:08.394 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 362798 00:23:08.394 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 362798 00:23:08.394 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:23:08.394 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:08.394 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:08.394 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:23:08.394 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=384124 00:23:08.394 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:23:08.394 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 384124 00:23:08.394 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 384124 ']' 00:23:08.394 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:08.394 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:08.394 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:08.394 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:08.394 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.653 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:08.653 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:23:08.653 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:08.653 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:08.653 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.653 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:08.653 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:23:08.653 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 384124 00:23:08.653 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 384124 ']' 00:23:08.653 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:08.653 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:08.653 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:08.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
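The target is relaunched here with --wait-for-rpc, so it sits idle until framework initialization is triggered over its RPC socket; the waitforlisten above is just polling for that socket to appear. A minimal sketch of the same startup pattern, reusing the binary path and netns from this trace and the default /var/tmp/spdk.sock socket (the polling loop is illustrative, not the suite's exact helper):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sudo ip netns exec cvl_0_0_ns_spdk \
        $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    # Poll until the app answers on its UNIX-domain RPC socket.
    until $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    # With --wait-for-rpc the framework stays paused until told to finish init.
    $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init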
00:23:08.653 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:08.653 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.913 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:08.913 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:23:08.913 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:23:08.913 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.913 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.913 null0 00:23:08.913 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.913 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:08.913 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Df1 00:23:08.913 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.913 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.913 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.913 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.M8p ]] 00:23:08.913 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.M8p 00:23:08.913 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.913 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.913 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.913 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:08.914 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.9pO 00:23:08.914 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.914 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.914 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.914 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.iAd ]] 00:23:08.914 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.iAd 00:23:08.914 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.914 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.914 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.914 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:08.914 12:44:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.DXi 00:23:08.914 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.914 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.176 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.176 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.8hU ]] 00:23:09.176 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.8hU 00:23:09.176 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.176 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.176 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.176 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:09.177 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.2JX 00:23:09.177 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.177 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.177 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.177 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:23:09.177 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:23:09.177 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:09.177 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:09.177 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:09.177 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:09.177 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:09.177 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:23:09.177 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.177 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.177 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.177 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:09.177 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
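The loop traced above loads each generated DH-HMAC-CHAP key, plus its controller ("ckey") counterpart where one exists, into the target's keyring before any host is authorized. Boiled down, the sha512/ffdhe8192 success path that the next lines replay is roughly the following; every command, key file, and NQN is copied from this trace, using rpc.py's default socket for the target and /var/tmp/host.sock for the host-side app:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Target side: register the key material, then authorize the host with key3.
    $rpc keyring_file_add_key key3 /tmp/spdk.key-sha512.2JX
    $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 \
        --dhchap-key key3
    # Host side: attach a controller that authenticates with the same key.
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3

The nvmf_subsystem_get_qpairs dump a little further on is the check that this worked: the qpair reports auth.state "completed" with digest sha512 and dhgroup ffdhe8192.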
00:23:09.177 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:09.744 nvme0n1 00:23:09.744 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:09.744 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:09.744 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:10.003 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.003 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:10.003 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.003 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.003 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.003 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:10.003 { 00:23:10.003 "cntlid": 1, 00:23:10.003 "qid": 0, 00:23:10.003 "state": "enabled", 00:23:10.003 "thread": "nvmf_tgt_poll_group_000", 00:23:10.003 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:23:10.003 "listen_address": { 00:23:10.003 "trtype": "TCP", 00:23:10.003 "adrfam": "IPv4", 00:23:10.003 "traddr": "10.0.0.2", 00:23:10.003 "trsvcid": "4420" 00:23:10.003 }, 00:23:10.003 "peer_address": { 00:23:10.003 "trtype": "TCP", 00:23:10.003 "adrfam": "IPv4", 00:23:10.003 "traddr": "10.0.0.1", 00:23:10.003 "trsvcid": "48222" 00:23:10.003 }, 00:23:10.003 "auth": { 00:23:10.003 "state": "completed", 00:23:10.003 "digest": "sha512", 00:23:10.003 "dhgroup": "ffdhe8192" 00:23:10.003 } 00:23:10.003 } 00:23:10.003 ]' 00:23:10.003 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:10.003 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:10.003 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:10.003 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:10.003 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:10.003 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:10.003 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:10.003 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:10.262 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YjdkYjBkODY4ZTcwZTAyMDBhOTllMTI1ZTNhODFkODU4MTNhMWVjODg1OTlmNjBkZDM4ZjJmOWYzZDE4NDBkZl0k4bM=: 00:23:10.262 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjdkYjBkODY4ZTcwZTAyMDBhOTllMTI1ZTNhODFkODU4MTNhMWVjODg1OTlmNjBkZDM4ZjJmOWYzZDE4NDBkZl0k4bM=: 00:23:10.830 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:10.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:10.830 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:23:10.830 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.830 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.830 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.830 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:23:10.830 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.830 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.830 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.830 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:23:10.830 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:23:11.089 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:23:11.089 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:11.089 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:23:11.089 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:23:11.089 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:11.089 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:23:11.089 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:11.089 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:11.089 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:11.089 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:11.348 request: 00:23:11.348 { 00:23:11.348 "name": "nvme0", 00:23:11.348 "trtype": "tcp", 00:23:11.348 "traddr": "10.0.0.2", 00:23:11.348 "adrfam": "ipv4", 00:23:11.348 "trsvcid": "4420", 00:23:11.348 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:11.348 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:23:11.348 "prchk_reftag": false, 00:23:11.348 "prchk_guard": false, 00:23:11.348 "hdgst": false, 00:23:11.348 "ddgst": false, 00:23:11.348 "dhchap_key": "key3", 00:23:11.348 "allow_unrecognized_csi": false, 00:23:11.348 "method": "bdev_nvme_attach_controller", 00:23:11.348 "req_id": 1 00:23:11.348 } 00:23:11.348 Got JSON-RPC error response 00:23:11.348 response: 00:23:11.348 { 00:23:11.348 "code": -5, 00:23:11.348 "message": "Input/output error" 00:23:11.348 } 00:23:11.348 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:11.348 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:11.348 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:11.348 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:11.348 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:23:11.348 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:23:11.348 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:11.348 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:11.607 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:23:11.607 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:11.607 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:23:11.607 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:23:11.607 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:11.607 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:23:11.607 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:11.607 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:11.607 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:11.607 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:11.607 request: 00:23:11.607 { 00:23:11.607 "name": "nvme0", 00:23:11.607 "trtype": "tcp", 00:23:11.607 "traddr": "10.0.0.2", 00:23:11.607 "adrfam": "ipv4", 00:23:11.607 "trsvcid": "4420", 00:23:11.607 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:11.607 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:23:11.607 "prchk_reftag": false, 00:23:11.607 "prchk_guard": false, 00:23:11.607 "hdgst": false, 00:23:11.607 "ddgst": false, 00:23:11.607 "dhchap_key": "key3", 00:23:11.607 "allow_unrecognized_csi": false, 00:23:11.607 "method": "bdev_nvme_attach_controller", 00:23:11.607 "req_id": 1 00:23:11.607 } 00:23:11.607 Got JSON-RPC error response 00:23:11.607 response: 00:23:11.607 { 00:23:11.607 "code": -5, 00:23:11.607 "message": "Input/output error" 00:23:11.607 } 00:23:11.866 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:11.866 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:11.866 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:11.866 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:11.866 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:23:11.866 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:23:11.866 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:23:11.867 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:11.867 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:11.867 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:11.867 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:23:11.867 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.867 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.867 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.867 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:23:11.867 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.867 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.867 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.867 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:11.867 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:11.867 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:11.867 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:23:11.867 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:11.867 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:23:11.867 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:11.867 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:11.867 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:11.867 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:12.435 request: 00:23:12.435 { 00:23:12.435 "name": "nvme0", 00:23:12.435 "trtype": "tcp", 00:23:12.435 "traddr": "10.0.0.2", 00:23:12.435 "adrfam": "ipv4", 00:23:12.435 "trsvcid": "4420", 00:23:12.435 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:12.435 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:23:12.435 "prchk_reftag": false, 00:23:12.435 "prchk_guard": false, 00:23:12.435 "hdgst": false, 00:23:12.435 "ddgst": false, 00:23:12.435 "dhchap_key": "key0", 00:23:12.435 "dhchap_ctrlr_key": "key1", 00:23:12.435 "allow_unrecognized_csi": false, 00:23:12.435 "method": "bdev_nvme_attach_controller", 00:23:12.435 "req_id": 1 00:23:12.435 } 00:23:12.435 Got JSON-RPC error response 00:23:12.435 response: 00:23:12.435 { 00:23:12.435 "code": -5, 00:23:12.435 "message": "Input/output error" 00:23:12.435 } 00:23:12.435 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:12.435 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:12.435 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:12.435 12:44:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:12.435 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:23:12.435 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:23:12.435 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:23:12.694 nvme0n1 00:23:12.694 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:23:12.694 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:23:12.694 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:12.694 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.694 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:12.694 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:12.954 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 00:23:12.954 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.954 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.954 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.954 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:23:12.954 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:12.954 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:13.896 nvme0n1 00:23:13.896 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:23:13.896 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:23:13.896 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:13.896 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.896 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:13.896 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.896 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.896 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.896 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:23:13.896 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:13.896 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:23:14.153 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.153 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTNhZjFmN2Q4OTVmMTE1NjljZDhmNWNmNGM3YmJhZGFiMGExNWVlNDYwNzc0Zjhkjnv50w==: --dhchap-ctrl-secret DHHC-1:03:YjdkYjBkODY4ZTcwZTAyMDBhOTllMTI1ZTNhODFkODU4MTNhMWVjODg1OTlmNjBkZDM4ZjJmOWYzZDE4NDBkZl0k4bM=: 00:23:14.153 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTNhZjFmN2Q4OTVmMTE1NjljZDhmNWNmNGM3YmJhZGFiMGExNWVlNDYwNzc0Zjhkjnv50w==: --dhchap-ctrl-secret DHHC-1:03:YjdkYjBkODY4ZTcwZTAyMDBhOTllMTI1ZTNhODFkODU4MTNhMWVjODg1OTlmNjBkZDM4ZjJmOWYzZDE4NDBkZl0k4bM=: 00:23:14.721 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:23:14.721 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:23:14.721 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:23:14.721 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:23:14.721 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:23:14.721 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:23:14.721 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:23:14.721 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:14.721 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:14.980 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:23:14.980 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:14.980 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:23:14.980 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:23:14.980 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:14.980 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:23:14.980 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:14.980 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:23:14.980 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:14.980 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:15.238 request: 00:23:15.238 { 00:23:15.238 "name": "nvme0", 00:23:15.238 "trtype": "tcp", 00:23:15.238 "traddr": "10.0.0.2", 00:23:15.238 "adrfam": "ipv4", 00:23:15.238 "trsvcid": "4420", 00:23:15.238 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:15.238 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:23:15.238 "prchk_reftag": false, 00:23:15.238 "prchk_guard": false, 00:23:15.238 "hdgst": false, 00:23:15.238 "ddgst": false, 00:23:15.238 "dhchap_key": "key1", 00:23:15.238 "allow_unrecognized_csi": false, 00:23:15.238 "method": "bdev_nvme_attach_controller", 00:23:15.238 "req_id": 1 00:23:15.238 } 00:23:15.238 Got JSON-RPC error response 00:23:15.238 response: 00:23:15.238 { 00:23:15.238 "code": -5, 00:23:15.238 "message": "Input/output error" 00:23:15.238 } 00:23:15.496 12:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:15.496 12:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:15.496 12:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:15.496 12:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:15.496 12:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:15.497 12:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:15.497 12:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:16.065 nvme0n1 00:23:16.065 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:23:16.065 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:23:16.065 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:16.323 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.323 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:16.323 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:16.582 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:23:16.582 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.582 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.582 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.582 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:23:16.582 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:23:16.582 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:23:16.841 nvme0n1 00:23:16.841 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:23:16.841 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:23:16.841 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:17.100 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.100 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:17.100 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:17.100 12:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:17.100 12:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.100 12:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.100 12:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.100 12:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:OGViMjVlNTA4ZTdiZTBhZTAyYTM4MTRkY2ZmNTI3ZWTrNpHz: '' 2s 00:23:17.100 12:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:23:17.100 12:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:23:17.100 12:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:OGViMjVlNTA4ZTdiZTBhZTAyYTM4MTRkY2ZmNTI3ZWTrNpHz: 00:23:17.100 12:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:23:17.100 12:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:23:17.100 12:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:23:17.100 12:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:OGViMjVlNTA4ZTdiZTBhZTAyYTM4MTRkY2ZmNTI3ZWTrNpHz: ]] 00:23:17.100 12:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:OGViMjVlNTA4ZTdiZTBhZTAyYTM4MTRkY2ZmNTI3ZWTrNpHz: 00:23:17.100 12:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:23:17.100 12:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:23:17.100 12:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:23:19.636 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:23:19.636 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:23:19.636 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:23:19.636 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:23:19.636 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:23:19.636 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:23:19.636 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:23:19.636 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:23:19.636 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.636 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.636 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.636 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:ZTNhZjFmN2Q4OTVmMTE1NjljZDhmNWNmNGM3YmJhZGFiMGExNWVlNDYwNzc0Zjhkjnv50w==: 2s 00:23:19.636 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:23:19.636 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:23:19.636 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:23:19.636 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZTNhZjFmN2Q4OTVmMTE1NjljZDhmNWNmNGM3YmJhZGFiMGExNWVlNDYwNzc0Zjhkjnv50w==: 00:23:19.636 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:23:19.636 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:23:19.636 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:23:19.636 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZTNhZjFmN2Q4OTVmMTE1NjljZDhmNWNmNGM3YmJhZGFiMGExNWVlNDYwNzc0Zjhkjnv50w==: ]] 00:23:19.636 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZTNhZjFmN2Q4OTVmMTE1NjljZDhmNWNmNGM3YmJhZGFiMGExNWVlNDYwNzc0Zjhkjnv50w==: 00:23:19.636 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:23:19.636 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:23:21.541 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:23:21.541 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:23:21.541 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:23:21.541 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:23:21.541 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:23:21.541 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:23:21.541 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:23:21.541 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:21.541 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:21.541 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:21.541 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.541 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.541 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.541 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:21.541 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:21.541 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:22.109 nvme0n1 00:23:22.109 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:22.109 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.109 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.109 12:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.109 12:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:22.109 12:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:22.676 12:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:23:22.676 12:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:23:22.676 12:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:22.677 12:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.677 12:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:23:22.677 12:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.677 12:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.677 12:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.677 12:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:23:22.677 12:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:23:22.935 12:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:23:22.935 12:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:22.935 12:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@258 -- # jq -r '.[].name' 00:23:23.194 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.194 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:23.194 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.194 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.194 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.194 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:23.194 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:23.194 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:23.194 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:23:23.194 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:23.194 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:23:23.194 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:23.194 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:23.194 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:23.761 request: 00:23:23.761 { 00:23:23.761 "name": "nvme0", 00:23:23.761 "dhchap_key": "key1", 00:23:23.761 "dhchap_ctrlr_key": "key3", 00:23:23.761 "method": "bdev_nvme_set_keys", 00:23:23.761 "req_id": 1 00:23:23.761 } 00:23:23.761 Got JSON-RPC error response 00:23:23.761 response: 00:23:23.761 { 00:23:23.761 "code": -13, 00:23:23.761 "message": "Permission denied" 00:23:23.761 } 00:23:23.761 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:23.761 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:23.761 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:23.761 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:23.761 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:23:23.761 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:23.761 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:23:23.761 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:23:23.761 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:23:24.697 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:23:24.697 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:23:24.697 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:24.956 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:23:24.956 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:24.956 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.956 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.956 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.956 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:24.956 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:24.956 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:25.894 nvme0n1 00:23:25.894 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:25.894 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.894 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.894 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.894 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:25.894 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:25.894 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:25.894 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 
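This block is exercising the re-key contract: a live host controller may rotate to a new key pair with bdev_nvme_set_keys only after nvmf_subsystem_set_keys has allowed that pair on the target, and any other combination is rejected during reauthentication with JSON-RPC code -13 ("Permission denied"), exactly as the response just below shows. A minimal sketch of the two sides, with subsystem, host NQN, and key names copied from this trace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Target first: restrict the subsystem/host pairing to key2 (+ ctrlr key3).
    $rpc nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 \
        --dhchap-key key2 --dhchap-ctrlr-key key3
    # Host next: the matching rotation is accepted on the existing controller.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key key3
    # A pair the subsystem does not allow (key2/key0 here) fails with -13.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key key0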
00:23:25.894 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:25.894 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:23:25.894 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:25.894 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:25.894 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:26.153 request: 00:23:26.153 { 00:23:26.153 "name": "nvme0", 00:23:26.153 "dhchap_key": "key2", 00:23:26.153 "dhchap_ctrlr_key": "key0", 00:23:26.153 "method": "bdev_nvme_set_keys", 00:23:26.153 "req_id": 1 00:23:26.153 } 00:23:26.153 Got JSON-RPC error response 00:23:26.153 response: 00:23:26.153 { 00:23:26.153 "code": -13, 00:23:26.153 "message": "Permission denied" 00:23:26.153 } 00:23:26.153 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:26.153 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:26.153 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:26.153 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:26.153 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:23:26.153 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:23:26.153 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:26.412 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:23:26.412 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:23:27.349 12:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:23:27.349 12:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:23:27.349 12:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:27.608 12:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:23:27.608 12:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:23:27.608 12:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:23:27.608 12:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 362818 00:23:27.608 12:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 362818 ']' 00:23:27.608 12:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 362818 00:23:27.608 12:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:23:27.608 12:44:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:27.608 12:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 362818 00:23:27.608 12:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:27.608 12:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:27.608 12:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 362818' 00:23:27.608 killing process with pid 362818 00:23:27.608 12:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 362818 00:23:27.608 12:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 362818 00:23:27.867 12:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:23:27.867 12:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:23:27.867 12:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:23:27.867 12:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:27.867 12:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:23:27.867 12:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:27.867 12:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:28.127 rmmod nvme_tcp 00:23:28.127 rmmod nvme_fabrics 00:23:28.127 rmmod nvme_keyring 00:23:28.127 12:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:28.127 12:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:23:28.127 12:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:23:28.127 12:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@513 -- # '[' -n 384124 ']' 00:23:28.127 12:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # killprocess 384124 00:23:28.127 12:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 384124 ']' 00:23:28.127 12:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 384124 00:23:28.127 12:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:23:28.127 12:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:28.127 12:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 384124 00:23:28.127 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:28.127 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:28.127 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 384124' 00:23:28.127 killing process with pid 384124 00:23:28.127 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 384124 00:23:28.127 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@974 -- # wait 384124 00:23:28.386 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:23:28.386 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:23:28.386 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:23:28.386 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:23:28.386 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-save 00:23:28.386 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:23:28.386 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-restore 00:23:28.386 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:28.386 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:28.386 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:28.386 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:28.386 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:30.289 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:30.289 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Df1 /tmp/spdk.key-sha256.9pO /tmp/spdk.key-sha384.DXi /tmp/spdk.key-sha512.2JX /tmp/spdk.key-sha512.M8p /tmp/spdk.key-sha384.iAd /tmp/spdk.key-sha256.8hU '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:23:30.289 00:23:30.289 real 2m34.871s 00:23:30.289 user 5m55.359s 00:23:30.289 sys 0m24.211s 00:23:30.289 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:30.289 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:30.289 ************************************ 00:23:30.289 END TEST nvmf_auth_target 00:23:30.289 ************************************ 00:23:30.289 12:44:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:23:30.289 12:44:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:30.289 12:44:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:23:30.289 12:44:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:30.289 12:44:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:30.289 ************************************ 00:23:30.289 START TEST nvmf_bdevio_no_huge 00:23:30.289 ************************************ 00:23:30.289 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:30.548 * Looking for test storage... 
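At this point the auth suite has finished and nvmftestfini unwinds the fixture before run_test launches bdevio.sh. Condensed from the trace above, the teardown amounts to roughly the following; the namespace and interface names are this host's, and the ip netns line only approximates what the _remove_spdk_ns helper does:

    sync
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # Strip only the SPDK_NVMF-tagged rules, leaving the rest of the firewall intact.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # Tear down the target-side network namespace and flush the initiator port.
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1
    # Remove the generated DH-CHAP key files (the run lists each one explicitly).
    rm -f /tmp/spdk.key-*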
00:23:30.548 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:30.548 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:30.548 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lcov --version 00:23:30.548 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:30.548 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:30.548 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:30.548 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:30.548 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:30.548 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:23:30.548 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:23:30.548 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:23:30.548 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:23:30.548 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:23:30.548 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:23:30.548 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:23:30.548 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:30.548 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:23:30.548 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:23:30.548 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:30.548 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:30.548 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:23:30.548 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:23:30.548 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:30.548 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:23:30.548 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:23:30.548 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:23:30.548 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:23:30.548 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:30.548 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:23:30.548 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:23:30.548 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:30.548 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:30.548 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:23:30.548 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:30.548 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:30.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.548 --rc genhtml_branch_coverage=1 00:23:30.548 --rc genhtml_function_coverage=1 00:23:30.548 --rc genhtml_legend=1 00:23:30.548 --rc geninfo_all_blocks=1 00:23:30.548 --rc geninfo_unexecuted_blocks=1 00:23:30.548 00:23:30.548 ' 00:23:30.548 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:30.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.548 --rc genhtml_branch_coverage=1 00:23:30.548 --rc genhtml_function_coverage=1 00:23:30.548 --rc genhtml_legend=1 00:23:30.548 --rc geninfo_all_blocks=1 00:23:30.548 --rc geninfo_unexecuted_blocks=1 00:23:30.548 00:23:30.548 ' 00:23:30.548 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:30.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.548 --rc genhtml_branch_coverage=1 00:23:30.548 --rc genhtml_function_coverage=1 00:23:30.548 --rc genhtml_legend=1 00:23:30.548 --rc geninfo_all_blocks=1 00:23:30.548 --rc geninfo_unexecuted_blocks=1 00:23:30.548 00:23:30.548 ' 00:23:30.548 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:30.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.548 --rc genhtml_branch_coverage=1 00:23:30.548 --rc genhtml_function_coverage=1 00:23:30.548 --rc genhtml_legend=1 00:23:30.548 --rc geninfo_all_blocks=1 00:23:30.548 --rc geninfo_unexecuted_blocks=1 00:23:30.548 00:23:30.548 ' 00:23:30.548 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:30.548 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:23:30.548 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:30.548 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:30.548 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:30.548 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:30.548 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:30.548 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:30.548 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:30.548 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:30.548 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:30.548 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:30.548 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:23:30.548 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:23:30.548 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:30.548 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:30.548 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:30.548 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:30.548 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:30.549 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:23:30.549 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:30.549 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:30.549 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:30.549 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.549 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.549 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.549 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:23:30.549 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.549 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:23:30.549 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:30.549 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:30.549 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:30.549 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:30.549 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:30.549 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:23:30.549 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:30.549 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:30.549 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:30.549 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:30.549 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:30.549 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:30.549 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:23:30.549 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:23:30.549 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:30.549 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # prepare_net_devs 00:23:30.549 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@434 -- # local -g is_hw=no 00:23:30.549 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # remove_spdk_ns 00:23:30.549 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.549 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:30.549 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:30.549 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:23:30.549 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:23:30.549 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:23:30.549 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:23:37.122 
12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:37.122 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 
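The block above is gather_supported_nvmf_pci_devs matching NICs by PCI vendor:device ID (here two Intel E810 ports, 0x8086:0x159b) and resolving each match to its kernel netdev through sysfs. A simplified standalone rendering of that walk, not the common.sh implementation, restricted to the one device ID matched in this run:

    # Enumerate PCI functions and report netdevs for Intel E810 (0x8086:0x159b).
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor")
        device=$(<"$pci/device")
        [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
        # Each matching function exposes its interfaces under .../net/.
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
        done
    done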
00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:37.122 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:37.122 Found net devices under 0000:af:00.0: cvl_0_0 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:37.122 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:37.123 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:37.123 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:37.123 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:37.123 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:37.123 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:37.123 Found net devices under 0000:af:00.1: cvl_0_1 00:23:37.123 
12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:37.123 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:23:37.123 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # is_hw=yes 00:23:37.123 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:23:37.123 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:23:37.123 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:23:37.123 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:37.123 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:37.123 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:37.123 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:37.123 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:37.123 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:37.123 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:37.123 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:37.123 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:37.123 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:37.123 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:37.123 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:37.123 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:37.123 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:37.123 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:37.123 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:37.123 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:37.123 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:37.123 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:37.123 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:37.123 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:37.123 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -I INPUT 1 
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:37.123 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:37.123 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:37.123 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.437 ms 00:23:37.123 00:23:37.123 --- 10.0.0.2 ping statistics --- 00:23:37.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.123 rtt min/avg/max/mdev = 0.437/0.437/0.437/0.000 ms 00:23:37.123 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:37.123 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:37.123 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:23:37.123 00:23:37.123 --- 10.0.0.1 ping statistics --- 00:23:37.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.123 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:23:37.123 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:37.123 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # return 0 00:23:37.123 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:23:37.123 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:37.123 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:23:37.123 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:23:37.123 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:37.123 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:23:37.123 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:23:37.123 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:23:37.123 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:37.123 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:37.123 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:37.123 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # nvmfpid=390956 00:23:37.123 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:23:37.123 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # waitforlisten 390956 00:23:37.123 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 390956 ']' 00:23:37.123 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:37.123 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:37.123 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:37.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:37.123 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:37.123 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:37.123 [2024-12-16 12:45:02.406945] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:23:37.123 [2024-12-16 12:45:02.406990] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:23:37.123 [2024-12-16 12:45:02.482633] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:37.123 [2024-12-16 12:45:02.546995] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:37.123 [2024-12-16 12:45:02.547029] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:37.123 [2024-12-16 12:45:02.547036] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:37.123 [2024-12-16 12:45:02.547042] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:37.123 [2024-12-16 12:45:02.547047] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:37.123 [2024-12-16 12:45:02.547156] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:23:37.123 [2024-12-16 12:45:02.547285] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:23:37.123 [2024-12-16 12:45:02.547305] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:23:37.123 [2024-12-16 12:45:02.547306] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:23:37.382 12:45:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:37.382 12:45:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:23:37.382 12:45:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:37.382 12:45:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:37.382 12:45:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:37.382 12:45:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:37.382 12:45:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:37.382 12:45:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.382 12:45:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:37.382 [2024-12-16 12:45:03.305083] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:37.382 12:45:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.382 12:45:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # 
rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:37.382 12:45:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.382 12:45:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:37.382 Malloc0 00:23:37.382 12:45:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.382 12:45:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:37.382 12:45:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.383 12:45:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:37.383 12:45:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.383 12:45:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:37.383 12:45:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.383 12:45:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:37.383 12:45:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.383 12:45:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:37.383 12:45:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.383 12:45:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:37.383 [2024-12-16 12:45:03.349356] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:37.383 12:45:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.383 12:45:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:23:37.383 12:45:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:23:37.383 12:45:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # config=() 00:23:37.383 12:45:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # local subsystem config 00:23:37.383 12:45:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:23:37.383 12:45:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:23:37.383 { 00:23:37.383 "params": { 00:23:37.383 "name": "Nvme$subsystem", 00:23:37.383 "trtype": "$TEST_TRANSPORT", 00:23:37.383 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:37.383 "adrfam": "ipv4", 00:23:37.383 "trsvcid": "$NVMF_PORT", 00:23:37.383 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:37.383 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:37.383 "hdgst": ${hdgst:-false}, 00:23:37.383 "ddgst": ${ddgst:-false} 00:23:37.383 }, 00:23:37.383 "method": "bdev_nvme_attach_controller" 00:23:37.383 } 00:23:37.383 EOF 00:23:37.383 )") 00:23:37.383 12:45:03 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # cat 00:23:37.383 12:45:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # jq . 00:23:37.383 12:45:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@581 -- # IFS=, 00:23:37.383 12:45:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:23:37.383 "params": { 00:23:37.383 "name": "Nvme1", 00:23:37.383 "trtype": "tcp", 00:23:37.383 "traddr": "10.0.0.2", 00:23:37.383 "adrfam": "ipv4", 00:23:37.383 "trsvcid": "4420", 00:23:37.383 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.383 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:37.383 "hdgst": false, 00:23:37.383 "ddgst": false 00:23:37.383 }, 00:23:37.383 "method": "bdev_nvme_attach_controller" 00:23:37.383 }' 00:23:37.383 [2024-12-16 12:45:03.399386] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:23:37.383 [2024-12-16 12:45:03.399430] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid391199 ] 00:23:37.642 [2024-12-16 12:45:03.466229] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:37.642 [2024-12-16 12:45:03.532018] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:37.642 [2024-12-16 12:45:03.532135] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:37.642 [2024-12-16 12:45:03.532135] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:37.901 I/O targets: 00:23:37.901 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:23:37.901 00:23:37.901 00:23:37.901 CUnit - A unit testing framework for C - Version 2.1-3 00:23:37.901 http://cunit.sourceforge.net/ 00:23:37.901 00:23:37.901 00:23:37.901 Suite: bdevio tests on: Nvme1n1 00:23:37.901 Test: blockdev write read block ...passed 00:23:37.901 Test: blockdev write zeroes read block ...passed 00:23:37.901 Test: blockdev write zeroes read no split ...passed 00:23:37.901 Test: blockdev write zeroes read split ...passed 00:23:37.901 Test: blockdev write zeroes read split partial ...passed 00:23:37.901 Test: blockdev reset ...[2024-12-16 12:45:03.942400] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:37.901 [2024-12-16 12:45:03.942462] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeda7e0 (9): Bad file descriptor 00:23:38.160 [2024-12-16 12:45:04.037719] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
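bdevio receives its bdev configuration on an anonymous fd (--json /dev/fd/62); the attach-controller entry it was given is printed verbatim above. A plausible reconstruction of the full document, assuming gen_nvmf_target_json's usual bdev-subsystem wrapper (the wrapper and the /tmp path are reconstructed, the params fragment is copied from this run):

    cat > /tmp/nvme1.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }
    EOF
    # Same invocation as the test, 1024 MB of plain memory instead of hugepages.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio \
        --json /tmp/nvme1.json --no-huge -s 1024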
00:23:38.160 passed 00:23:38.160 Test: blockdev write read 8 blocks ...passed 00:23:38.160 Test: blockdev write read size > 128k ...passed 00:23:38.160 Test: blockdev write read invalid size ...passed 00:23:38.161 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:38.161 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:38.161 Test: blockdev write read max offset ...passed 00:23:38.161 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:38.161 Test: blockdev writev readv 8 blocks ...passed 00:23:38.161 Test: blockdev writev readv 30 x 1block ...passed 00:23:38.161 Test: blockdev writev readv block ...passed 00:23:38.161 Test: blockdev writev readv size > 128k ...passed 00:23:38.161 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:38.161 Test: blockdev comparev and writev ...[2024-12-16 12:45:04.206792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:38.161 [2024-12-16 12:45:04.206824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:38.161 [2024-12-16 12:45:04.206838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:38.161 [2024-12-16 12:45:04.206846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:38.161 [2024-12-16 12:45:04.207085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:38.161 [2024-12-16 12:45:04.207098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:38.161 [2024-12-16 12:45:04.207111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:38.161 [2024-12-16 12:45:04.207124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:38.161 [2024-12-16 12:45:04.207380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:38.161 [2024-12-16 12:45:04.207391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:38.161 [2024-12-16 12:45:04.207402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:38.161 [2024-12-16 12:45:04.207408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:38.161 [2024-12-16 12:45:04.207644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:38.161 [2024-12-16 12:45:04.207654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:38.161 [2024-12-16 12:45:04.207665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:38.161 [2024-12-16 12:45:04.207672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:38.420 passed 00:23:38.420 Test: blockdev nvme passthru rw ...passed 00:23:38.420 Test: blockdev nvme passthru vendor specific ...[2024-12-16 12:45:04.289494] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:38.420 [2024-12-16 12:45:04.289511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:38.420 [2024-12-16 12:45:04.289615] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:38.420 [2024-12-16 12:45:04.289625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:38.420 [2024-12-16 12:45:04.289729] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:38.420 [2024-12-16 12:45:04.289738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:38.420 [2024-12-16 12:45:04.289848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:38.420 [2024-12-16 12:45:04.289857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:38.420 passed 00:23:38.420 Test: blockdev nvme admin passthru ...passed 00:23:38.420 Test: blockdev copy ...passed 00:23:38.420 00:23:38.420 Run Summary: Type Total Ran Passed Failed Inactive 00:23:38.420 suites 1 1 n/a 0 0 00:23:38.420 tests 23 23 23 0 0 00:23:38.420 asserts 152 152 152 0 n/a 00:23:38.420 00:23:38.420 Elapsed time = 1.062 seconds 00:23:38.679 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:38.679 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.679 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:38.679 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.679 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:23:38.679 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:23:38.679 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # nvmfcleanup 00:23:38.679 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:23:38.679 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:38.679 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:23:38.679 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:38.679 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:38.679 rmmod nvme_tcp 00:23:38.679 rmmod nvme_fabrics 00:23:38.679 rmmod nvme_keyring 00:23:38.679 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:38.679 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:23:38.679 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:23:38.679 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@513 -- # '[' -n 390956 ']' 00:23:38.679 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # killprocess 390956 00:23:38.679 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 390956 ']' 00:23:38.679 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 390956 00:23:38.679 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:23:38.679 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:38.679 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 390956 00:23:38.679 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:23:38.679 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:23:38.679 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 390956' 00:23:38.679 killing process with pid 390956 00:23:38.679 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 390956 00:23:38.679 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 390956 00:23:39.248 12:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:23:39.248 12:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:23:39.248 12:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:23:39.248 12:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:23:39.248 12:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-save 00:23:39.248 12:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:23:39.248 12:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-restore 00:23:39.248 12:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:39.248 12:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:39.248 12:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.248 12:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:39.248 12:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:41.152 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:41.152 00:23:41.152 real 0m10.784s 00:23:41.152 user 0m13.767s 00:23:41.152 sys 0m5.347s 00:23:41.152 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:41.152 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:23:41.152 ************************************ 00:23:41.152 END TEST nvmf_bdevio_no_huge 00:23:41.152 ************************************ 00:23:41.152 12:45:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:41.152 12:45:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:41.152 12:45:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:41.152 12:45:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:41.152 ************************************ 00:23:41.152 START TEST nvmf_tls 00:23:41.152 ************************************ 00:23:41.152 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:41.412 * Looking for test storage... 00:23:41.412 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:41.412 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:41.412 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lcov --version 00:23:41.412 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:41.412 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:41.412 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:41.412 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:41.412 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:41.412 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:23:41.412 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:41.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:41.413 --rc genhtml_branch_coverage=1 00:23:41.413 --rc genhtml_function_coverage=1 00:23:41.413 --rc genhtml_legend=1 00:23:41.413 --rc geninfo_all_blocks=1 00:23:41.413 --rc geninfo_unexecuted_blocks=1 00:23:41.413 00:23:41.413 ' 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:41.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:41.413 --rc genhtml_branch_coverage=1 00:23:41.413 --rc genhtml_function_coverage=1 00:23:41.413 --rc genhtml_legend=1 00:23:41.413 --rc geninfo_all_blocks=1 00:23:41.413 --rc geninfo_unexecuted_blocks=1 00:23:41.413 00:23:41.413 ' 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:41.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:41.413 --rc genhtml_branch_coverage=1 00:23:41.413 --rc genhtml_function_coverage=1 00:23:41.413 --rc genhtml_legend=1 00:23:41.413 --rc geninfo_all_blocks=1 00:23:41.413 --rc geninfo_unexecuted_blocks=1 00:23:41.413 00:23:41.413 ' 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:41.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:41.413 --rc genhtml_branch_coverage=1 00:23:41.413 --rc genhtml_function_coverage=1 00:23:41.413 --rc genhtml_legend=1 00:23:41.413 --rc geninfo_all_blocks=1 00:23:41.413 --rc geninfo_unexecuted_blocks=1 00:23:41.413 00:23:41.413 ' 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
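The lt/cmp_versions trace above is tls.sh probing the installed lcov version so it can pick coverage flags that version understands. Condensed from the traced steps (the real helper in scripts/common.sh also validates each component with decimal(), omitted here), the comparator splits both version strings on '.', '-' and ':' and compares them component by component:

lt() { cmp_versions "$1" '<' "$2"; }
cmp_versions() {
    local op=$2 IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local v ver1_l=${#ver1[@]} ver2_l=${#ver2[@]} lt=0 gt=0
    for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && { gt=1; break; }   # missing components compare as 0
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && { lt=1; break; }
    done
    case "$op" in '<') ((lt)) ;; '>') ((gt)) ;; *) ((lt == 0 && gt == 0)) ;; esac
}

lt 1.15 2 succeeds here, so the pre-1.15 lcov option spelling (--rc lcov_branch_coverage=1) is the one exported a few lines below.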
00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:41.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@465 -- # '[' -z tcp ']' 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # prepare_net_devs 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@434 -- # local -g is_hw=no 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # remove_spdk_ns 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:23:41.413 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:47.983 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:47.983 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:23:47.983 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:47.983 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:47.983 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:47.983 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:47.983 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:47.983 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:23:47.983 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:47.983 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:23:47.983 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:23:47.983 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:23:47.983 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:23:47.983 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:23:47.983 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:23:47.983 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:47.983 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:47.983 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:47.983 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:47.983 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:47.983 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
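The gather_supported_nvmf_pci_devs walk traced around this point selects test NICs purely by PCI vendor:device ID out of an associative pci_bus_cache filled in earlier (outside this excerpt). In outline, with the IDs copied from the trace:

declare -A pci_bus_cache        # "vendor:device" -> PCI addresses, populated earlier
intel=0x8086 mellanox=0x15b3
e810=("${pci_bus_cache[$intel:0x1592]}" "${pci_bus_cache[$intel:0x159b]}")
x722=("${pci_bus_cache[$intel:0x37d2]}")
# mlx is filled the same way from eight Mellanox IDs (0xa2dc 0x1021 0xa2d6
# 0x101d 0x1017 0x1019 0x1015 0x1013); those only matter for RDMA transports
pci_devs=("${e810[@]}")         # on an e810 rig running tcp, only the E810 ports remain

On this host the cache matches 0x8086:0x159b twice, which is why the trace below reports Found 0000:af:00.0/1 (0x8086 - 0x159b) and then the cvl_0_0 / cvl_0_1 net devices under them.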
00:23:47.983 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:47.984 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:47.984 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:47.984 Found net devices under 0000:af:00.0: cvl_0_0 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:47.984 Found net devices under 0000:af:00.1: cvl_0_1 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # is_hw=yes 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
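nvmf_tcp_init, which the trace enters next, turns the two E810 ports into a point-to-point test topology: the target port cvl_0_0 (10.0.0.2) is moved into its own network namespace so the SPDK target and the initiator can share one host without the kernel short-circuiting the traffic. Stripped of the xtrace noise, the commands that follow amount to:

ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port

The two pings that close the block are just a reachability check in each direction before any NVMe traffic is attempted.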
00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:47.984 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:47.984 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:47.984 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:47.984 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:47.984 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:47.984 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:47.984 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:47.984 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:47.984 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:47.984 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:47.984 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.361 ms 00:23:47.984 00:23:47.984 --- 10.0.0.2 ping statistics --- 00:23:47.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:47.984 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:23:47.984 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:47.984 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:47.984 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:23:47.984 00:23:47.984 --- 10.0.0.1 ping statistics --- 00:23:47.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:47.984 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:23:47.984 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:47.984 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # return 0 00:23:47.984 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:23:47.984 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:47.984 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:23:47.984 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:23:47.984 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:47.984 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:23:47.984 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:23:47.984 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:47.984 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:47.984 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:47.984 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:47.984 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=395273 00:23:47.984 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 395273 00:23:47.984 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:47.984 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 395273 ']' 00:23:47.984 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:47.984 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:47.984 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:47.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:47.984 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:47.984 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:47.984 [2024-12-16 12:45:13.291683] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:23:47.984 [2024-12-16 12:45:13.291729] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:47.984 [2024-12-16 12:45:13.365395] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:47.984 [2024-12-16 12:45:13.406192] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:47.984 [2024-12-16 12:45:13.406231] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:47.985 [2024-12-16 12:45:13.406238] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:47.985 [2024-12-16 12:45:13.406244] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:47.985 [2024-12-16 12:45:13.406249] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:47.985 [2024-12-16 12:45:13.406284] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:47.985 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:47.985 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:47.985 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:47.985 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:47.985 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:47.985 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:47.985 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:23:47.985 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:23:47.985 true 00:23:47.985 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:47.985 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:23:47.985 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:23:47.985 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:23:47.985 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:48.244 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:48.244 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:23:48.244 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:23:48.244 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:23:48.244 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:23:48.503 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:48.503 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:23:48.762 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:23:48.762 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:23:48.763 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:48.763 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:23:49.022 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:23:49.022 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:23:49.022 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:23:49.022 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:49.022 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:23:49.280 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:23:49.280 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:23:49.280 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:23:49.538 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:49.538 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:23:49.538 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:23:49.538 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:23:49.538 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:23:49.538 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:23:49.538 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:23:49.539 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:23:49.539 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:23:49.539 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:23:49.539 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:23:49.797 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:49.797 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:23:49.797 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:23:49.797 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@726 -- # local prefix key digest 00:23:49.797 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:23:49.797 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=ffeeddccbbaa99887766554433221100 00:23:49.797 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:23:49.797 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:23:49.797 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:49.797 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:23:49.797 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.Y0dmvEgH0d 00:23:49.797 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:23:49.797 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.6O0RWZeqGT 00:23:49.797 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:49.797 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:49.797 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.Y0dmvEgH0d 00:23:49.797 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.6O0RWZeqGT 00:23:49.797 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:49.798 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:23:50.056 12:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.Y0dmvEgH0d 00:23:50.056 12:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Y0dmvEgH0d 00:23:50.056 12:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:50.315 [2024-12-16 12:45:16.269047] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:50.315 12:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:50.574 12:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:50.574 [2024-12-16 12:45:16.633975] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:50.574 [2024-12-16 12:45:16.634214] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:50.833 12:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:50.833 malloc0 00:23:50.833 12:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:51.091 12:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Y0dmvEgH0d 00:23:51.350 12:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:51.350 12:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.Y0dmvEgH0d 00:24:03.556 Initializing NVMe Controllers 00:24:03.556 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:03.556 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:03.556 Initialization complete. Launching workers. 00:24:03.556 ======================================================== 00:24:03.556 Latency(us) 00:24:03.556 Device Information : IOPS MiB/s Average min max 00:24:03.556 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16907.07 66.04 3785.45 959.72 6179.24 00:24:03.556 ======================================================== 00:24:03.556 Total : 16907.07 66.04 3785.45 959.72 6179.24 00:24:03.556 00:24:03.556 12:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Y0dmvEgH0d 00:24:03.556 12:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:03.556 12:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:03.556 12:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:03.556 12:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Y0dmvEgH0d 00:24:03.556 12:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:03.556 12:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=397555 00:24:03.556 12:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:03.556 12:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:03.556 12:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 397555 /var/tmp/bdevperf.sock 00:24:03.556 12:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 397555 ']' 00:24:03.556 12:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:03.556 12:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:03.556 12:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:24:03.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:03.556 12:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:03.556 12:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:03.556 [2024-12-16 12:45:27.568814] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:24:03.556 [2024-12-16 12:45:27.568865] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid397555 ] 00:24:03.556 [2024-12-16 12:45:27.635776] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.556 [2024-12-16 12:45:27.675715] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:03.556 12:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:03.556 12:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:03.556 12:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Y0dmvEgH0d 00:24:03.557 12:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:03.557 [2024-12-16 12:45:28.113385] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:03.557 TLSTESTn1 00:24:03.557 12:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:03.557 Running I/O for 10 seconds... 
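Before this bdevperf pass started, tls.sh brought the target up with a fixed RPC sequence; every call below appears verbatim in the trace above, with the PSK path being whatever mktemp produced on this run:

rpc.py sock_set_default_impl -i ssl
rpc.py sock_impl_set_options -i ssl --tls-version 13
rpc.py framework_start_init
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py keyring_file_add_key key0 /tmp/tmp.Y0dmvEgH0d
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The initiator side mirrors the last two steps against bdevperf's own RPC socket (rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ... followed by bdev_nvme_attach_controller -b TLSTEST ... --psk key0), producing the TLSTESTn1 bdev that the 10-second run below exercises.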
00:24:04.495 5104.00 IOPS, 19.94 MiB/s [2024-12-16T11:45:31.498Z] 5024.00 IOPS, 19.62 MiB/s [2024-12-16T11:45:32.434Z] 5060.67 IOPS, 19.77 MiB/s [2024-12-16T11:45:33.371Z] 5002.00 IOPS, 19.54 MiB/s [2024-12-16T11:45:34.747Z] 5004.60 IOPS, 19.55 MiB/s [2024-12-16T11:45:35.681Z] 4939.17 IOPS, 19.29 MiB/s [2024-12-16T11:45:36.617Z] 4978.14 IOPS, 19.45 MiB/s [2024-12-16T11:45:37.552Z] 4992.88 IOPS, 19.50 MiB/s [2024-12-16T11:45:38.487Z] 4993.78 IOPS, 19.51 MiB/s [2024-12-16T11:45:38.487Z] 4995.10 IOPS, 19.51 MiB/s 00:24:12.420 Latency(us) 00:24:12.420 [2024-12-16T11:45:38.487Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:12.420 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:12.420 Verification LBA range: start 0x0 length 0x2000 00:24:12.420 TLSTESTn1 : 10.02 4999.07 19.53 0.00 0.00 25568.57 5055.63 43690.67 00:24:12.420 [2024-12-16T11:45:38.487Z] =================================================================================================================== 00:24:12.420 [2024-12-16T11:45:38.487Z] Total : 4999.07 19.53 0.00 0.00 25568.57 5055.63 43690.67 00:24:12.420 { 00:24:12.420 "results": [ 00:24:12.420 { 00:24:12.420 "job": "TLSTESTn1", 00:24:12.420 "core_mask": "0x4", 00:24:12.420 "workload": "verify", 00:24:12.420 "status": "finished", 00:24:12.420 "verify_range": { 00:24:12.420 "start": 0, 00:24:12.420 "length": 8192 00:24:12.420 }, 00:24:12.420 "queue_depth": 128, 00:24:12.420 "io_size": 4096, 00:24:12.420 "runtime": 10.017662, 00:24:12.420 "iops": 4999.070641433101, 00:24:12.420 "mibps": 19.52761969309805, 00:24:12.420 "io_failed": 0, 00:24:12.420 "io_timeout": 0, 00:24:12.420 "avg_latency_us": 25568.565119454117, 00:24:12.420 "min_latency_us": 5055.634285714285, 00:24:12.420 "max_latency_us": 43690.666666666664 00:24:12.420 } 00:24:12.420 ], 00:24:12.421 "core_count": 1 00:24:12.421 } 00:24:12.421 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:12.421 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 397555 00:24:12.421 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 397555 ']' 00:24:12.421 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 397555 00:24:12.421 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:12.421 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:12.421 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 397555 00:24:12.421 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:12.421 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:12.421 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 397555' 00:24:12.421 killing process with pid 397555 00:24:12.421 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 397555 00:24:12.421 Received shutdown signal, test time was about 10.000000 seconds 00:24:12.421 00:24:12.421 Latency(us) 00:24:12.421 [2024-12-16T11:45:38.488Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:12.421 [2024-12-16T11:45:38.488Z] 
=================================================================================================================== 00:24:12.421 [2024-12-16T11:45:38.488Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:12.421 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 397555 00:24:12.680 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6O0RWZeqGT 00:24:12.680 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:12.680 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6O0RWZeqGT 00:24:12.680 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:24:12.680 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:12.680 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:24:12.680 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:12.680 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6O0RWZeqGT 00:24:12.680 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:12.680 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:12.680 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:12.680 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.6O0RWZeqGT 00:24:12.680 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:12.680 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=399325 00:24:12.680 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:12.680 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:12.680 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 399325 /var/tmp/bdevperf.sock 00:24:12.680 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 399325 ']' 00:24:12.680 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:12.680 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:12.680 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:12.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:12.680 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:12.680 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:12.680 [2024-12-16 12:45:38.639544] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:24:12.680 [2024-12-16 12:45:38.639592] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid399325 ] 00:24:12.680 [2024-12-16 12:45:38.703360] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.680 [2024-12-16 12:45:38.742593] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:12.938 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:12.938 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:12.938 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6O0RWZeqGT 00:24:13.197 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:13.197 [2024-12-16 12:45:39.191165] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:13.197 [2024-12-16 12:45:39.195873] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:13.197 [2024-12-16 12:45:39.196505] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeeaba0 (107): Transport endpoint is not connected 00:24:13.197 [2024-12-16 12:45:39.197496] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeeaba0 (9): Bad file descriptor 00:24:13.197 [2024-12-16 12:45:39.198497] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:13.197 [2024-12-16 12:45:39.198507] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:13.197 [2024-12-16 12:45:39.198514] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:24:13.197 [2024-12-16 12:45:39.198525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:13.197 request: 00:24:13.197 { 00:24:13.197 "name": "TLSTEST", 00:24:13.197 "trtype": "tcp", 00:24:13.197 "traddr": "10.0.0.2", 00:24:13.197 "adrfam": "ipv4", 00:24:13.197 "trsvcid": "4420", 00:24:13.197 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:13.197 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:13.197 "prchk_reftag": false, 00:24:13.197 "prchk_guard": false, 00:24:13.197 "hdgst": false, 00:24:13.197 "ddgst": false, 00:24:13.197 "psk": "key0", 00:24:13.197 "allow_unrecognized_csi": false, 00:24:13.197 "method": "bdev_nvme_attach_controller", 00:24:13.197 "req_id": 1 00:24:13.197 } 00:24:13.197 Got JSON-RPC error response 00:24:13.197 response: 00:24:13.197 { 00:24:13.197 "code": -5, 00:24:13.197 "message": "Input/output error" 00:24:13.197 } 00:24:13.197 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 399325 00:24:13.197 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 399325 ']' 00:24:13.197 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 399325 00:24:13.197 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:13.197 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:13.197 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 399325 00:24:13.456 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:13.456 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:13.456 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 399325' 00:24:13.456 killing process with pid 399325 00:24:13.456 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 399325 00:24:13.456 Received shutdown signal, test time was about 10.000000 seconds 00:24:13.456 00:24:13.456 Latency(us) 00:24:13.456 [2024-12-16T11:45:39.523Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:13.456 [2024-12-16T11:45:39.523Z] =================================================================================================================== 00:24:13.456 [2024-12-16T11:45:39.523Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:13.456 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 399325 00:24:13.456 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:13.456 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:13.456 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:13.456 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:13.456 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:13.456 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Y0dmvEgH0d 00:24:13.456 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:13.456 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.Y0dmvEgH0d 00:24:13.456 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:24:13.456 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:13.456 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:24:13.456 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:13.456 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Y0dmvEgH0d 00:24:13.456 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:13.456 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:13.456 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:24:13.456 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Y0dmvEgH0d 00:24:13.456 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:13.456 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=399353 00:24:13.456 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:13.457 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:13.457 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 399353 /var/tmp/bdevperf.sock 00:24:13.457 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 399353 ']' 00:24:13.457 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:13.457 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:13.457 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:13.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:13.457 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:13.457 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:13.457 [2024-12-16 12:45:39.484915] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:24:13.457 [2024-12-16 12:45:39.484960] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid399353 ] 00:24:13.715 [2024-12-16 12:45:39.547536] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.715 [2024-12-16 12:45:39.583379] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:13.715 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:13.715 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:13.715 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Y0dmvEgH0d 00:24:13.974 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:24:14.233 [2024-12-16 12:45:40.064357] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:14.233 [2024-12-16 12:45:40.075508] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:14.233 [2024-12-16 12:45:40.075534] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:14.233 [2024-12-16 12:45:40.075558] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:14.233 [2024-12-16 12:45:40.075786] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a7dba0 (107): Transport endpoint is not connected 00:24:14.233 [2024-12-16 12:45:40.076778] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a7dba0 (9): Bad file descriptor 00:24:14.233 [2024-12-16 12:45:40.077779] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:14.233 [2024-12-16 12:45:40.077789] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:14.233 [2024-12-16 12:45:40.077797] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:24:14.233 [2024-12-16 12:45:40.077807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:14.233 request: 00:24:14.233 { 00:24:14.233 "name": "TLSTEST", 00:24:14.233 "trtype": "tcp", 00:24:14.233 "traddr": "10.0.0.2", 00:24:14.233 "adrfam": "ipv4", 00:24:14.233 "trsvcid": "4420", 00:24:14.233 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:14.233 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:14.233 "prchk_reftag": false, 00:24:14.233 "prchk_guard": false, 00:24:14.233 "hdgst": false, 00:24:14.233 "ddgst": false, 00:24:14.233 "psk": "key0", 00:24:14.233 "allow_unrecognized_csi": false, 00:24:14.233 "method": "bdev_nvme_attach_controller", 00:24:14.233 "req_id": 1 00:24:14.233 } 00:24:14.233 Got JSON-RPC error response 00:24:14.233 response: 00:24:14.233 { 00:24:14.233 "code": -5, 00:24:14.233 "message": "Input/output error" 00:24:14.233 } 00:24:14.233 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 399353 00:24:14.233 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 399353 ']' 00:24:14.233 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 399353 00:24:14.233 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:14.233 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:14.233 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 399353 00:24:14.233 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:14.233 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:14.233 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 399353' 00:24:14.233 killing process with pid 399353 00:24:14.233 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 399353 00:24:14.233 Received shutdown signal, test time was about 10.000000 seconds 00:24:14.233 00:24:14.233 Latency(us) 00:24:14.233 [2024-12-16T11:45:40.300Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:14.233 [2024-12-16T11:45:40.300Z] =================================================================================================================== 00:24:14.233 [2024-12-16T11:45:40.300Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:14.233 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 399353 00:24:14.493 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:14.493 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:14.493 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:14.493 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:14.493 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:14.493 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Y0dmvEgH0d 00:24:14.493 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:14.493 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.Y0dmvEgH0d 00:24:14.493 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:24:14.493 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:14.493 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:24:14.493 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:14.493 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Y0dmvEgH0d 00:24:14.493 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:14.493 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:24:14.493 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:14.493 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Y0dmvEgH0d 00:24:14.493 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:14.493 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=399581 00:24:14.493 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:14.493 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:14.493 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 399581 /var/tmp/bdevperf.sock 00:24:14.493 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 399581 ']' 00:24:14.493 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:14.493 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:14.493 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:14.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:14.493 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:14.493 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:14.493 [2024-12-16 12:45:40.377681] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:24:14.493 [2024-12-16 12:45:40.377726] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid399581 ] 00:24:14.493 [2024-12-16 12:45:40.445809] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:14.493 [2024-12-16 12:45:40.482015] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:14.752 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:14.752 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:14.752 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Y0dmvEgH0d 00:24:14.752 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:15.011 [2024-12-16 12:45:40.934607] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:15.011 [2024-12-16 12:45:40.939182] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:15.011 [2024-12-16 12:45:40.939203] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:15.011 [2024-12-16 12:45:40.939227] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:15.011 [2024-12-16 12:45:40.939889] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f6cba0 (107): Transport endpoint is not connected 00:24:15.011 [2024-12-16 12:45:40.940883] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f6cba0 (9): Bad file descriptor 00:24:15.011 [2024-12-16 12:45:40.941883] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:15.011 [2024-12-16 12:45:40.941892] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:15.011 [2024-12-16 12:45:40.941900] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:24:15.011 [2024-12-16 12:45:40.941910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:24:15.011 request: 00:24:15.011 { 00:24:15.011 "name": "TLSTEST", 00:24:15.011 "trtype": "tcp", 00:24:15.011 "traddr": "10.0.0.2", 00:24:15.011 "adrfam": "ipv4", 00:24:15.011 "trsvcid": "4420", 00:24:15.011 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:15.011 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:15.011 "prchk_reftag": false, 00:24:15.011 "prchk_guard": false, 00:24:15.011 "hdgst": false, 00:24:15.011 "ddgst": false, 00:24:15.011 "psk": "key0", 00:24:15.011 "allow_unrecognized_csi": false, 00:24:15.011 "method": "bdev_nvme_attach_controller", 00:24:15.011 "req_id": 1 00:24:15.011 } 00:24:15.011 Got JSON-RPC error response 00:24:15.011 response: 00:24:15.011 { 00:24:15.011 "code": -5, 00:24:15.011 "message": "Input/output error" 00:24:15.011 } 00:24:15.011 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 399581 00:24:15.011 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 399581 ']' 00:24:15.011 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 399581 00:24:15.011 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:15.011 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:15.011 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 399581 00:24:15.011 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:15.011 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:15.011 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 399581' 00:24:15.011 killing process with pid 399581 00:24:15.011 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 399581 00:24:15.011 Received shutdown signal, test time was about 10.000000 seconds 00:24:15.011 00:24:15.011 Latency(us) 00:24:15.011 [2024-12-16T11:45:41.078Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:15.011 [2024-12-16T11:45:41.078Z] =================================================================================================================== 00:24:15.011 [2024-12-16T11:45:41.078Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:15.011 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 399581 00:24:15.271 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:15.271 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:15.271 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:15.271 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:15.271 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:15.271 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:15.271 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:15.271 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:15.271 12:45:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:24:15.271 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:15.271 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:24:15.271 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:15.271 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:15.271 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:15.271 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:15.271 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:15.271 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:24:15.271 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:15.271 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=399679 00:24:15.271 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:15.271 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:15.271 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 399679 /var/tmp/bdevperf.sock 00:24:15.271 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 399679 ']' 00:24:15.271 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:15.271 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:15.271 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:15.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:15.271 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:15.271 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:15.271 [2024-12-16 12:45:41.235511] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:24:15.271 [2024-12-16 12:45:41.235559] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid399679 ] 00:24:15.271 [2024-12-16 12:45:41.304926] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:15.530 [2024-12-16 12:45:41.339994] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:15.530 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:15.530 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:15.530 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:24:15.789 [2024-12-16 12:45:41.596165] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:24:15.789 [2024-12-16 12:45:41.596197] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:15.789 request: 00:24:15.789 { 00:24:15.789 "name": "key0", 00:24:15.789 "path": "", 00:24:15.789 "method": "keyring_file_add_key", 00:24:15.789 "req_id": 1 00:24:15.789 } 00:24:15.789 Got JSON-RPC error response 00:24:15.789 response: 00:24:15.789 { 00:24:15.789 "code": -1, 00:24:15.789 "message": "Operation not permitted" 00:24:15.789 } 00:24:15.789 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:15.789 [2024-12-16 12:45:41.788742] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:15.789 [2024-12-16 12:45:41.788766] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:24:15.789 request: 00:24:15.789 { 00:24:15.789 "name": "TLSTEST", 00:24:15.789 "trtype": "tcp", 00:24:15.789 "traddr": "10.0.0.2", 00:24:15.789 "adrfam": "ipv4", 00:24:15.789 "trsvcid": "4420", 00:24:15.789 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:15.789 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:15.789 "prchk_reftag": false, 00:24:15.789 "prchk_guard": false, 00:24:15.789 "hdgst": false, 00:24:15.789 "ddgst": false, 00:24:15.789 "psk": "key0", 00:24:15.789 "allow_unrecognized_csi": false, 00:24:15.789 "method": "bdev_nvme_attach_controller", 00:24:15.789 "req_id": 1 00:24:15.789 } 00:24:15.789 Got JSON-RPC error response 00:24:15.789 response: 00:24:15.789 { 00:24:15.789 "code": -126, 00:24:15.789 "message": "Required key not available" 00:24:15.789 } 00:24:15.789 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 399679 00:24:15.789 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 399679 ']' 00:24:15.789 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 399679 00:24:15.789 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:15.789 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:15.789 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 399679 
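(Editor's note on the tls.sh:156 case traced above: the key path is an empty string, so the keyring rejects it before any TLS work happens — "Non-absolute paths are not allowed", surfaced over JSON-RPC as code -1 — and the subsequent attach fails with -126 because key0 was never registered. A minimal reproduction, reusing the socket, key name, and NQNs from the trace:)

  # Both calls fail by design; the error codes are the ones in the trace.
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ''    # -1: key path must be absolute
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk key0                      # -126: key0 was never added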
00:24:16.048 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:16.048 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:16.048 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 399679' 00:24:16.048 killing process with pid 399679 00:24:16.048 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 399679 00:24:16.048 Received shutdown signal, test time was about 10.000000 seconds 00:24:16.048 00:24:16.048 Latency(us) 00:24:16.048 [2024-12-16T11:45:42.115Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:16.048 [2024-12-16T11:45:42.115Z] =================================================================================================================== 00:24:16.048 [2024-12-16T11:45:42.115Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:16.048 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 399679 00:24:16.048 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:16.048 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:16.048 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:16.048 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:16.048 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:16.048 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 395273 00:24:16.048 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 395273 ']' 00:24:16.048 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 395273 00:24:16.048 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:16.048 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:16.048 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 395273 00:24:16.048 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:16.048 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:16.048 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 395273' 00:24:16.048 killing process with pid 395273 00:24:16.048 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 395273 00:24:16.048 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 395273 00:24:16.308 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:24:16.308 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:24:16.308 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:24:16.308 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:24:16.308 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:24:16.308 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=2 00:24:16.308 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:24:16.308 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:16.308 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:24:16.308 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.K7nSVjnHlu 00:24:16.308 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:16.308 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.K7nSVjnHlu 00:24:16.308 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:24:16.308 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:16.308 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:16.308 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:16.308 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=399839 00:24:16.308 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 399839 00:24:16.308 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:16.308 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 399839 ']' 00:24:16.308 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:16.308 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:16.308 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:16.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:16.308 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:16.308 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:16.308 [2024-12-16 12:45:42.373026] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:24:16.308 [2024-12-16 12:45:42.373070] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:16.567 [2024-12-16 12:45:42.443962] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.567 [2024-12-16 12:45:42.482080] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:16.567 [2024-12-16 12:45:42.482133] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
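(Editor's sketch: `format_interchange_psk` above turns the raw 48-character key into the TLS PSK interchange form `NVMeTLSkey-1:02:<base64>:` via an inline `python -` whose body xtrace does not echo. The reconstruction below is a guess inferred from the prefix/key/digest locals and the shape of the resulting key_long string — the base64-of-key-plus-CRC32 detail and the use of python3 are my assumptions, not shown in the trace.)

  format_interchange_psk() {   # hedged reconstruction, not the traced source
      local prefix="NVMeTLSkey-1" key="$1" digest="$2"
      python3 - "$prefix" "$key" "$digest" <<'PYEOF'
  import base64, sys, zlib
  prefix, key, digest = sys.argv[1], sys.argv[2].encode(), int(sys.argv[3])
  crc = zlib.crc32(key).to_bytes(4, "little")   # assumption: CRC-32 appended little-endian
  print(f"{prefix}:{digest:02x}:{base64.b64encode(key + crc).decode()}:")
  PYEOF
  }
  # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2
  # should reproduce the key_long value printed above, if the CRC assumption holds.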
00:24:16.567 [2024-12-16 12:45:42.482140] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:16.567 [2024-12-16 12:45:42.482146] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:16.567 [2024-12-16 12:45:42.482152] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:16.567 [2024-12-16 12:45:42.482170] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:16.567 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:16.567 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:16.567 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:16.567 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:16.567 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:16.567 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:16.567 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.K7nSVjnHlu 00:24:16.567 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.K7nSVjnHlu 00:24:16.567 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:16.825 [2024-12-16 12:45:42.786743] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:16.825 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:17.084 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:17.085 [2024-12-16 12:45:43.131631] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:17.085 [2024-12-16 12:45:43.131858] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:17.085 12:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:17.343 malloc0 00:24:17.343 12:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:17.602 12:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.K7nSVjnHlu 00:24:17.862 12:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:17.862 12:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.K7nSVjnHlu 00:24:17.862 12:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:24:17.862 12:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:17.862 12:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:17.862 12:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.K7nSVjnHlu 00:24:17.862 12:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:17.862 12:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:17.862 12:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=400091 00:24:17.862 12:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:17.862 12:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 400091 /var/tmp/bdevperf.sock 00:24:17.862 12:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 400091 ']' 00:24:17.862 12:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:17.862 12:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:17.862 12:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:17.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:17.862 12:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:17.862 12:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:17.862 [2024-12-16 12:45:43.897890] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
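(Editor's recap of `setup_nvmf_tgt`, the target-side half of the first passing case (tls.sh:166/168) traced above: the listener is created with `-k` to enable TLS, and the PSK is bound to the host NQN via `--psk`. Collapsed from the trace into one sequence; rpc.py talks to the target's default socket here, as it does in the trace.)

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k  # -k: TLS listener
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py keyring_file_add_key key0 /tmp/tmp.K7nSVjnHlu
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

With both sides holding the same PSK, the TLSTESTn1 bdev attaches and the 10-second verify run above completes at roughly 5.2k IOPS.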
00:24:17.862 [2024-12-16 12:45:43.897936] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid400091 ] 00:24:18.121 [2024-12-16 12:45:43.964457] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.121 [2024-12-16 12:45:44.004316] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:18.121 12:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:18.121 12:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:18.121 12:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.K7nSVjnHlu 00:24:18.380 12:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:18.380 [2024-12-16 12:45:44.437979] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:18.639 TLSTESTn1 00:24:18.639 12:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:18.639 Running I/O for 10 seconds... 00:24:20.953 5240.00 IOPS, 20.47 MiB/s [2024-12-16T11:45:47.956Z] 5237.50 IOPS, 20.46 MiB/s [2024-12-16T11:45:48.892Z] 5339.00 IOPS, 20.86 MiB/s [2024-12-16T11:45:49.829Z] 5393.75 IOPS, 21.07 MiB/s [2024-12-16T11:45:50.765Z] 5353.40 IOPS, 20.91 MiB/s [2024-12-16T11:45:51.701Z] 5324.00 IOPS, 20.80 MiB/s [2024-12-16T11:45:52.635Z] 5254.29 IOPS, 20.52 MiB/s [2024-12-16T11:45:54.013Z] 5274.75 IOPS, 20.60 MiB/s [2024-12-16T11:45:54.950Z] 5227.89 IOPS, 20.42 MiB/s [2024-12-16T11:45:54.950Z] 5211.80 IOPS, 20.36 MiB/s 00:24:28.883 Latency(us) 00:24:28.883 [2024-12-16T11:45:54.950Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:28.883 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:28.883 Verification LBA range: start 0x0 length 0x2000 00:24:28.883 TLSTESTn1 : 10.02 5215.29 20.37 0.00 0.00 24506.80 5679.79 31956.60 00:24:28.883 [2024-12-16T11:45:54.950Z] =================================================================================================================== 00:24:28.883 [2024-12-16T11:45:54.950Z] Total : 5215.29 20.37 0.00 0.00 24506.80 5679.79 31956.60 00:24:28.883 { 00:24:28.883 "results": [ 00:24:28.883 { 00:24:28.883 "job": "TLSTESTn1", 00:24:28.883 "core_mask": "0x4", 00:24:28.883 "workload": "verify", 00:24:28.883 "status": "finished", 00:24:28.883 "verify_range": { 00:24:28.883 "start": 0, 00:24:28.883 "length": 8192 00:24:28.883 }, 00:24:28.883 "queue_depth": 128, 00:24:28.883 "io_size": 4096, 00:24:28.883 "runtime": 10.017661, 00:24:28.883 "iops": 5215.289277606818, 00:24:28.883 "mibps": 20.372223740651634, 00:24:28.883 "io_failed": 0, 00:24:28.883 "io_timeout": 0, 00:24:28.883 "avg_latency_us": 24506.80324981657, 00:24:28.883 "min_latency_us": 5679.786666666667, 00:24:28.883 "max_latency_us": 31956.601904761905 00:24:28.883 } 00:24:28.883 ], 00:24:28.883 
"core_count": 1 00:24:28.883 } 00:24:28.883 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:28.883 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 400091 00:24:28.883 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 400091 ']' 00:24:28.883 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 400091 00:24:28.883 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:28.883 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:28.883 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 400091 00:24:28.883 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:28.883 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:28.883 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 400091' 00:24:28.883 killing process with pid 400091 00:24:28.883 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 400091 00:24:28.883 Received shutdown signal, test time was about 10.000000 seconds 00:24:28.883 00:24:28.883 Latency(us) 00:24:28.883 [2024-12-16T11:45:54.950Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:28.883 [2024-12-16T11:45:54.950Z] =================================================================================================================== 00:24:28.883 [2024-12-16T11:45:54.950Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:28.883 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 400091 00:24:28.883 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.K7nSVjnHlu 00:24:28.883 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.K7nSVjnHlu 00:24:28.883 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:28.883 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.K7nSVjnHlu 00:24:28.883 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:24:28.883 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:28.883 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:24:28.883 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:28.883 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.K7nSVjnHlu 00:24:28.883 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:28.883 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:28.883 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:28.883 
12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.K7nSVjnHlu 00:24:28.883 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:28.883 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=401860 00:24:28.883 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:28.883 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:28.883 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 401860 /var/tmp/bdevperf.sock 00:24:28.883 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 401860 ']' 00:24:28.883 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:28.883 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:28.883 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:28.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:28.883 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:28.883 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:29.142 [2024-12-16 12:45:54.963216] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:24:29.142 [2024-12-16 12:45:54.963259] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid401860 ] 00:24:29.142 [2024-12-16 12:45:55.024764] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.142 [2024-12-16 12:45:55.059816] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:29.142 12:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:29.142 12:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:29.142 12:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.K7nSVjnHlu 00:24:29.400 [2024-12-16 12:45:55.328138] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.K7nSVjnHlu': 0100666 00:24:29.400 [2024-12-16 12:45:55.328171] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:29.400 request: 00:24:29.400 { 00:24:29.400 "name": "key0", 00:24:29.400 "path": "/tmp/tmp.K7nSVjnHlu", 00:24:29.400 "method": "keyring_file_add_key", 00:24:29.400 "req_id": 1 00:24:29.400 } 00:24:29.400 Got JSON-RPC error response 00:24:29.400 response: 00:24:29.400 { 00:24:29.400 "code": -1, 00:24:29.400 "message": "Operation not permitted" 00:24:29.400 } 00:24:29.400 12:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:29.659 [2024-12-16 12:45:55.540750] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:29.659 [2024-12-16 12:45:55.540776] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:24:29.659 request: 00:24:29.659 { 00:24:29.659 "name": "TLSTEST", 00:24:29.659 "trtype": "tcp", 00:24:29.659 "traddr": "10.0.0.2", 00:24:29.659 "adrfam": "ipv4", 00:24:29.659 "trsvcid": "4420", 00:24:29.659 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:29.659 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:29.659 "prchk_reftag": false, 00:24:29.659 "prchk_guard": false, 00:24:29.659 "hdgst": false, 00:24:29.659 "ddgst": false, 00:24:29.659 "psk": "key0", 00:24:29.659 "allow_unrecognized_csi": false, 00:24:29.659 "method": "bdev_nvme_attach_controller", 00:24:29.659 "req_id": 1 00:24:29.659 } 00:24:29.659 Got JSON-RPC error response 00:24:29.659 response: 00:24:29.659 { 00:24:29.659 "code": -126, 00:24:29.659 "message": "Required key not available" 00:24:29.659 } 00:24:29.660 12:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 401860 00:24:29.660 12:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 401860 ']' 00:24:29.660 12:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 401860 00:24:29.660 12:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:29.660 12:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:29.660 12:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 401860 00:24:29.660 12:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:29.660 12:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:29.660 12:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 401860' 00:24:29.660 killing process with pid 401860 00:24:29.660 12:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 401860 00:24:29.660 Received shutdown signal, test time was about 10.000000 seconds 00:24:29.660 00:24:29.660 Latency(us) 00:24:29.660 [2024-12-16T11:45:55.727Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:29.660 [2024-12-16T11:45:55.727Z] =================================================================================================================== 00:24:29.660 [2024-12-16T11:45:55.727Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:29.660 12:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 401860 00:24:29.919 12:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:29.919 12:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:29.919 12:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:29.919 12:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:29.919 12:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:29.919 12:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 399839 00:24:29.919 12:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 399839 ']' 00:24:29.919 12:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 399839 00:24:29.919 12:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:29.919 12:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:29.919 12:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 399839 00:24:29.919 12:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:29.919 12:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:29.919 12:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 399839' 00:24:29.919 killing process with pid 399839 00:24:29.919 12:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 399839 00:24:29.919 12:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 399839 00:24:30.178 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:24:30.178 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:30.178 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:30.178 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:30.178 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:30.178 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=402094 00:24:30.178 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 402094 00:24:30.178 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 402094 ']' 00:24:30.178 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:30.178 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:30.178 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:30.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:30.178 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:30.178 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:30.178 [2024-12-16 12:45:56.078228] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:24:30.178 [2024-12-16 12:45:56.078273] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:30.178 [2024-12-16 12:45:56.147427] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:30.178 [2024-12-16 12:45:56.185782] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:30.178 [2024-12-16 12:45:56.185820] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:30.178 [2024-12-16 12:45:56.185827] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:30.178 [2024-12-16 12:45:56.185833] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:30.179 [2024-12-16 12:45:56.185839] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
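The two errors above are the intended negative case: keyring_file_add_key refuses the PSK file because its mode, 0100666, leaves it group- and world-readable, and the follow-on bdev_nvme_attach_controller then fails with -126 ("Required key not available") since key0 was never registered. The same 0666 key is deliberately reused below (target/tls.sh@178) to exercise the target-side failure path before target/tls.sh@182 finally applies chmod 0600. A minimal sketch of the failure and its fix, assuming rpc.py is on PATH and talking to the bdevperf RPC socket used above:

# 0666 is rejected with 'Operation not permitted'; the keyring wants an owner-only file
chmod 0600 /tmp/tmp.K7nSVjnHlu
# with mode 0600 the same registration succeeds
rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.K7nSVjnHlu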
00:24:30.179 [2024-12-16 12:45:56.185862] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:30.438 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:30.438 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:30.438 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:30.438 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:30.438 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:30.438 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:30.438 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.K7nSVjnHlu 00:24:30.438 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:30.438 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.K7nSVjnHlu 00:24:30.438 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:24:30.438 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:30.438 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:24:30.438 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:30.438 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.K7nSVjnHlu 00:24:30.438 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.K7nSVjnHlu 00:24:30.438 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:30.438 [2024-12-16 12:45:56.486100] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:30.697 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:30.697 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:30.956 [2024-12-16 12:45:56.883124] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:30.956 [2024-12-16 12:45:56.883324] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:30.956 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:31.216 malloc0 00:24:31.216 12:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:31.475 12:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.K7nSVjnHlu 00:24:31.475 [2024-12-16 
12:45:57.465206] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.K7nSVjnHlu': 0100666 00:24:31.475 [2024-12-16 12:45:57.465232] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:31.475 request: 00:24:31.475 { 00:24:31.475 "name": "key0", 00:24:31.475 "path": "/tmp/tmp.K7nSVjnHlu", 00:24:31.475 "method": "keyring_file_add_key", 00:24:31.475 "req_id": 1 00:24:31.475 } 00:24:31.475 Got JSON-RPC error response 00:24:31.475 response: 00:24:31.475 { 00:24:31.475 "code": -1, 00:24:31.475 "message": "Operation not permitted" 00:24:31.475 } 00:24:31.475 12:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:31.734 [2024-12-16 12:45:57.653708] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:24:31.734 [2024-12-16 12:45:57.653738] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:24:31.734 request: 00:24:31.734 { 00:24:31.734 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:31.734 "host": "nqn.2016-06.io.spdk:host1", 00:24:31.734 "psk": "key0", 00:24:31.734 "method": "nvmf_subsystem_add_host", 00:24:31.734 "req_id": 1 00:24:31.734 } 00:24:31.734 Got JSON-RPC error response 00:24:31.734 response: 00:24:31.734 { 00:24:31.734 "code": -32603, 00:24:31.734 "message": "Internal error" 00:24:31.734 } 00:24:31.734 12:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:31.734 12:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:31.734 12:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:31.734 12:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:31.734 12:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 402094 00:24:31.734 12:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 402094 ']' 00:24:31.734 12:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 402094 00:24:31.734 12:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:31.734 12:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:31.734 12:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 402094 00:24:31.734 12:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:31.734 12:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:31.734 12:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 402094' 00:24:31.734 killing process with pid 402094 00:24:31.734 12:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 402094 00:24:31.734 12:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 402094 00:24:31.993 12:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.K7nSVjnHlu 00:24:31.993 12:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:24:31.993 12:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:31.993 12:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:31.993 12:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:31.993 12:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=402357 00:24:31.993 12:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:31.993 12:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 402357 00:24:31.993 12:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 402357 ']' 00:24:31.993 12:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:31.993 12:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:31.993 12:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:31.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:31.993 12:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:31.993 12:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:31.993 [2024-12-16 12:45:57.975602] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:24:31.993 [2024-12-16 12:45:57.975646] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:31.993 [2024-12-16 12:45:58.047933] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.252 [2024-12-16 12:45:58.085996] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:32.252 [2024-12-16 12:45:58.086037] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:32.252 [2024-12-16 12:45:58.086043] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:32.252 [2024-12-16 12:45:58.086049] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:32.252 [2024-12-16 12:45:58.086054] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
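With the key file mode corrected (chmod 0600 at target/tls.sh@182 above), the freshly started target can be configured end to end. Condensed, the bring-up that setup_nvmf_tgt drives below is, as a sketch with the full rpc.py path shortened:

rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
# -k marks the listener as TLS-enabled
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py keyring_file_add_key key0 /tmp/tmp.K7nSVjnHlu
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0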
00:24:32.252 [2024-12-16 12:45:58.086072] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:32.252 12:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:32.252 12:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:32.252 12:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:32.252 12:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:32.252 12:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:32.252 12:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:32.252 12:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.K7nSVjnHlu 00:24:32.252 12:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.K7nSVjnHlu 00:24:32.252 12:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:32.511 [2024-12-16 12:45:58.382432] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:32.511 12:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:32.769 12:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:32.769 [2024-12-16 12:45:58.759401] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:32.769 [2024-12-16 12:45:58.759631] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:32.770 12:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:33.028 malloc0 00:24:33.028 12:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:33.287 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.K7nSVjnHlu 00:24:33.287 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:33.546 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=402652 00:24:33.546 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:33.546 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:33.546 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 402652 /var/tmp/bdevperf.sock 00:24:33.546 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 402652 ']' 00:24:33.546 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:33.546 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:33.546 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:33.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:33.546 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:33.546 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:33.546 [2024-12-16 12:45:59.535051] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:24:33.546 [2024-12-16 12:45:59.535100] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid402652 ] 00:24:33.546 [2024-12-16 12:45:59.604686] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:33.805 [2024-12-16 12:45:59.643302] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:33.805 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:33.805 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:33.805 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.K7nSVjnHlu 00:24:34.063 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:34.063 [2024-12-16 12:46:00.071623] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:34.321 TLSTESTn1 00:24:34.321 12:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:24:34.581 12:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:24:34.581 "subsystems": [ 00:24:34.581 { 00:24:34.581 "subsystem": "keyring", 00:24:34.581 "config": [ 00:24:34.581 { 00:24:34.581 "method": "keyring_file_add_key", 00:24:34.581 "params": { 00:24:34.581 "name": "key0", 00:24:34.581 "path": "/tmp/tmp.K7nSVjnHlu" 00:24:34.581 } 00:24:34.581 } 00:24:34.581 ] 00:24:34.581 }, 00:24:34.581 { 00:24:34.581 "subsystem": "iobuf", 00:24:34.581 "config": [ 00:24:34.581 { 00:24:34.581 "method": "iobuf_set_options", 00:24:34.581 "params": { 00:24:34.581 "small_pool_count": 8192, 00:24:34.581 "large_pool_count": 1024, 00:24:34.581 "small_bufsize": 8192, 00:24:34.581 "large_bufsize": 135168 00:24:34.581 } 00:24:34.581 } 00:24:34.581 ] 00:24:34.581 }, 00:24:34.581 { 00:24:34.581 "subsystem": "sock", 00:24:34.581 "config": [ 00:24:34.581 { 00:24:34.581 "method": "sock_set_default_impl", 00:24:34.581 "params": { 00:24:34.581 "impl_name": "posix" 00:24:34.581 } 00:24:34.581 }, 
00:24:34.581 { 00:24:34.581 "method": "sock_impl_set_options", 00:24:34.581 "params": { 00:24:34.581 "impl_name": "ssl", 00:24:34.581 "recv_buf_size": 4096, 00:24:34.581 "send_buf_size": 4096, 00:24:34.581 "enable_recv_pipe": true, 00:24:34.581 "enable_quickack": false, 00:24:34.581 "enable_placement_id": 0, 00:24:34.581 "enable_zerocopy_send_server": true, 00:24:34.581 "enable_zerocopy_send_client": false, 00:24:34.581 "zerocopy_threshold": 0, 00:24:34.581 "tls_version": 0, 00:24:34.581 "enable_ktls": false 00:24:34.581 } 00:24:34.581 }, 00:24:34.581 { 00:24:34.581 "method": "sock_impl_set_options", 00:24:34.581 "params": { 00:24:34.581 "impl_name": "posix", 00:24:34.581 "recv_buf_size": 2097152, 00:24:34.581 "send_buf_size": 2097152, 00:24:34.581 "enable_recv_pipe": true, 00:24:34.581 "enable_quickack": false, 00:24:34.581 "enable_placement_id": 0, 00:24:34.581 "enable_zerocopy_send_server": true, 00:24:34.581 "enable_zerocopy_send_client": false, 00:24:34.581 "zerocopy_threshold": 0, 00:24:34.581 "tls_version": 0, 00:24:34.581 "enable_ktls": false 00:24:34.581 } 00:24:34.581 } 00:24:34.581 ] 00:24:34.581 }, 00:24:34.582 { 00:24:34.582 "subsystem": "vmd", 00:24:34.582 "config": [] 00:24:34.582 }, 00:24:34.582 { 00:24:34.582 "subsystem": "accel", 00:24:34.582 "config": [ 00:24:34.582 { 00:24:34.582 "method": "accel_set_options", 00:24:34.582 "params": { 00:24:34.582 "small_cache_size": 128, 00:24:34.582 "large_cache_size": 16, 00:24:34.582 "task_count": 2048, 00:24:34.582 "sequence_count": 2048, 00:24:34.582 "buf_count": 2048 00:24:34.582 } 00:24:34.582 } 00:24:34.582 ] 00:24:34.582 }, 00:24:34.582 { 00:24:34.582 "subsystem": "bdev", 00:24:34.582 "config": [ 00:24:34.582 { 00:24:34.582 "method": "bdev_set_options", 00:24:34.582 "params": { 00:24:34.582 "bdev_io_pool_size": 65535, 00:24:34.582 "bdev_io_cache_size": 256, 00:24:34.582 "bdev_auto_examine": true, 00:24:34.582 "iobuf_small_cache_size": 128, 00:24:34.582 "iobuf_large_cache_size": 16 00:24:34.582 } 00:24:34.582 }, 00:24:34.582 { 00:24:34.582 "method": "bdev_raid_set_options", 00:24:34.582 "params": { 00:24:34.582 "process_window_size_kb": 1024, 00:24:34.582 "process_max_bandwidth_mb_sec": 0 00:24:34.582 } 00:24:34.582 }, 00:24:34.582 { 00:24:34.582 "method": "bdev_iscsi_set_options", 00:24:34.582 "params": { 00:24:34.582 "timeout_sec": 30 00:24:34.582 } 00:24:34.582 }, 00:24:34.582 { 00:24:34.582 "method": "bdev_nvme_set_options", 00:24:34.582 "params": { 00:24:34.582 "action_on_timeout": "none", 00:24:34.582 "timeout_us": 0, 00:24:34.582 "timeout_admin_us": 0, 00:24:34.582 "keep_alive_timeout_ms": 10000, 00:24:34.582 "arbitration_burst": 0, 00:24:34.582 "low_priority_weight": 0, 00:24:34.582 "medium_priority_weight": 0, 00:24:34.582 "high_priority_weight": 0, 00:24:34.582 "nvme_adminq_poll_period_us": 10000, 00:24:34.582 "nvme_ioq_poll_period_us": 0, 00:24:34.582 "io_queue_requests": 0, 00:24:34.582 "delay_cmd_submit": true, 00:24:34.582 "transport_retry_count": 4, 00:24:34.582 "bdev_retry_count": 3, 00:24:34.582 "transport_ack_timeout": 0, 00:24:34.582 "ctrlr_loss_timeout_sec": 0, 00:24:34.582 "reconnect_delay_sec": 0, 00:24:34.582 "fast_io_fail_timeout_sec": 0, 00:24:34.582 "disable_auto_failback": false, 00:24:34.582 "generate_uuids": false, 00:24:34.582 "transport_tos": 0, 00:24:34.582 "nvme_error_stat": false, 00:24:34.582 "rdma_srq_size": 0, 00:24:34.582 "io_path_stat": false, 00:24:34.582 "allow_accel_sequence": false, 00:24:34.582 "rdma_max_cq_size": 0, 00:24:34.582 "rdma_cm_event_timeout_ms": 0, 00:24:34.582 
"dhchap_digests": [ 00:24:34.582 "sha256", 00:24:34.582 "sha384", 00:24:34.582 "sha512" 00:24:34.582 ], 00:24:34.582 "dhchap_dhgroups": [ 00:24:34.582 "null", 00:24:34.582 "ffdhe2048", 00:24:34.582 "ffdhe3072", 00:24:34.582 "ffdhe4096", 00:24:34.582 "ffdhe6144", 00:24:34.582 "ffdhe8192" 00:24:34.582 ] 00:24:34.582 } 00:24:34.582 }, 00:24:34.582 { 00:24:34.582 "method": "bdev_nvme_set_hotplug", 00:24:34.582 "params": { 00:24:34.582 "period_us": 100000, 00:24:34.582 "enable": false 00:24:34.582 } 00:24:34.582 }, 00:24:34.582 { 00:24:34.582 "method": "bdev_malloc_create", 00:24:34.582 "params": { 00:24:34.582 "name": "malloc0", 00:24:34.582 "num_blocks": 8192, 00:24:34.582 "block_size": 4096, 00:24:34.582 "physical_block_size": 4096, 00:24:34.582 "uuid": "a9f7beef-a59d-4e49-88d4-f6e1f795b5a2", 00:24:34.582 "optimal_io_boundary": 0, 00:24:34.582 "md_size": 0, 00:24:34.582 "dif_type": 0, 00:24:34.582 "dif_is_head_of_md": false, 00:24:34.582 "dif_pi_format": 0 00:24:34.582 } 00:24:34.582 }, 00:24:34.582 { 00:24:34.582 "method": "bdev_wait_for_examine" 00:24:34.582 } 00:24:34.582 ] 00:24:34.582 }, 00:24:34.582 { 00:24:34.582 "subsystem": "nbd", 00:24:34.582 "config": [] 00:24:34.582 }, 00:24:34.582 { 00:24:34.582 "subsystem": "scheduler", 00:24:34.582 "config": [ 00:24:34.582 { 00:24:34.582 "method": "framework_set_scheduler", 00:24:34.582 "params": { 00:24:34.582 "name": "static" 00:24:34.582 } 00:24:34.582 } 00:24:34.582 ] 00:24:34.582 }, 00:24:34.582 { 00:24:34.582 "subsystem": "nvmf", 00:24:34.582 "config": [ 00:24:34.582 { 00:24:34.582 "method": "nvmf_set_config", 00:24:34.582 "params": { 00:24:34.582 "discovery_filter": "match_any", 00:24:34.582 "admin_cmd_passthru": { 00:24:34.582 "identify_ctrlr": false 00:24:34.582 }, 00:24:34.582 "dhchap_digests": [ 00:24:34.582 "sha256", 00:24:34.582 "sha384", 00:24:34.582 "sha512" 00:24:34.582 ], 00:24:34.582 "dhchap_dhgroups": [ 00:24:34.582 "null", 00:24:34.582 "ffdhe2048", 00:24:34.582 "ffdhe3072", 00:24:34.582 "ffdhe4096", 00:24:34.582 "ffdhe6144", 00:24:34.582 "ffdhe8192" 00:24:34.582 ] 00:24:34.582 } 00:24:34.582 }, 00:24:34.582 { 00:24:34.582 "method": "nvmf_set_max_subsystems", 00:24:34.582 "params": { 00:24:34.582 "max_subsystems": 1024 00:24:34.582 } 00:24:34.582 }, 00:24:34.582 { 00:24:34.582 "method": "nvmf_set_crdt", 00:24:34.582 "params": { 00:24:34.582 "crdt1": 0, 00:24:34.582 "crdt2": 0, 00:24:34.582 "crdt3": 0 00:24:34.582 } 00:24:34.582 }, 00:24:34.582 { 00:24:34.582 "method": "nvmf_create_transport", 00:24:34.582 "params": { 00:24:34.582 "trtype": "TCP", 00:24:34.582 "max_queue_depth": 128, 00:24:34.582 "max_io_qpairs_per_ctrlr": 127, 00:24:34.582 "in_capsule_data_size": 4096, 00:24:34.582 "max_io_size": 131072, 00:24:34.582 "io_unit_size": 131072, 00:24:34.582 "max_aq_depth": 128, 00:24:34.582 "num_shared_buffers": 511, 00:24:34.582 "buf_cache_size": 4294967295, 00:24:34.582 "dif_insert_or_strip": false, 00:24:34.582 "zcopy": false, 00:24:34.582 "c2h_success": false, 00:24:34.582 "sock_priority": 0, 00:24:34.582 "abort_timeout_sec": 1, 00:24:34.582 "ack_timeout": 0, 00:24:34.582 "data_wr_pool_size": 0 00:24:34.582 } 00:24:34.582 }, 00:24:34.582 { 00:24:34.582 "method": "nvmf_create_subsystem", 00:24:34.582 "params": { 00:24:34.582 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:34.582 "allow_any_host": false, 00:24:34.582 "serial_number": "SPDK00000000000001", 00:24:34.582 "model_number": "SPDK bdev Controller", 00:24:34.582 "max_namespaces": 10, 00:24:34.582 "min_cntlid": 1, 00:24:34.582 "max_cntlid": 65519, 00:24:34.582 
"ana_reporting": false 00:24:34.582 } 00:24:34.582 }, 00:24:34.582 { 00:24:34.582 "method": "nvmf_subsystem_add_host", 00:24:34.582 "params": { 00:24:34.582 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:34.582 "host": "nqn.2016-06.io.spdk:host1", 00:24:34.582 "psk": "key0" 00:24:34.582 } 00:24:34.582 }, 00:24:34.582 { 00:24:34.582 "method": "nvmf_subsystem_add_ns", 00:24:34.582 "params": { 00:24:34.582 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:34.582 "namespace": { 00:24:34.582 "nsid": 1, 00:24:34.582 "bdev_name": "malloc0", 00:24:34.582 "nguid": "A9F7BEEFA59D4E4988D4F6E1F795B5A2", 00:24:34.582 "uuid": "a9f7beef-a59d-4e49-88d4-f6e1f795b5a2", 00:24:34.582 "no_auto_visible": false 00:24:34.582 } 00:24:34.582 } 00:24:34.582 }, 00:24:34.582 { 00:24:34.582 "method": "nvmf_subsystem_add_listener", 00:24:34.582 "params": { 00:24:34.582 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:34.582 "listen_address": { 00:24:34.582 "trtype": "TCP", 00:24:34.582 "adrfam": "IPv4", 00:24:34.582 "traddr": "10.0.0.2", 00:24:34.582 "trsvcid": "4420" 00:24:34.582 }, 00:24:34.582 "secure_channel": true 00:24:34.582 } 00:24:34.582 } 00:24:34.582 ] 00:24:34.582 } 00:24:34.582 ] 00:24:34.582 }' 00:24:34.582 12:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:34.842 12:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:24:34.842 "subsystems": [ 00:24:34.842 { 00:24:34.842 "subsystem": "keyring", 00:24:34.842 "config": [ 00:24:34.842 { 00:24:34.842 "method": "keyring_file_add_key", 00:24:34.842 "params": { 00:24:34.842 "name": "key0", 00:24:34.842 "path": "/tmp/tmp.K7nSVjnHlu" 00:24:34.842 } 00:24:34.842 } 00:24:34.842 ] 00:24:34.842 }, 00:24:34.842 { 00:24:34.842 "subsystem": "iobuf", 00:24:34.842 "config": [ 00:24:34.842 { 00:24:34.842 "method": "iobuf_set_options", 00:24:34.842 "params": { 00:24:34.842 "small_pool_count": 8192, 00:24:34.842 "large_pool_count": 1024, 00:24:34.842 "small_bufsize": 8192, 00:24:34.842 "large_bufsize": 135168 00:24:34.842 } 00:24:34.842 } 00:24:34.842 ] 00:24:34.842 }, 00:24:34.842 { 00:24:34.842 "subsystem": "sock", 00:24:34.842 "config": [ 00:24:34.842 { 00:24:34.842 "method": "sock_set_default_impl", 00:24:34.842 "params": { 00:24:34.842 "impl_name": "posix" 00:24:34.842 } 00:24:34.842 }, 00:24:34.842 { 00:24:34.842 "method": "sock_impl_set_options", 00:24:34.842 "params": { 00:24:34.842 "impl_name": "ssl", 00:24:34.842 "recv_buf_size": 4096, 00:24:34.842 "send_buf_size": 4096, 00:24:34.842 "enable_recv_pipe": true, 00:24:34.842 "enable_quickack": false, 00:24:34.842 "enable_placement_id": 0, 00:24:34.842 "enable_zerocopy_send_server": true, 00:24:34.842 "enable_zerocopy_send_client": false, 00:24:34.842 "zerocopy_threshold": 0, 00:24:34.842 "tls_version": 0, 00:24:34.842 "enable_ktls": false 00:24:34.842 } 00:24:34.842 }, 00:24:34.842 { 00:24:34.842 "method": "sock_impl_set_options", 00:24:34.843 "params": { 00:24:34.843 "impl_name": "posix", 00:24:34.843 "recv_buf_size": 2097152, 00:24:34.843 "send_buf_size": 2097152, 00:24:34.843 "enable_recv_pipe": true, 00:24:34.843 "enable_quickack": false, 00:24:34.843 "enable_placement_id": 0, 00:24:34.843 "enable_zerocopy_send_server": true, 00:24:34.843 "enable_zerocopy_send_client": false, 00:24:34.843 "zerocopy_threshold": 0, 00:24:34.843 "tls_version": 0, 00:24:34.843 "enable_ktls": false 00:24:34.843 } 00:24:34.843 } 00:24:34.843 ] 00:24:34.843 }, 00:24:34.843 { 00:24:34.843 
"subsystem": "vmd", 00:24:34.843 "config": [] 00:24:34.843 }, 00:24:34.843 { 00:24:34.843 "subsystem": "accel", 00:24:34.843 "config": [ 00:24:34.843 { 00:24:34.843 "method": "accel_set_options", 00:24:34.843 "params": { 00:24:34.843 "small_cache_size": 128, 00:24:34.843 "large_cache_size": 16, 00:24:34.843 "task_count": 2048, 00:24:34.843 "sequence_count": 2048, 00:24:34.843 "buf_count": 2048 00:24:34.843 } 00:24:34.843 } 00:24:34.843 ] 00:24:34.843 }, 00:24:34.843 { 00:24:34.843 "subsystem": "bdev", 00:24:34.843 "config": [ 00:24:34.843 { 00:24:34.843 "method": "bdev_set_options", 00:24:34.843 "params": { 00:24:34.843 "bdev_io_pool_size": 65535, 00:24:34.843 "bdev_io_cache_size": 256, 00:24:34.843 "bdev_auto_examine": true, 00:24:34.843 "iobuf_small_cache_size": 128, 00:24:34.843 "iobuf_large_cache_size": 16 00:24:34.843 } 00:24:34.843 }, 00:24:34.843 { 00:24:34.843 "method": "bdev_raid_set_options", 00:24:34.843 "params": { 00:24:34.843 "process_window_size_kb": 1024, 00:24:34.843 "process_max_bandwidth_mb_sec": 0 00:24:34.843 } 00:24:34.843 }, 00:24:34.843 { 00:24:34.843 "method": "bdev_iscsi_set_options", 00:24:34.843 "params": { 00:24:34.843 "timeout_sec": 30 00:24:34.843 } 00:24:34.843 }, 00:24:34.843 { 00:24:34.843 "method": "bdev_nvme_set_options", 00:24:34.843 "params": { 00:24:34.843 "action_on_timeout": "none", 00:24:34.843 "timeout_us": 0, 00:24:34.843 "timeout_admin_us": 0, 00:24:34.843 "keep_alive_timeout_ms": 10000, 00:24:34.843 "arbitration_burst": 0, 00:24:34.843 "low_priority_weight": 0, 00:24:34.843 "medium_priority_weight": 0, 00:24:34.843 "high_priority_weight": 0, 00:24:34.843 "nvme_adminq_poll_period_us": 10000, 00:24:34.843 "nvme_ioq_poll_period_us": 0, 00:24:34.843 "io_queue_requests": 512, 00:24:34.843 "delay_cmd_submit": true, 00:24:34.843 "transport_retry_count": 4, 00:24:34.843 "bdev_retry_count": 3, 00:24:34.843 "transport_ack_timeout": 0, 00:24:34.843 "ctrlr_loss_timeout_sec": 0, 00:24:34.843 "reconnect_delay_sec": 0, 00:24:34.843 "fast_io_fail_timeout_sec": 0, 00:24:34.843 "disable_auto_failback": false, 00:24:34.843 "generate_uuids": false, 00:24:34.843 "transport_tos": 0, 00:24:34.843 "nvme_error_stat": false, 00:24:34.843 "rdma_srq_size": 0, 00:24:34.843 "io_path_stat": false, 00:24:34.843 "allow_accel_sequence": false, 00:24:34.843 "rdma_max_cq_size": 0, 00:24:34.843 "rdma_cm_event_timeout_ms": 0, 00:24:34.843 "dhchap_digests": [ 00:24:34.843 "sha256", 00:24:34.843 "sha384", 00:24:34.843 "sha512" 00:24:34.843 ], 00:24:34.843 "dhchap_dhgroups": [ 00:24:34.843 "null", 00:24:34.843 "ffdhe2048", 00:24:34.843 "ffdhe3072", 00:24:34.843 "ffdhe4096", 00:24:34.843 "ffdhe6144", 00:24:34.843 "ffdhe8192" 00:24:34.843 ] 00:24:34.843 } 00:24:34.843 }, 00:24:34.843 { 00:24:34.843 "method": "bdev_nvme_attach_controller", 00:24:34.843 "params": { 00:24:34.843 "name": "TLSTEST", 00:24:34.843 "trtype": "TCP", 00:24:34.843 "adrfam": "IPv4", 00:24:34.843 "traddr": "10.0.0.2", 00:24:34.843 "trsvcid": "4420", 00:24:34.843 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:34.843 "prchk_reftag": false, 00:24:34.843 "prchk_guard": false, 00:24:34.843 "ctrlr_loss_timeout_sec": 0, 00:24:34.843 "reconnect_delay_sec": 0, 00:24:34.843 "fast_io_fail_timeout_sec": 0, 00:24:34.843 "psk": "key0", 00:24:34.843 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:34.843 "hdgst": false, 00:24:34.843 "ddgst": false 00:24:34.843 } 00:24:34.843 }, 00:24:34.843 { 00:24:34.843 "method": "bdev_nvme_set_hotplug", 00:24:34.843 "params": { 00:24:34.843 "period_us": 100000, 00:24:34.843 "enable": false 
00:24:34.843 } 00:24:34.843 }, 00:24:34.843 { 00:24:34.843 "method": "bdev_wait_for_examine" 00:24:34.843 } 00:24:34.843 ] 00:24:34.843 }, 00:24:34.843 { 00:24:34.843 "subsystem": "nbd", 00:24:34.843 "config": [] 00:24:34.843 } 00:24:34.843 ] 00:24:34.843 }' 00:24:34.843 12:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 402652 00:24:34.843 12:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 402652 ']' 00:24:34.843 12:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 402652 00:24:34.843 12:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:34.843 12:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:34.843 12:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 402652 00:24:34.843 12:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:34.843 12:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:34.843 12:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 402652' 00:24:34.843 killing process with pid 402652 00:24:34.843 12:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 402652 00:24:34.843 Received shutdown signal, test time was about 10.000000 seconds 00:24:34.843 00:24:34.843 Latency(us) 00:24:34.843 [2024-12-16T11:46:00.910Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:34.843 [2024-12-16T11:46:00.910Z] =================================================================================================================== 00:24:34.843 [2024-12-16T11:46:00.910Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:34.843 12:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 402652 00:24:35.104 12:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 402357 00:24:35.104 12:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 402357 ']' 00:24:35.104 12:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 402357 00:24:35.104 12:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:35.104 12:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:35.104 12:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 402357 00:24:35.104 12:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:35.104 12:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:35.104 12:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 402357' 00:24:35.104 killing process with pid 402357 00:24:35.104 12:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 402357 00:24:35.104 12:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 402357 00:24:35.104 12:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:24:35.104 12:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:35.104 12:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:35.104 12:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:24:35.104 "subsystems": [ 00:24:35.104 { 00:24:35.104 "subsystem": "keyring", 00:24:35.104 "config": [ 00:24:35.104 { 00:24:35.104 "method": "keyring_file_add_key", 00:24:35.104 "params": { 00:24:35.104 "name": "key0", 00:24:35.104 "path": "/tmp/tmp.K7nSVjnHlu" 00:24:35.104 } 00:24:35.104 } 00:24:35.104 ] 00:24:35.104 }, 00:24:35.104 { 00:24:35.104 "subsystem": "iobuf", 00:24:35.104 "config": [ 00:24:35.104 { 00:24:35.104 "method": "iobuf_set_options", 00:24:35.104 "params": { 00:24:35.104 "small_pool_count": 8192, 00:24:35.104 "large_pool_count": 1024, 00:24:35.104 "small_bufsize": 8192, 00:24:35.104 "large_bufsize": 135168 00:24:35.104 } 00:24:35.104 } 00:24:35.104 ] 00:24:35.104 }, 00:24:35.104 { 00:24:35.104 "subsystem": "sock", 00:24:35.104 "config": [ 00:24:35.104 { 00:24:35.104 "method": "sock_set_default_impl", 00:24:35.104 "params": { 00:24:35.104 "impl_name": "posix" 00:24:35.104 } 00:24:35.104 }, 00:24:35.104 { 00:24:35.104 "method": "sock_impl_set_options", 00:24:35.104 "params": { 00:24:35.104 "impl_name": "ssl", 00:24:35.104 "recv_buf_size": 4096, 00:24:35.104 "send_buf_size": 4096, 00:24:35.104 "enable_recv_pipe": true, 00:24:35.104 "enable_quickack": false, 00:24:35.104 "enable_placement_id": 0, 00:24:35.104 "enable_zerocopy_send_server": true, 00:24:35.104 "enable_zerocopy_send_client": false, 00:24:35.104 "zerocopy_threshold": 0, 00:24:35.104 "tls_version": 0, 00:24:35.104 "enable_ktls": false 00:24:35.104 } 00:24:35.104 }, 00:24:35.104 { 00:24:35.104 "method": "sock_impl_set_options", 00:24:35.104 "params": { 00:24:35.104 "impl_name": "posix", 00:24:35.104 "recv_buf_size": 2097152, 00:24:35.104 "send_buf_size": 2097152, 00:24:35.104 "enable_recv_pipe": true, 00:24:35.104 "enable_quickack": false, 00:24:35.104 "enable_placement_id": 0, 00:24:35.104 "enable_zerocopy_send_server": true, 00:24:35.104 "enable_zerocopy_send_client": false, 00:24:35.104 "zerocopy_threshold": 0, 00:24:35.104 "tls_version": 0, 00:24:35.104 "enable_ktls": false 00:24:35.104 } 00:24:35.104 } 00:24:35.104 ] 00:24:35.104 }, 00:24:35.104 { 00:24:35.104 "subsystem": "vmd", 00:24:35.104 "config": [] 00:24:35.104 }, 00:24:35.104 { 00:24:35.104 "subsystem": "accel", 00:24:35.104 "config": [ 00:24:35.104 { 00:24:35.104 "method": "accel_set_options", 00:24:35.104 "params": { 00:24:35.104 "small_cache_size": 128, 00:24:35.104 "large_cache_size": 16, 00:24:35.104 "task_count": 2048, 00:24:35.104 "sequence_count": 2048, 00:24:35.104 "buf_count": 2048 00:24:35.104 } 00:24:35.104 } 00:24:35.104 ] 00:24:35.104 }, 00:24:35.104 { 00:24:35.104 "subsystem": "bdev", 00:24:35.104 "config": [ 00:24:35.104 { 00:24:35.104 "method": "bdev_set_options", 00:24:35.104 "params": { 00:24:35.104 "bdev_io_pool_size": 65535, 00:24:35.104 "bdev_io_cache_size": 256, 00:24:35.104 "bdev_auto_examine": true, 00:24:35.104 "iobuf_small_cache_size": 128, 00:24:35.104 "iobuf_large_cache_size": 16 00:24:35.104 } 00:24:35.104 }, 00:24:35.104 { 00:24:35.104 "method": "bdev_raid_set_options", 00:24:35.104 "params": { 00:24:35.104 "process_window_size_kb": 1024, 00:24:35.104 "process_max_bandwidth_mb_sec": 0 00:24:35.104 } 00:24:35.104 }, 00:24:35.104 { 00:24:35.104 "method": "bdev_iscsi_set_options", 00:24:35.104 "params": { 00:24:35.104 "timeout_sec": 30 00:24:35.104 } 00:24:35.104 }, 
00:24:35.104 { 00:24:35.104 "method": "bdev_nvme_set_options", 00:24:35.104 "params": { 00:24:35.104 "action_on_timeout": "none", 00:24:35.104 "timeout_us": 0, 00:24:35.104 "timeout_admin_us": 0, 00:24:35.104 "keep_alive_timeout_ms": 10000, 00:24:35.104 "arbitration_burst": 0, 00:24:35.104 "low_priority_weight": 0, 00:24:35.104 "medium_priority_weight": 0, 00:24:35.104 "high_priority_weight": 0, 00:24:35.104 "nvme_adminq_poll_period_us": 10000, 00:24:35.104 "nvme_ioq_poll_period_us": 0, 00:24:35.104 "io_queue_requests": 0, 00:24:35.104 "delay_cmd_submit": true, 00:24:35.104 "transport_retry_count": 4, 00:24:35.104 "bdev_retry_count": 3, 00:24:35.104 "transport_ack_timeout": 0, 00:24:35.104 "ctrlr_loss_timeout_sec": 0, 00:24:35.104 "reconnect_delay_sec": 0, 00:24:35.104 "fast_io_fail_timeout_sec": 0, 00:24:35.104 "disable_auto_failback": false, 00:24:35.104 "generate_uuids": false, 00:24:35.104 "transport_tos": 0, 00:24:35.104 "nvme_error_stat": false, 00:24:35.104 "rdma_srq_size": 0, 00:24:35.104 "io_path_stat": false, 00:24:35.104 "allow_accel_sequence": false, 00:24:35.104 "rdma_max_cq_size": 0, 00:24:35.104 "rdma_cm_event_timeout_ms": 0, 00:24:35.104 "dhchap_digests": [ 00:24:35.105 "sha256", 00:24:35.105 "sha384", 00:24:35.105 "sha512" 00:24:35.105 ], 00:24:35.105 "dhchap_dhgroups": [ 00:24:35.105 "null", 00:24:35.105 "ffdhe2048", 00:24:35.105 "ffdhe3072", 00:24:35.105 "ffdhe4096", 00:24:35.105 "ffdhe6144", 00:24:35.105 "ffdhe8192" 00:24:35.105 ] 00:24:35.105 } 00:24:35.105 }, 00:24:35.105 { 00:24:35.105 "method": "bdev_nvme_set_hotplug", 00:24:35.105 "params": { 00:24:35.105 "period_us": 100000, 00:24:35.105 "enable": false 00:24:35.105 } 00:24:35.105 }, 00:24:35.105 { 00:24:35.105 "method": "bdev_malloc_create", 00:24:35.105 "params": { 00:24:35.105 "name": "malloc0", 00:24:35.105 "num_blocks": 8192, 00:24:35.105 "block_size": 4096, 00:24:35.105 "physical_block_size": 4096, 00:24:35.105 "uuid": "a9f7beef-a59d-4e49-88d4-f6e1f795b5a2", 00:24:35.105 "optimal_io_boundary": 0, 00:24:35.105 "md_size": 0, 00:24:35.105 "dif_type": 0, 00:24:35.105 "dif_is_head_of_md": false, 00:24:35.105 "dif_pi_format": 0 00:24:35.105 } 00:24:35.105 }, 00:24:35.105 { 00:24:35.105 "method": "bdev_wait_for_examine" 00:24:35.105 } 00:24:35.105 ] 00:24:35.105 }, 00:24:35.105 { 00:24:35.105 "subsystem": "nbd", 00:24:35.105 "config": [] 00:24:35.105 }, 00:24:35.105 { 00:24:35.105 "subsystem": "scheduler", 00:24:35.105 "config": [ 00:24:35.105 { 00:24:35.105 "method": "framework_set_scheduler", 00:24:35.105 "params": { 00:24:35.105 "name": "static" 00:24:35.105 } 00:24:35.105 } 00:24:35.105 ] 00:24:35.105 }, 00:24:35.105 { 00:24:35.105 "subsystem": "nvmf", 00:24:35.105 "config": [ 00:24:35.105 { 00:24:35.105 "method": "nvmf_set_config", 00:24:35.105 "params": { 00:24:35.105 "discovery_filter": "match_any", 00:24:35.105 "admin_cmd_passthru": { 00:24:35.105 "identify_ctrlr": false 00:24:35.105 }, 00:24:35.105 "dhchap_digests": [ 00:24:35.105 "sha256", 00:24:35.105 "sha384", 00:24:35.105 "sha512" 00:24:35.105 ], 00:24:35.105 "dhchap_dhgroups": [ 00:24:35.105 "null", 00:24:35.105 "ffdhe2048", 00:24:35.105 "ffdhe3072", 00:24:35.105 "ffdhe4096", 00:24:35.105 "ffdhe6144", 00:24:35.105 "ffdhe8192" 00:24:35.105 ] 00:24:35.105 } 00:24:35.105 }, 00:24:35.105 { 00:24:35.105 "method": "nvmf_set_max_subsystems", 00:24:35.105 "params": { 00:24:35.105 "max_subsystems": 1024 00:24:35.105 } 00:24:35.105 }, 00:24:35.105 { 00:24:35.105 "method": "nvmf_set_crdt", 00:24:35.105 "params": { 00:24:35.105 "crdt1": 0, 00:24:35.105 "crdt2": 
0, 00:24:35.105 "crdt3": 0 00:24:35.105 } 00:24:35.105 }, 00:24:35.105 { 00:24:35.105 "method": "nvmf_create_transport", 00:24:35.105 "params": { 00:24:35.105 "trtype": "TCP", 00:24:35.105 "max_queue_depth": 128, 00:24:35.105 "max_io_qpairs_per_ctrlr": 127, 00:24:35.105 "in_capsule_data_size": 4096, 00:24:35.105 "max_io_size": 131072, 00:24:35.105 "io_unit_size": 131072, 00:24:35.105 "max_aq_depth": 128, 00:24:35.105 "num_shared_buffers": 511, 00:24:35.105 "buf_cache_size": 4294967295, 00:24:35.105 "dif_insert_or_strip": false, 00:24:35.105 "zcopy": false, 00:24:35.105 "c2h_success": false, 00:24:35.105 "sock_priority": 0, 00:24:35.105 "abort_timeout_sec": 1, 00:24:35.105 "ack_timeout": 0, 00:24:35.105 "data_wr_pool_size": 0 00:24:35.105 } 00:24:35.105 }, 00:24:35.105 { 00:24:35.105 "method": "nvmf_create_subsystem", 00:24:35.105 "params": { 00:24:35.105 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:35.105 "allow_any_host": false, 00:24:35.105 "serial_number": "SPDK00000000000001", 00:24:35.105 "model_number": "SPDK bdev Controller", 00:24:35.105 "max_namespaces": 10, 00:24:35.105 "min_cntlid": 1, 00:24:35.105 "max_cntlid": 65519, 00:24:35.105 "ana_reporting": false 00:24:35.105 } 00:24:35.105 }, 00:24:35.105 { 00:24:35.105 "method": "nvmf_subsystem_add_host", 00:24:35.105 "params": { 00:24:35.105 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:35.105 "host": "nqn.2016-06.io.spdk:host1", 00:24:35.105 "psk": "key0" 00:24:35.105 } 00:24:35.105 }, 00:24:35.105 { 00:24:35.105 "method": "nvmf_subsystem_add_ns", 00:24:35.105 "params": { 00:24:35.105 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:35.105 "namespace": { 00:24:35.105 "nsid": 1, 00:24:35.105 "bdev_name": "malloc0", 00:24:35.105 "nguid": "A9F7BEEFA59D4E4988D4F6E1F795B5A2", 00:24:35.105 "uuid": "a9f7beef-a59d-4e49-88d4-f6e1f795b5a2", 00:24:35.105 "no_auto_visible": false 00:24:35.105 } 00:24:35.105 } 00:24:35.105 }, 00:24:35.105 { 00:24:35.105 "method": "nvmf_subsystem_add_listener", 00:24:35.105 "params": { 00:24:35.105 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:35.105 "listen_address": { 00:24:35.105 "trtype": "TCP", 00:24:35.105 "adrfam": "IPv4", 00:24:35.105 "traddr": "10.0.0.2", 00:24:35.105 "trsvcid": "4420" 00:24:35.105 }, 00:24:35.105 "secure_channel": true 00:24:35.105 } 00:24:35.105 } 00:24:35.105 ] 00:24:35.105 } 00:24:35.105 ] 00:24:35.105 }' 00:24:35.105 12:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:35.105 12:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=403042 00:24:35.105 12:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:24:35.105 12:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 403042 00:24:35.105 12:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 403042 ']' 00:24:35.105 12:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:35.105 12:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:35.105 12:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:35.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
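The JSON blobs above come from save_config: target/tls.sh@198 captures the target configuration into tgtconf and target/tls.sh@199 captures the bdevperf configuration into bdevperfconf, and both are replayed into freshly started processes. The -c /dev/fd/62 and -c /dev/fd/63 arguments are consistent with bash process substitution, roughly (a sketch, with variable and binary names taken from the log):

# capture the running target's config, then replay it into a new nvmf_tgt
tgtconf=$(rpc.py save_config)
ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf")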
00:24:35.105 12:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:35.105 12:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:35.365 [2024-12-16 12:46:01.203750] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:24:35.365 [2024-12-16 12:46:01.203793] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:35.365 [2024-12-16 12:46:01.273945] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:35.365 [2024-12-16 12:46:01.309219] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:35.365 [2024-12-16 12:46:01.309258] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:35.365 [2024-12-16 12:46:01.309266] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:35.365 [2024-12-16 12:46:01.309271] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:35.365 [2024-12-16 12:46:01.309276] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:35.365 [2024-12-16 12:46:01.309332] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:35.624 [2024-12-16 12:46:01.527860] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:35.624 [2024-12-16 12:46:01.559694] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:35.624 [2024-12-16 12:46:01.559906] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:36.191 12:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:36.191 12:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:36.191 12:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:36.191 12:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:36.191 12:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:36.191 12:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:36.191 12:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=403087 00:24:36.191 12:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 403087 /var/tmp/bdevperf.sock 00:24:36.191 12:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 403087 ']' 00:24:36.191 12:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:36.191 12:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:24:36.191 12:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:36.191 12:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:24:36.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:36.191 12:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:24:36.191 "subsystems": [ 00:24:36.191 { 00:24:36.191 "subsystem": "keyring", 00:24:36.191 "config": [ 00:24:36.191 { 00:24:36.191 "method": "keyring_file_add_key", 00:24:36.191 "params": { 00:24:36.191 "name": "key0", 00:24:36.191 "path": "/tmp/tmp.K7nSVjnHlu" 00:24:36.191 } 00:24:36.191 } 00:24:36.191 ] 00:24:36.191 }, 00:24:36.191 { 00:24:36.191 "subsystem": "iobuf", 00:24:36.191 "config": [ 00:24:36.191 { 00:24:36.191 "method": "iobuf_set_options", 00:24:36.191 "params": { 00:24:36.191 "small_pool_count": 8192, 00:24:36.191 "large_pool_count": 1024, 00:24:36.191 "small_bufsize": 8192, 00:24:36.191 "large_bufsize": 135168 00:24:36.191 } 00:24:36.191 } 00:24:36.191 ] 00:24:36.191 }, 00:24:36.191 { 00:24:36.191 "subsystem": "sock", 00:24:36.191 "config": [ 00:24:36.191 { 00:24:36.191 "method": "sock_set_default_impl", 00:24:36.191 "params": { 00:24:36.191 "impl_name": "posix" 00:24:36.191 } 00:24:36.191 }, 00:24:36.191 { 00:24:36.191 "method": "sock_impl_set_options", 00:24:36.191 "params": { 00:24:36.191 "impl_name": "ssl", 00:24:36.191 "recv_buf_size": 4096, 00:24:36.191 "send_buf_size": 4096, 00:24:36.191 "enable_recv_pipe": true, 00:24:36.191 "enable_quickack": false, 00:24:36.191 "enable_placement_id": 0, 00:24:36.191 "enable_zerocopy_send_server": true, 00:24:36.191 "enable_zerocopy_send_client": false, 00:24:36.191 "zerocopy_threshold": 0, 00:24:36.191 "tls_version": 0, 00:24:36.191 "enable_ktls": false 00:24:36.191 } 00:24:36.191 }, 00:24:36.191 { 00:24:36.191 "method": "sock_impl_set_options", 00:24:36.191 "params": { 00:24:36.191 "impl_name": "posix", 00:24:36.191 "recv_buf_size": 2097152, 00:24:36.191 "send_buf_size": 2097152, 00:24:36.191 "enable_recv_pipe": true, 00:24:36.191 "enable_quickack": false, 00:24:36.191 "enable_placement_id": 0, 00:24:36.191 "enable_zerocopy_send_server": true, 00:24:36.191 "enable_zerocopy_send_client": false, 00:24:36.191 "zerocopy_threshold": 0, 00:24:36.191 "tls_version": 0, 00:24:36.191 "enable_ktls": false 00:24:36.191 } 00:24:36.191 } 00:24:36.191 ] 00:24:36.191 }, 00:24:36.191 { 00:24:36.191 "subsystem": "vmd", 00:24:36.191 "config": [] 00:24:36.191 }, 00:24:36.191 { 00:24:36.191 "subsystem": "accel", 00:24:36.191 "config": [ 00:24:36.191 { 00:24:36.191 "method": "accel_set_options", 00:24:36.191 "params": { 00:24:36.191 "small_cache_size": 128, 00:24:36.191 "large_cache_size": 16, 00:24:36.191 "task_count": 2048, 00:24:36.191 "sequence_count": 2048, 00:24:36.191 "buf_count": 2048 00:24:36.191 } 00:24:36.191 } 00:24:36.191 ] 00:24:36.191 }, 00:24:36.191 { 00:24:36.191 "subsystem": "bdev", 00:24:36.191 "config": [ 00:24:36.191 { 00:24:36.191 "method": "bdev_set_options", 00:24:36.191 "params": { 00:24:36.191 "bdev_io_pool_size": 65535, 00:24:36.191 "bdev_io_cache_size": 256, 00:24:36.191 "bdev_auto_examine": true, 00:24:36.191 "iobuf_small_cache_size": 128, 00:24:36.191 "iobuf_large_cache_size": 16 00:24:36.191 } 00:24:36.191 }, 00:24:36.191 { 00:24:36.191 "method": "bdev_raid_set_options", 00:24:36.191 "params": { 00:24:36.191 "process_window_size_kb": 1024, 00:24:36.191 "process_max_bandwidth_mb_sec": 0 00:24:36.191 } 00:24:36.191 }, 00:24:36.191 { 00:24:36.191 "method": "bdev_iscsi_set_options", 00:24:36.191 "params": { 00:24:36.191 "timeout_sec": 30 00:24:36.191 } 00:24:36.191 }, 00:24:36.191 { 
00:24:36.191 "method": "bdev_nvme_set_options", 00:24:36.191 "params": { 00:24:36.191 "action_on_timeout": "none", 00:24:36.191 "timeout_us": 0, 00:24:36.191 "timeout_admin_us": 0, 00:24:36.191 "keep_alive_timeout_ms": 10000, 00:24:36.192 "arbitration_burst": 0, 00:24:36.192 "low_priority_weight": 0, 00:24:36.192 "medium_priority_weight": 0, 00:24:36.192 "high_priority_weight": 0, 00:24:36.192 "nvme_adminq_poll_period_us": 10000, 00:24:36.192 "nvme_ioq_poll_period_us": 0, 00:24:36.192 "io_queue_requests": 512, 00:24:36.192 "delay_cmd_submit": true, 00:24:36.192 "transport_retry_count": 4, 00:24:36.192 "bdev_retry_count": 3, 00:24:36.192 "transport_ack_timeout": 0, 00:24:36.192 "ctrlr_loss_timeout_sec": 0, 00:24:36.192 "reconnect_delay_sec": 0, 00:24:36.192 "fast_io_fail_timeout_sec": 0, 00:24:36.192 "disable_auto_failback": false, 00:24:36.192 "generate_uuids": false, 00:24:36.192 "transport_tos": 0, 00:24:36.192 "nvme_error_stat": false, 00:24:36.192 "rdma_srq_size": 0, 00:24:36.192 "io_path_stat": false, 00:24:36.192 "allow_accel_sequence": false, 00:24:36.192 "rdma_max_cq_size": 0, 00:24:36.192 "rdma_cm_event_timeout_ms": 0, 00:24:36.192 "dhchap_digests": [ 00:24:36.192 "sha256", 00:24:36.192 "sha384", 00:24:36.192 "sha512" 00:24:36.192 ], 00:24:36.192 "dhchap_dhgroups": [ 00:24:36.192 "null", 00:24:36.192 "ffdhe2048", 00:24:36.192 "ffdhe3072", 00:24:36.192 "ffdhe4096", 00:24:36.192 "ffdhe6144", 00:24:36.192 "ffdhe8192" 00:24:36.192 ] 00:24:36.192 } 00:24:36.192 }, 00:24:36.192 { 00:24:36.192 "method": "bdev_nvme_attach_controller", 00:24:36.192 "params": { 00:24:36.192 "name": "TLSTEST", 00:24:36.192 "trtype": "TCP", 00:24:36.192 "adrfam": "IPv4", 00:24:36.192 "traddr": "10.0.0.2", 00:24:36.192 "trsvcid": "4420", 00:24:36.192 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:36.192 "prchk_reftag": false, 00:24:36.192 "prchk_guard": false, 00:24:36.192 "ctrlr_loss_timeout_sec": 0, 00:24:36.192 "reconnect_delay_sec": 0, 00:24:36.192 "fast_io_fail_timeout_sec": 0, 00:24:36.192 "psk": "key0", 00:24:36.192 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:36.192 "hdgst": false, 00:24:36.192 "ddgst": false 00:24:36.192 } 00:24:36.192 }, 00:24:36.192 { 00:24:36.192 "method": "bdev_nvme_set_hotplug", 00:24:36.192 "params": { 00:24:36.192 "period_us": 100000, 00:24:36.192 "enable": false 00:24:36.192 } 00:24:36.192 }, 00:24:36.192 { 00:24:36.192 "method": "bdev_wait_for_examine" 00:24:36.192 } 00:24:36.192 ] 00:24:36.192 }, 00:24:36.192 { 00:24:36.192 "subsystem": "nbd", 00:24:36.192 "config": [] 00:24:36.192 } 00:24:36.192 ] 00:24:36.192 }' 00:24:36.192 12:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:36.192 12:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:36.192 [2024-12-16 12:46:02.115348] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
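The bdevperf instance starting here (pid 403087) is the host side of the replayed-config run: -c /dev/fd/63 hands it the saved JSON, whose keyring_file_add_key and bdev_nvme_attach_controller entries recreate key0 and the TLS-backed TLSTESTn1 bdev without further per-RPC setup, so target/tls.sh@213 only has to trigger the workload. A sketch, with the long script paths shortened:

# bdevperf consumes the saved config; -z makes it idle until tests are triggered over RPC
bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 &
# kick off the 10-second verify run over the TLS connection
bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

The per-second IOPS samples and the final latency summary for TLSTESTn1 that follow are the output of that perform_tests call.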
00:24:36.192 [2024-12-16 12:46:02.115389] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid403087 ] 00:24:36.192 [2024-12-16 12:46:02.183069] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.192 [2024-12-16 12:46:02.222369] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:36.451 [2024-12-16 12:46:02.367612] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:37.018 12:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:37.018 12:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:37.018 12:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:37.018 Running I/O for 10 seconds... 00:24:39.330 4626.00 IOPS, 18.07 MiB/s [2024-12-16T11:46:06.332Z] 4640.00 IOPS, 18.12 MiB/s [2024-12-16T11:46:07.269Z] 4861.00 IOPS, 18.99 MiB/s [2024-12-16T11:46:08.206Z] 5025.00 IOPS, 19.63 MiB/s [2024-12-16T11:46:09.143Z] 5046.40 IOPS, 19.71 MiB/s [2024-12-16T11:46:10.080Z] 5092.33 IOPS, 19.89 MiB/s [2024-12-16T11:46:11.458Z] 5119.29 IOPS, 20.00 MiB/s [2024-12-16T11:46:12.394Z] 5039.00 IOPS, 19.68 MiB/s [2024-12-16T11:46:13.330Z] 5059.00 IOPS, 19.76 MiB/s [2024-12-16T11:46:13.330Z] 5108.40 IOPS, 19.95 MiB/s 00:24:47.263 Latency(us) 00:24:47.263 [2024-12-16T11:46:13.330Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:47.263 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:47.263 Verification LBA range: start 0x0 length 0x2000 00:24:47.263 TLSTESTn1 : 10.03 5106.90 19.95 0.00 0.00 25018.23 5336.50 44689.31 00:24:47.263 [2024-12-16T11:46:13.330Z] =================================================================================================================== 00:24:47.263 [2024-12-16T11:46:13.330Z] Total : 5106.90 19.95 0.00 0.00 25018.23 5336.50 44689.31 00:24:47.263 { 00:24:47.263 "results": [ 00:24:47.263 { 00:24:47.263 "job": "TLSTESTn1", 00:24:47.263 "core_mask": "0x4", 00:24:47.263 "workload": "verify", 00:24:47.263 "status": "finished", 00:24:47.263 "verify_range": { 00:24:47.263 "start": 0, 00:24:47.263 "length": 8192 00:24:47.263 }, 00:24:47.263 "queue_depth": 128, 00:24:47.263 "io_size": 4096, 00:24:47.263 "runtime": 10.027608, 00:24:47.263 "iops": 5106.900868083395, 00:24:47.263 "mibps": 19.94883151595076, 00:24:47.263 "io_failed": 0, 00:24:47.263 "io_timeout": 0, 00:24:47.263 "avg_latency_us": 25018.230731572145, 00:24:47.263 "min_latency_us": 5336.5028571428575, 00:24:47.263 "max_latency_us": 44689.310476190476 00:24:47.263 } 00:24:47.263 ], 00:24:47.263 "core_count": 1 00:24:47.263 } 00:24:47.263 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:47.263 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 403087 00:24:47.263 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 403087 ']' 00:24:47.263 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 403087 00:24:47.263 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # uname 00:24:47.263 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:47.263 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 403087 00:24:47.263 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:47.263 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:47.263 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 403087' 00:24:47.263 killing process with pid 403087 00:24:47.263 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 403087 00:24:47.263 Received shutdown signal, test time was about 10.000000 seconds 00:24:47.263 00:24:47.263 Latency(us) 00:24:47.263 [2024-12-16T11:46:13.330Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:47.264 [2024-12-16T11:46:13.331Z] =================================================================================================================== 00:24:47.264 [2024-12-16T11:46:13.331Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:47.264 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 403087 00:24:47.523 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 403042 00:24:47.523 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 403042 ']' 00:24:47.523 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 403042 00:24:47.523 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:47.523 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:47.523 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 403042 00:24:47.523 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:47.523 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:47.523 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 403042' 00:24:47.523 killing process with pid 403042 00:24:47.523 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 403042 00:24:47.523 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 403042 00:24:47.523 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:24:47.523 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:47.523 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:47.523 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:47.523 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=404876 00:24:47.523 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 404876 00:24:47.523 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:47.523 12:46:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 404876 ']' 00:24:47.523 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:47.523 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:47.523 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:47.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:47.523 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:47.523 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:47.782 [2024-12-16 12:46:13.629859] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:24:47.782 [2024-12-16 12:46:13.629905] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:47.782 [2024-12-16 12:46:13.702840] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.782 [2024-12-16 12:46:13.741324] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:47.782 [2024-12-16 12:46:13.741363] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:47.782 [2024-12-16 12:46:13.741380] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:47.782 [2024-12-16 12:46:13.741386] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:47.782 [2024-12-16 12:46:13.741407] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
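[Note] With the target process up, the setup_nvmf_tgt step that follows (tls.sh@221) assembles the TLS side of the target over RPC, as the commands below show: create the TCP transport, create a subsystem, add a listener with -k (TLS), back the subsystem with a malloc bdev, register the PSK, and allow the host against that key. Condensed from the RPCs in this run (jenkins workspace prefix dropped):

  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.K7nSVjnHlu
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0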
00:24:47.782 [2024-12-16 12:46:13.741425] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:47.782 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:47.782 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:47.782 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:47.782 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:47.782 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:48.041 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:48.041 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.K7nSVjnHlu 00:24:48.041 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.K7nSVjnHlu 00:24:48.041 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:48.041 [2024-12-16 12:46:14.034660] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:48.041 12:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:48.300 12:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:48.559 [2024-12-16 12:46:14.407619] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:48.559 [2024-12-16 12:46:14.407854] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:48.559 12:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:48.559 malloc0 00:24:48.559 12:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:48.818 12:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.K7nSVjnHlu 00:24:49.077 12:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:49.335 12:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=405215 00:24:49.335 12:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:49.335 12:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:49.335 12:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 405215 /var/tmp/bdevperf.sock 00:24:49.335 12:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 405215 ']' 00:24:49.335 12:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:49.335 12:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:49.335 12:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:49.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:49.335 12:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:49.335 12:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:49.335 [2024-12-16 12:46:15.203439] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:24:49.336 [2024-12-16 12:46:15.203489] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid405215 ] 00:24:49.336 [2024-12-16 12:46:15.270478] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:49.336 [2024-12-16 12:46:15.309630] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:49.336 12:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:49.336 12:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:49.336 12:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.K7nSVjnHlu 00:24:49.594 12:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:49.853 [2024-12-16 12:46:15.750591] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:49.853 nvme0n1 00:24:49.853 12:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:50.112 Running I/O for 1 seconds... 
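[Note] A quick consistency check on the summary that follows: throughput in MiB/s is simply IOPS times the 4 KiB I/O size, e.g. 4715.21 IOPS x 4096 B / 2^20 = 18.42 MiB/s, matching the reported "mibps" field; the same relation holds for the 10-second run above (5106.90 x 4096 / 2^20 = 19.95 MiB/s).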
00:24:51.049 4661.00 IOPS, 18.21 MiB/s 00:24:51.050 Latency(us) 00:24:51.050 [2024-12-16T11:46:17.117Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:51.050 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:51.050 Verification LBA range: start 0x0 length 0x2000 00:24:51.050 nvme0n1 : 1.02 4715.21 18.42 0.00 0.00 26953.76 6085.49 35951.18 00:24:51.050 [2024-12-16T11:46:17.117Z] =================================================================================================================== 00:24:51.050 [2024-12-16T11:46:17.117Z] Total : 4715.21 18.42 0.00 0.00 26953.76 6085.49 35951.18 00:24:51.050 { 00:24:51.050 "results": [ 00:24:51.050 { 00:24:51.050 "job": "nvme0n1", 00:24:51.050 "core_mask": "0x2", 00:24:51.050 "workload": "verify", 00:24:51.050 "status": "finished", 00:24:51.050 "verify_range": { 00:24:51.050 "start": 0, 00:24:51.050 "length": 8192 00:24:51.050 }, 00:24:51.050 "queue_depth": 128, 00:24:51.050 "io_size": 4096, 00:24:51.050 "runtime": 1.015649, 00:24:51.050 "iops": 4715.211652844634, 00:24:51.050 "mibps": 18.41879551892435, 00:24:51.050 "io_failed": 0, 00:24:51.050 "io_timeout": 0, 00:24:51.050 "avg_latency_us": 26953.764472153445, 00:24:51.050 "min_latency_us": 6085.4857142857145, 00:24:51.050 "max_latency_us": 35951.177142857145 00:24:51.050 } 00:24:51.050 ], 00:24:51.050 "core_count": 1 00:24:51.050 } 00:24:51.050 12:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 405215 00:24:51.050 12:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 405215 ']' 00:24:51.050 12:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 405215 00:24:51.050 12:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:51.050 12:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:51.050 12:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 405215 00:24:51.050 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:51.050 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:51.050 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 405215' 00:24:51.050 killing process with pid 405215 00:24:51.050 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 405215 00:24:51.050 Received shutdown signal, test time was about 1.000000 seconds 00:24:51.050 00:24:51.050 Latency(us) 00:24:51.050 [2024-12-16T11:46:17.117Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:51.050 [2024-12-16T11:46:17.117Z] =================================================================================================================== 00:24:51.050 [2024-12-16T11:46:17.117Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:51.050 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 405215 00:24:51.309 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 404876 00:24:51.309 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 404876 ']' 00:24:51.309 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 404876 00:24:51.309 12:46:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:51.309 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:51.309 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 404876 00:24:51.309 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:51.309 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:51.309 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 404876' 00:24:51.309 killing process with pid 404876 00:24:51.309 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 404876 00:24:51.309 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 404876 00:24:51.568 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:24:51.568 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:51.568 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:51.568 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:51.568 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=405575 00:24:51.568 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:51.568 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 405575 00:24:51.568 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 405575 ']' 00:24:51.568 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:51.568 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:51.568 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:51.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:51.568 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:51.568 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:51.568 [2024-12-16 12:46:17.490297] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:24:51.568 [2024-12-16 12:46:17.490339] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:51.568 [2024-12-16 12:46:17.543617] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:51.568 [2024-12-16 12:46:17.582364] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:51.568 [2024-12-16 12:46:17.582399] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:51.568 [2024-12-16 12:46:17.582405] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:51.568 [2024-12-16 12:46:17.582411] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:51.568 [2024-12-16 12:46:17.582416] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:51.568 [2024-12-16 12:46:17.582452] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:51.827 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:51.827 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:51.827 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:51.827 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:51.827 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:51.827 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:51.827 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:24:51.827 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.827 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:51.827 [2024-12-16 12:46:17.710933] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:51.827 malloc0 00:24:51.827 [2024-12-16 12:46:17.758016] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:51.827 [2024-12-16 12:46:17.758299] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:51.827 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.827 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=405595 00:24:51.827 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 405595 /var/tmp/bdevperf.sock 00:24:51.827 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:51.827 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 405595 ']' 00:24:51.827 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:51.827 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:51.827 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:51.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:51.827 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:51.827 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:51.827 [2024-12-16 12:46:17.831994] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
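[Note] This bdevperf instance is launched with the same flags as the previous one (tls.sh@254); a sketch of the invocation with the flags spelled out, as observed in this run:

  # -m 2: core mask (the reactor lands on core 1)
  # -z:   start idle; the workload only runs once the perform_tests RPC arrives
  # -r:   RPC listen address; -q 128: queue depth; -o 4k: I/O size
  # -w verify: verification workload (write, read back, compare); -t 1: run time in seconds
  build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1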
00:24:51.828 [2024-12-16 12:46:17.832036] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid405595 ] 00:24:52.087 [2024-12-16 12:46:17.900315] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.087 [2024-12-16 12:46:17.939532] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:52.087 12:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:52.087 12:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:52.087 12:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.K7nSVjnHlu 00:24:52.346 12:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:52.346 [2024-12-16 12:46:18.377328] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:52.605 nvme0n1 00:24:52.605 12:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:52.605 Running I/O for 1 seconds... 00:24:53.542 4927.00 IOPS, 19.25 MiB/s 00:24:53.542 Latency(us) 00:24:53.542 [2024-12-16T11:46:19.609Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:53.542 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:53.542 Verification LBA range: start 0x0 length 0x2000 00:24:53.542 nvme0n1 : 1.03 4893.53 19.12 0.00 0.00 25801.36 6928.09 56173.71 00:24:53.542 [2024-12-16T11:46:19.609Z] =================================================================================================================== 00:24:53.542 [2024-12-16T11:46:19.609Z] Total : 4893.53 19.12 0.00 0.00 25801.36 6928.09 56173.71 00:24:53.542 { 00:24:53.542 "results": [ 00:24:53.542 { 00:24:53.542 "job": "nvme0n1", 00:24:53.542 "core_mask": "0x2", 00:24:53.542 "workload": "verify", 00:24:53.542 "status": "finished", 00:24:53.542 "verify_range": { 00:24:53.542 "start": 0, 00:24:53.542 "length": 8192 00:24:53.542 }, 00:24:53.542 "queue_depth": 128, 00:24:53.542 "io_size": 4096, 00:24:53.542 "runtime": 1.032996, 00:24:53.542 "iops": 4893.532985606914, 00:24:53.542 "mibps": 19.115363225027007, 00:24:53.542 "io_failed": 0, 00:24:53.542 "io_timeout": 0, 00:24:53.542 "avg_latency_us": 25801.36361207668, 00:24:53.542 "min_latency_us": 6928.091428571429, 00:24:53.542 "max_latency_us": 56173.71428571428 00:24:53.542 } 00:24:53.542 ], 00:24:53.542 "core_count": 1 00:24:53.542 } 00:24:53.801 12:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:24:53.801 12:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.801 12:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:53.801 12:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.801 12:46:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:24:53.801 "subsystems": [ 00:24:53.801 { 00:24:53.801 "subsystem": "keyring", 00:24:53.801 "config": [ 00:24:53.801 { 00:24:53.801 "method": "keyring_file_add_key", 00:24:53.801 "params": { 00:24:53.801 "name": "key0", 00:24:53.801 "path": "/tmp/tmp.K7nSVjnHlu" 00:24:53.801 } 00:24:53.801 } 00:24:53.801 ] 00:24:53.801 }, 00:24:53.801 { 00:24:53.801 "subsystem": "iobuf", 00:24:53.801 "config": [ 00:24:53.801 { 00:24:53.801 "method": "iobuf_set_options", 00:24:53.801 "params": { 00:24:53.801 "small_pool_count": 8192, 00:24:53.801 "large_pool_count": 1024, 00:24:53.801 "small_bufsize": 8192, 00:24:53.801 "large_bufsize": 135168 00:24:53.801 } 00:24:53.801 } 00:24:53.801 ] 00:24:53.801 }, 00:24:53.801 { 00:24:53.801 "subsystem": "sock", 00:24:53.801 "config": [ 00:24:53.801 { 00:24:53.801 "method": "sock_set_default_impl", 00:24:53.801 "params": { 00:24:53.801 "impl_name": "posix" 00:24:53.801 } 00:24:53.801 }, 00:24:53.801 { 00:24:53.801 "method": "sock_impl_set_options", 00:24:53.801 "params": { 00:24:53.801 "impl_name": "ssl", 00:24:53.801 "recv_buf_size": 4096, 00:24:53.801 "send_buf_size": 4096, 00:24:53.801 "enable_recv_pipe": true, 00:24:53.801 "enable_quickack": false, 00:24:53.801 "enable_placement_id": 0, 00:24:53.801 "enable_zerocopy_send_server": true, 00:24:53.801 "enable_zerocopy_send_client": false, 00:24:53.801 "zerocopy_threshold": 0, 00:24:53.801 "tls_version": 0, 00:24:53.801 "enable_ktls": false 00:24:53.801 } 00:24:53.801 }, 00:24:53.801 { 00:24:53.801 "method": "sock_impl_set_options", 00:24:53.801 "params": { 00:24:53.801 "impl_name": "posix", 00:24:53.801 "recv_buf_size": 2097152, 00:24:53.801 "send_buf_size": 2097152, 00:24:53.801 "enable_recv_pipe": true, 00:24:53.801 "enable_quickack": false, 00:24:53.801 "enable_placement_id": 0, 00:24:53.801 "enable_zerocopy_send_server": true, 00:24:53.801 "enable_zerocopy_send_client": false, 00:24:53.801 "zerocopy_threshold": 0, 00:24:53.801 "tls_version": 0, 00:24:53.801 "enable_ktls": false 00:24:53.801 } 00:24:53.801 } 00:24:53.802 ] 00:24:53.802 }, 00:24:53.802 { 00:24:53.802 "subsystem": "vmd", 00:24:53.802 "config": [] 00:24:53.802 }, 00:24:53.802 { 00:24:53.802 "subsystem": "accel", 00:24:53.802 "config": [ 00:24:53.802 { 00:24:53.802 "method": "accel_set_options", 00:24:53.802 "params": { 00:24:53.802 "small_cache_size": 128, 00:24:53.802 "large_cache_size": 16, 00:24:53.802 "task_count": 2048, 00:24:53.802 "sequence_count": 2048, 00:24:53.802 "buf_count": 2048 00:24:53.802 } 00:24:53.802 } 00:24:53.802 ] 00:24:53.802 }, 00:24:53.802 { 00:24:53.802 "subsystem": "bdev", 00:24:53.802 "config": [ 00:24:53.802 { 00:24:53.802 "method": "bdev_set_options", 00:24:53.802 "params": { 00:24:53.802 "bdev_io_pool_size": 65535, 00:24:53.802 "bdev_io_cache_size": 256, 00:24:53.802 "bdev_auto_examine": true, 00:24:53.802 "iobuf_small_cache_size": 128, 00:24:53.802 "iobuf_large_cache_size": 16 00:24:53.802 } 00:24:53.802 }, 00:24:53.802 { 00:24:53.802 "method": "bdev_raid_set_options", 00:24:53.802 "params": { 00:24:53.802 "process_window_size_kb": 1024, 00:24:53.802 "process_max_bandwidth_mb_sec": 0 00:24:53.802 } 00:24:53.802 }, 00:24:53.802 { 00:24:53.802 "method": "bdev_iscsi_set_options", 00:24:53.802 "params": { 00:24:53.802 "timeout_sec": 30 00:24:53.802 } 00:24:53.802 }, 00:24:53.802 { 00:24:53.802 "method": "bdev_nvme_set_options", 00:24:53.802 "params": { 00:24:53.802 "action_on_timeout": "none", 00:24:53.802 "timeout_us": 0, 00:24:53.802 
"timeout_admin_us": 0, 00:24:53.802 "keep_alive_timeout_ms": 10000, 00:24:53.802 "arbitration_burst": 0, 00:24:53.802 "low_priority_weight": 0, 00:24:53.802 "medium_priority_weight": 0, 00:24:53.802 "high_priority_weight": 0, 00:24:53.802 "nvme_adminq_poll_period_us": 10000, 00:24:53.802 "nvme_ioq_poll_period_us": 0, 00:24:53.802 "io_queue_requests": 0, 00:24:53.802 "delay_cmd_submit": true, 00:24:53.802 "transport_retry_count": 4, 00:24:53.802 "bdev_retry_count": 3, 00:24:53.802 "transport_ack_timeout": 0, 00:24:53.802 "ctrlr_loss_timeout_sec": 0, 00:24:53.802 "reconnect_delay_sec": 0, 00:24:53.802 "fast_io_fail_timeout_sec": 0, 00:24:53.802 "disable_auto_failback": false, 00:24:53.802 "generate_uuids": false, 00:24:53.802 "transport_tos": 0, 00:24:53.802 "nvme_error_stat": false, 00:24:53.802 "rdma_srq_size": 0, 00:24:53.802 "io_path_stat": false, 00:24:53.802 "allow_accel_sequence": false, 00:24:53.802 "rdma_max_cq_size": 0, 00:24:53.802 "rdma_cm_event_timeout_ms": 0, 00:24:53.802 "dhchap_digests": [ 00:24:53.802 "sha256", 00:24:53.802 "sha384", 00:24:53.802 "sha512" 00:24:53.802 ], 00:24:53.802 "dhchap_dhgroups": [ 00:24:53.802 "null", 00:24:53.802 "ffdhe2048", 00:24:53.802 "ffdhe3072", 00:24:53.802 "ffdhe4096", 00:24:53.802 "ffdhe6144", 00:24:53.802 "ffdhe8192" 00:24:53.802 ] 00:24:53.802 } 00:24:53.802 }, 00:24:53.802 { 00:24:53.802 "method": "bdev_nvme_set_hotplug", 00:24:53.802 "params": { 00:24:53.802 "period_us": 100000, 00:24:53.802 "enable": false 00:24:53.802 } 00:24:53.802 }, 00:24:53.802 { 00:24:53.802 "method": "bdev_malloc_create", 00:24:53.802 "params": { 00:24:53.802 "name": "malloc0", 00:24:53.802 "num_blocks": 8192, 00:24:53.802 "block_size": 4096, 00:24:53.802 "physical_block_size": 4096, 00:24:53.802 "uuid": "0dc47749-da71-4b60-bf94-bc96e886b004", 00:24:53.802 "optimal_io_boundary": 0, 00:24:53.802 "md_size": 0, 00:24:53.802 "dif_type": 0, 00:24:53.802 "dif_is_head_of_md": false, 00:24:53.802 "dif_pi_format": 0 00:24:53.802 } 00:24:53.802 }, 00:24:53.802 { 00:24:53.802 "method": "bdev_wait_for_examine" 00:24:53.802 } 00:24:53.802 ] 00:24:53.802 }, 00:24:53.802 { 00:24:53.802 "subsystem": "nbd", 00:24:53.802 "config": [] 00:24:53.802 }, 00:24:53.802 { 00:24:53.802 "subsystem": "scheduler", 00:24:53.802 "config": [ 00:24:53.802 { 00:24:53.802 "method": "framework_set_scheduler", 00:24:53.802 "params": { 00:24:53.802 "name": "static" 00:24:53.802 } 00:24:53.802 } 00:24:53.802 ] 00:24:53.802 }, 00:24:53.802 { 00:24:53.802 "subsystem": "nvmf", 00:24:53.802 "config": [ 00:24:53.802 { 00:24:53.802 "method": "nvmf_set_config", 00:24:53.802 "params": { 00:24:53.802 "discovery_filter": "match_any", 00:24:53.802 "admin_cmd_passthru": { 00:24:53.802 "identify_ctrlr": false 00:24:53.802 }, 00:24:53.802 "dhchap_digests": [ 00:24:53.802 "sha256", 00:24:53.802 "sha384", 00:24:53.802 "sha512" 00:24:53.802 ], 00:24:53.802 "dhchap_dhgroups": [ 00:24:53.802 "null", 00:24:53.802 "ffdhe2048", 00:24:53.802 "ffdhe3072", 00:24:53.802 "ffdhe4096", 00:24:53.802 "ffdhe6144", 00:24:53.802 "ffdhe8192" 00:24:53.802 ] 00:24:53.802 } 00:24:53.802 }, 00:24:53.802 { 00:24:53.802 "method": "nvmf_set_max_subsystems", 00:24:53.802 "params": { 00:24:53.802 "max_subsystems": 1024 00:24:53.802 } 00:24:53.802 }, 00:24:53.802 { 00:24:53.802 "method": "nvmf_set_crdt", 00:24:53.802 "params": { 00:24:53.802 "crdt1": 0, 00:24:53.802 "crdt2": 0, 00:24:53.802 "crdt3": 0 00:24:53.802 } 00:24:53.802 }, 00:24:53.802 { 00:24:53.802 "method": "nvmf_create_transport", 00:24:53.802 "params": { 00:24:53.802 "trtype": 
"TCP", 00:24:53.802 "max_queue_depth": 128, 00:24:53.802 "max_io_qpairs_per_ctrlr": 127, 00:24:53.802 "in_capsule_data_size": 4096, 00:24:53.802 "max_io_size": 131072, 00:24:53.802 "io_unit_size": 131072, 00:24:53.802 "max_aq_depth": 128, 00:24:53.802 "num_shared_buffers": 511, 00:24:53.802 "buf_cache_size": 4294967295, 00:24:53.802 "dif_insert_or_strip": false, 00:24:53.802 "zcopy": false, 00:24:53.802 "c2h_success": false, 00:24:53.802 "sock_priority": 0, 00:24:53.802 "abort_timeout_sec": 1, 00:24:53.802 "ack_timeout": 0, 00:24:53.802 "data_wr_pool_size": 0 00:24:53.802 } 00:24:53.802 }, 00:24:53.802 { 00:24:53.802 "method": "nvmf_create_subsystem", 00:24:53.802 "params": { 00:24:53.802 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:53.802 "allow_any_host": false, 00:24:53.802 "serial_number": "00000000000000000000", 00:24:53.802 "model_number": "SPDK bdev Controller", 00:24:53.802 "max_namespaces": 32, 00:24:53.802 "min_cntlid": 1, 00:24:53.802 "max_cntlid": 65519, 00:24:53.802 "ana_reporting": false 00:24:53.802 } 00:24:53.802 }, 00:24:53.802 { 00:24:53.802 "method": "nvmf_subsystem_add_host", 00:24:53.802 "params": { 00:24:53.802 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:53.802 "host": "nqn.2016-06.io.spdk:host1", 00:24:53.802 "psk": "key0" 00:24:53.802 } 00:24:53.802 }, 00:24:53.802 { 00:24:53.802 "method": "nvmf_subsystem_add_ns", 00:24:53.802 "params": { 00:24:53.802 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:53.802 "namespace": { 00:24:53.802 "nsid": 1, 00:24:53.802 "bdev_name": "malloc0", 00:24:53.802 "nguid": "0DC47749DA714B60BF94BC96E886B004", 00:24:53.802 "uuid": "0dc47749-da71-4b60-bf94-bc96e886b004", 00:24:53.802 "no_auto_visible": false 00:24:53.802 } 00:24:53.802 } 00:24:53.802 }, 00:24:53.802 { 00:24:53.802 "method": "nvmf_subsystem_add_listener", 00:24:53.802 "params": { 00:24:53.802 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:53.802 "listen_address": { 00:24:53.802 "trtype": "TCP", 00:24:53.802 "adrfam": "IPv4", 00:24:53.802 "traddr": "10.0.0.2", 00:24:53.802 "trsvcid": "4420" 00:24:53.802 }, 00:24:53.802 "secure_channel": false, 00:24:53.802 "sock_impl": "ssl" 00:24:53.802 } 00:24:53.802 } 00:24:53.802 ] 00:24:53.802 } 00:24:53.802 ] 00:24:53.802 }' 00:24:53.802 12:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:54.062 12:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:24:54.062 "subsystems": [ 00:24:54.062 { 00:24:54.062 "subsystem": "keyring", 00:24:54.062 "config": [ 00:24:54.062 { 00:24:54.062 "method": "keyring_file_add_key", 00:24:54.062 "params": { 00:24:54.062 "name": "key0", 00:24:54.062 "path": "/tmp/tmp.K7nSVjnHlu" 00:24:54.062 } 00:24:54.062 } 00:24:54.062 ] 00:24:54.062 }, 00:24:54.062 { 00:24:54.062 "subsystem": "iobuf", 00:24:54.062 "config": [ 00:24:54.062 { 00:24:54.062 "method": "iobuf_set_options", 00:24:54.062 "params": { 00:24:54.062 "small_pool_count": 8192, 00:24:54.062 "large_pool_count": 1024, 00:24:54.062 "small_bufsize": 8192, 00:24:54.062 "large_bufsize": 135168 00:24:54.062 } 00:24:54.062 } 00:24:54.062 ] 00:24:54.062 }, 00:24:54.062 { 00:24:54.062 "subsystem": "sock", 00:24:54.062 "config": [ 00:24:54.062 { 00:24:54.062 "method": "sock_set_default_impl", 00:24:54.062 "params": { 00:24:54.062 "impl_name": "posix" 00:24:54.062 } 00:24:54.062 }, 00:24:54.062 { 00:24:54.062 "method": "sock_impl_set_options", 00:24:54.062 "params": { 00:24:54.062 "impl_name": "ssl", 00:24:54.062 
"recv_buf_size": 4096, 00:24:54.062 "send_buf_size": 4096, 00:24:54.062 "enable_recv_pipe": true, 00:24:54.062 "enable_quickack": false, 00:24:54.062 "enable_placement_id": 0, 00:24:54.062 "enable_zerocopy_send_server": true, 00:24:54.062 "enable_zerocopy_send_client": false, 00:24:54.062 "zerocopy_threshold": 0, 00:24:54.062 "tls_version": 0, 00:24:54.062 "enable_ktls": false 00:24:54.062 } 00:24:54.062 }, 00:24:54.062 { 00:24:54.062 "method": "sock_impl_set_options", 00:24:54.062 "params": { 00:24:54.062 "impl_name": "posix", 00:24:54.062 "recv_buf_size": 2097152, 00:24:54.062 "send_buf_size": 2097152, 00:24:54.062 "enable_recv_pipe": true, 00:24:54.062 "enable_quickack": false, 00:24:54.062 "enable_placement_id": 0, 00:24:54.062 "enable_zerocopy_send_server": true, 00:24:54.062 "enable_zerocopy_send_client": false, 00:24:54.062 "zerocopy_threshold": 0, 00:24:54.062 "tls_version": 0, 00:24:54.062 "enable_ktls": false 00:24:54.062 } 00:24:54.062 } 00:24:54.062 ] 00:24:54.062 }, 00:24:54.062 { 00:24:54.062 "subsystem": "vmd", 00:24:54.062 "config": [] 00:24:54.062 }, 00:24:54.062 { 00:24:54.062 "subsystem": "accel", 00:24:54.062 "config": [ 00:24:54.062 { 00:24:54.062 "method": "accel_set_options", 00:24:54.062 "params": { 00:24:54.062 "small_cache_size": 128, 00:24:54.062 "large_cache_size": 16, 00:24:54.062 "task_count": 2048, 00:24:54.062 "sequence_count": 2048, 00:24:54.062 "buf_count": 2048 00:24:54.062 } 00:24:54.062 } 00:24:54.062 ] 00:24:54.062 }, 00:24:54.062 { 00:24:54.062 "subsystem": "bdev", 00:24:54.062 "config": [ 00:24:54.062 { 00:24:54.062 "method": "bdev_set_options", 00:24:54.062 "params": { 00:24:54.062 "bdev_io_pool_size": 65535, 00:24:54.062 "bdev_io_cache_size": 256, 00:24:54.062 "bdev_auto_examine": true, 00:24:54.062 "iobuf_small_cache_size": 128, 00:24:54.062 "iobuf_large_cache_size": 16 00:24:54.062 } 00:24:54.062 }, 00:24:54.062 { 00:24:54.062 "method": "bdev_raid_set_options", 00:24:54.062 "params": { 00:24:54.062 "process_window_size_kb": 1024, 00:24:54.062 "process_max_bandwidth_mb_sec": 0 00:24:54.062 } 00:24:54.062 }, 00:24:54.062 { 00:24:54.063 "method": "bdev_iscsi_set_options", 00:24:54.063 "params": { 00:24:54.063 "timeout_sec": 30 00:24:54.063 } 00:24:54.063 }, 00:24:54.063 { 00:24:54.063 "method": "bdev_nvme_set_options", 00:24:54.063 "params": { 00:24:54.063 "action_on_timeout": "none", 00:24:54.063 "timeout_us": 0, 00:24:54.063 "timeout_admin_us": 0, 00:24:54.063 "keep_alive_timeout_ms": 10000, 00:24:54.063 "arbitration_burst": 0, 00:24:54.063 "low_priority_weight": 0, 00:24:54.063 "medium_priority_weight": 0, 00:24:54.063 "high_priority_weight": 0, 00:24:54.063 "nvme_adminq_poll_period_us": 10000, 00:24:54.063 "nvme_ioq_poll_period_us": 0, 00:24:54.063 "io_queue_requests": 512, 00:24:54.063 "delay_cmd_submit": true, 00:24:54.063 "transport_retry_count": 4, 00:24:54.063 "bdev_retry_count": 3, 00:24:54.063 "transport_ack_timeout": 0, 00:24:54.063 "ctrlr_loss_timeout_sec": 0, 00:24:54.063 "reconnect_delay_sec": 0, 00:24:54.063 "fast_io_fail_timeout_sec": 0, 00:24:54.063 "disable_auto_failback": false, 00:24:54.063 "generate_uuids": false, 00:24:54.063 "transport_tos": 0, 00:24:54.063 "nvme_error_stat": false, 00:24:54.063 "rdma_srq_size": 0, 00:24:54.063 "io_path_stat": false, 00:24:54.063 "allow_accel_sequence": false, 00:24:54.063 "rdma_max_cq_size": 0, 00:24:54.063 "rdma_cm_event_timeout_ms": 0, 00:24:54.063 "dhchap_digests": [ 00:24:54.063 "sha256", 00:24:54.063 "sha384", 00:24:54.063 "sha512" 00:24:54.063 ], 00:24:54.063 "dhchap_dhgroups": [ 
00:24:54.063 "null", 00:24:54.063 "ffdhe2048", 00:24:54.063 "ffdhe3072", 00:24:54.063 "ffdhe4096", 00:24:54.063 "ffdhe6144", 00:24:54.063 "ffdhe8192" 00:24:54.063 ] 00:24:54.063 } 00:24:54.063 }, 00:24:54.063 { 00:24:54.063 "method": "bdev_nvme_attach_controller", 00:24:54.063 "params": { 00:24:54.063 "name": "nvme0", 00:24:54.063 "trtype": "TCP", 00:24:54.063 "adrfam": "IPv4", 00:24:54.063 "traddr": "10.0.0.2", 00:24:54.063 "trsvcid": "4420", 00:24:54.063 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:54.063 "prchk_reftag": false, 00:24:54.063 "prchk_guard": false, 00:24:54.063 "ctrlr_loss_timeout_sec": 0, 00:24:54.063 "reconnect_delay_sec": 0, 00:24:54.063 "fast_io_fail_timeout_sec": 0, 00:24:54.063 "psk": "key0", 00:24:54.063 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:54.063 "hdgst": false, 00:24:54.063 "ddgst": false 00:24:54.063 } 00:24:54.063 }, 00:24:54.063 { 00:24:54.063 "method": "bdev_nvme_set_hotplug", 00:24:54.063 "params": { 00:24:54.063 "period_us": 100000, 00:24:54.063 "enable": false 00:24:54.063 } 00:24:54.063 }, 00:24:54.063 { 00:24:54.063 "method": "bdev_enable_histogram", 00:24:54.063 "params": { 00:24:54.063 "name": "nvme0n1", 00:24:54.063 "enable": true 00:24:54.063 } 00:24:54.063 }, 00:24:54.063 { 00:24:54.063 "method": "bdev_wait_for_examine" 00:24:54.063 } 00:24:54.063 ] 00:24:54.063 }, 00:24:54.063 { 00:24:54.063 "subsystem": "nbd", 00:24:54.063 "config": [] 00:24:54.063 } 00:24:54.063 ] 00:24:54.063 }' 00:24:54.063 12:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 405595 00:24:54.063 12:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 405595 ']' 00:24:54.063 12:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 405595 00:24:54.063 12:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:54.063 12:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:54.063 12:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 405595 00:24:54.063 12:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:54.063 12:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:54.063 12:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 405595' 00:24:54.063 killing process with pid 405595 00:24:54.063 12:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 405595 00:24:54.063 Received shutdown signal, test time was about 1.000000 seconds 00:24:54.063 00:24:54.063 Latency(us) 00:24:54.063 [2024-12-16T11:46:20.130Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:54.063 [2024-12-16T11:46:20.130Z] =================================================================================================================== 00:24:54.063 [2024-12-16T11:46:20.130Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:54.063 12:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 405595 00:24:54.322 12:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 405575 00:24:54.322 12:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 405575 ']' 00:24:54.322 12:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 
405575 00:24:54.322 12:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:54.322 12:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:54.322 12:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 405575 00:24:54.322 12:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:54.322 12:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:54.322 12:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 405575' 00:24:54.322 killing process with pid 405575 00:24:54.322 12:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 405575 00:24:54.322 12:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 405575 00:24:54.582 12:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:24:54.582 12:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:54.582 12:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:24:54.582 "subsystems": [ 00:24:54.582 { 00:24:54.582 "subsystem": "keyring", 00:24:54.582 "config": [ 00:24:54.582 { 00:24:54.582 "method": "keyring_file_add_key", 00:24:54.582 "params": { 00:24:54.582 "name": "key0", 00:24:54.582 "path": "/tmp/tmp.K7nSVjnHlu" 00:24:54.582 } 00:24:54.582 } 00:24:54.582 ] 00:24:54.582 }, 00:24:54.582 { 00:24:54.582 "subsystem": "iobuf", 00:24:54.582 "config": [ 00:24:54.582 { 00:24:54.582 "method": "iobuf_set_options", 00:24:54.582 "params": { 00:24:54.582 "small_pool_count": 8192, 00:24:54.582 "large_pool_count": 1024, 00:24:54.582 "small_bufsize": 8192, 00:24:54.582 "large_bufsize": 135168 00:24:54.582 } 00:24:54.582 } 00:24:54.582 ] 00:24:54.582 }, 00:24:54.582 { 00:24:54.582 "subsystem": "sock", 00:24:54.582 "config": [ 00:24:54.582 { 00:24:54.582 "method": "sock_set_default_impl", 00:24:54.582 "params": { 00:24:54.582 "impl_name": "posix" 00:24:54.582 } 00:24:54.582 }, 00:24:54.582 { 00:24:54.582 "method": "sock_impl_set_options", 00:24:54.582 "params": { 00:24:54.582 "impl_name": "ssl", 00:24:54.582 "recv_buf_size": 4096, 00:24:54.582 "send_buf_size": 4096, 00:24:54.582 "enable_recv_pipe": true, 00:24:54.582 "enable_quickack": false, 00:24:54.582 "enable_placement_id": 0, 00:24:54.582 "enable_zerocopy_send_server": true, 00:24:54.582 "enable_zerocopy_send_client": false, 00:24:54.582 "zerocopy_threshold": 0, 00:24:54.582 "tls_version": 0, 00:24:54.582 "enable_ktls": false 00:24:54.582 } 00:24:54.582 }, 00:24:54.582 { 00:24:54.582 "method": "sock_impl_set_options", 00:24:54.582 "params": { 00:24:54.582 "impl_name": "posix", 00:24:54.582 "recv_buf_size": 2097152, 00:24:54.582 "send_buf_size": 2097152, 00:24:54.582 "enable_recv_pipe": true, 00:24:54.582 "enable_quickack": false, 00:24:54.582 "enable_placement_id": 0, 00:24:54.582 "enable_zerocopy_send_server": true, 00:24:54.582 "enable_zerocopy_send_client": false, 00:24:54.582 "zerocopy_threshold": 0, 00:24:54.582 "tls_version": 0, 00:24:54.582 "enable_ktls": false 00:24:54.582 } 00:24:54.582 } 00:24:54.582 ] 00:24:54.582 }, 00:24:54.582 { 00:24:54.582 "subsystem": "vmd", 00:24:54.582 "config": [] 00:24:54.582 }, 00:24:54.582 { 00:24:54.582 "subsystem": "accel", 00:24:54.582 "config": [ 00:24:54.582 { 00:24:54.582 
"method": "accel_set_options", 00:24:54.582 "params": { 00:24:54.582 "small_cache_size": 128, 00:24:54.582 "large_cache_size": 16, 00:24:54.582 "task_count": 2048, 00:24:54.582 "sequence_count": 2048, 00:24:54.582 "buf_count": 2048 00:24:54.582 } 00:24:54.582 } 00:24:54.582 ] 00:24:54.582 }, 00:24:54.582 { 00:24:54.582 "subsystem": "bdev", 00:24:54.582 "config": [ 00:24:54.582 { 00:24:54.582 "method": "bdev_set_options", 00:24:54.582 "params": { 00:24:54.582 "bdev_io_pool_size": 65535, 00:24:54.582 "bdev_io_cache_size": 256, 00:24:54.582 "bdev_auto_examine": true, 00:24:54.582 "iobuf_small_cache_size": 128, 00:24:54.582 "iobuf_large_cache_size": 16 00:24:54.582 } 00:24:54.582 }, 00:24:54.582 { 00:24:54.582 "method": "bdev_raid_set_options", 00:24:54.582 "params": { 00:24:54.582 "process_window_size_kb": 1024, 00:24:54.582 "process_max_bandwidth_mb_sec": 0 00:24:54.582 } 00:24:54.582 }, 00:24:54.582 { 00:24:54.582 "method": "bdev_iscsi_set_options", 00:24:54.582 "params": { 00:24:54.582 "timeout_sec": 30 00:24:54.582 } 00:24:54.582 }, 00:24:54.582 { 00:24:54.582 "method": "bdev_nvme_set_options", 00:24:54.582 "params": { 00:24:54.582 "action_on_timeout": "none", 00:24:54.582 "timeout_us": 0, 00:24:54.582 "timeout_admin_us": 0, 00:24:54.582 "keep_alive_timeout_ms": 10000, 00:24:54.582 "arbitration_burst": 0, 00:24:54.582 "low_priority_weight": 0, 00:24:54.582 "medium_priority_weight": 0, 00:24:54.582 "high_priority_weight": 0, 00:24:54.582 "nvme_adminq_poll_period_us": 10000, 00:24:54.582 "nvme_ioq_poll_period_us": 0, 00:24:54.582 "io_queue_requests": 0, 00:24:54.582 "delay_cmd_submit": true, 00:24:54.582 "transport_retry_count": 4, 00:24:54.582 "bdev_retry_count": 3, 00:24:54.582 "transport_ack_timeout": 0, 00:24:54.582 "ctrlr_loss_timeout_sec": 0, 00:24:54.582 "reconnect_delay_sec": 0, 00:24:54.582 "fast_io_fail_timeout_sec": 0, 00:24:54.582 "disable_auto_failback": false, 00:24:54.582 "generate_uuids": false, 00:24:54.582 "transport_tos": 0, 00:24:54.582 "nvme_error_stat": false, 00:24:54.582 "rdma_srq_size": 0, 00:24:54.582 "io_path_stat": false, 00:24:54.582 "allow_accel_sequence": false, 00:24:54.582 "rdma_max_cq_size": 0, 00:24:54.582 "rdma_cm_event_timeout_ms": 0, 00:24:54.582 "dhchap_digests": [ 00:24:54.582 "sha256", 00:24:54.582 "sha384", 00:24:54.582 "sha512" 00:24:54.582 ], 00:24:54.582 "dhchap_dhgroups": [ 00:24:54.582 "null", 00:24:54.582 "ffdhe2048", 00:24:54.582 "ffdhe3072", 00:24:54.582 "ffdhe4096", 00:24:54.582 "ffdhe6144", 00:24:54.582 "ffdhe8192" 00:24:54.582 ] 00:24:54.582 } 00:24:54.582 }, 00:24:54.582 { 00:24:54.582 "method": "bdev_nvme_set_hotplug", 00:24:54.582 "params": { 00:24:54.582 "period_us": 100000, 00:24:54.582 "enable": false 00:24:54.582 } 00:24:54.582 }, 00:24:54.582 { 00:24:54.582 "method": "bdev_malloc_create", 00:24:54.582 "params": { 00:24:54.582 "name": "malloc0", 00:24:54.582 "num_blocks": 8192, 00:24:54.582 "block_size": 4096, 00:24:54.582 "physical_block_size": 4096, 00:24:54.582 "uuid": "0dc47749-da71-4b60-bf94-bc96e886b004", 00:24:54.582 "optimal_io_boundary": 0, 00:24:54.582 "md_size": 0, 00:24:54.582 "dif_type": 0, 00:24:54.582 "dif_is_head_of_md": false, 00:24:54.582 "dif_pi_format": 0 00:24:54.583 } 00:24:54.583 }, 00:24:54.583 { 00:24:54.583 "method": "bdev_wait_for_examine" 00:24:54.583 } 00:24:54.583 ] 00:24:54.583 }, 00:24:54.583 { 00:24:54.583 "subsystem": "nbd", 00:24:54.583 "config": [] 00:24:54.583 }, 00:24:54.583 { 00:24:54.583 "subsystem": "scheduler", 00:24:54.583 "config": [ 00:24:54.583 { 00:24:54.583 "method": 
"framework_set_scheduler", 00:24:54.583 "params": { 00:24:54.583 "name": "static" 00:24:54.583 } 00:24:54.583 } 00:24:54.583 ] 00:24:54.583 }, 00:24:54.583 { 00:24:54.583 "subsystem": "nvmf", 00:24:54.583 "config": [ 00:24:54.583 { 00:24:54.583 "method": "nvmf_set_config", 00:24:54.583 "params": { 00:24:54.583 "discovery_filter": "match_any", 00:24:54.583 "admin_cmd_passthru": { 00:24:54.583 "identify_ctrlr": false 00:24:54.583 }, 00:24:54.583 "dhchap_digests": [ 00:24:54.583 "sha256", 00:24:54.583 "sha384", 00:24:54.583 "sha512" 00:24:54.583 ], 00:24:54.583 "dhchap_dhgroups": [ 00:24:54.583 "null", 00:24:54.583 "ffdhe2048", 00:24:54.583 "ffdhe3072", 00:24:54.583 "ffdhe4096", 00:24:54.583 "ffdhe6144", 00:24:54.583 "ffdhe8192" 00:24:54.583 ] 00:24:54.583 } 00:24:54.583 }, 00:24:54.583 { 00:24:54.583 "method": "nvmf_set_max_subsystems", 00:24:54.583 "params": { 00:24:54.583 "max_subsystems": 1024 00:24:54.583 } 00:24:54.583 }, 00:24:54.583 { 00:24:54.583 "method": "nvmf_set_crdt", 00:24:54.583 "params": { 00:24:54.583 "crdt1": 0, 00:24:54.583 "crdt2": 0, 00:24:54.583 "crdt3": 0 00:24:54.583 } 00:24:54.583 }, 00:24:54.583 { 00:24:54.583 "method": "nvmf_create_transport", 00:24:54.583 "params": { 00:24:54.583 "trtype": "TCP", 00:24:54.583 "max_queue_depth": 128, 00:24:54.583 "max_io_qpairs_per_ctrlr": 127, 00:24:54.583 "in_capsule_data_size": 4096, 00:24:54.583 "max_io_size": 131072, 00:24:54.583 "io_unit_size": 131072, 00:24:54.583 "max_aq_depth": 128, 00:24:54.583 "num_shared_buffers": 12:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:54.583 511, 00:24:54.583 "buf_cache_size": 4294967295, 00:24:54.583 "dif_insert_or_strip": false, 00:24:54.583 "zcopy": false, 00:24:54.583 "c2h_success": false, 00:24:54.583 "sock_priority": 0, 00:24:54.583 "abort_timeout_sec": 1, 00:24:54.583 "ack_timeout": 0, 00:24:54.583 "data_wr_pool_size": 0 00:24:54.583 } 00:24:54.583 }, 00:24:54.583 { 00:24:54.583 "method": "nvmf_create_subsystem", 00:24:54.583 "params": { 00:24:54.583 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:54.583 "allow_any_host": false, 00:24:54.583 "serial_number": "00000000000000000000", 00:24:54.583 "model_number": "SPDK bdev Controller", 00:24:54.583 "max_namespaces": 32, 00:24:54.583 "min_cntlid": 1, 00:24:54.583 "max_cntlid": 65519, 00:24:54.583 "ana_reporting": false 00:24:54.583 } 00:24:54.583 }, 00:24:54.583 { 00:24:54.583 "method": "nvmf_subsystem_add_host", 00:24:54.583 "params": { 00:24:54.583 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:54.583 "host": "nqn.2016-06.io.spdk:host1", 00:24:54.583 "psk": "key0" 00:24:54.583 } 00:24:54.583 }, 00:24:54.583 { 00:24:54.583 "method": "nvmf_subsystem_add_ns", 00:24:54.583 "params": { 00:24:54.583 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:54.583 "namespace": { 00:24:54.583 "nsid": 1, 00:24:54.583 "bdev_name": "malloc0", 00:24:54.583 "nguid": "0DC47749DA714B60BF94BC96E886B004", 00:24:54.583 "uuid": "0dc47749-da71-4b60-bf94-bc96e886b004", 00:24:54.583 "no_auto_visible": false 00:24:54.583 } 00:24:54.583 } 00:24:54.583 }, 00:24:54.583 { 00:24:54.583 "method": "nvmf_subsystem_add_listener", 00:24:54.583 "params": { 00:24:54.583 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:54.583 "listen_address": { 00:24:54.583 "trtype": "TCP", 00:24:54.583 "adrfam": "IPv4", 00:24:54.583 "traddr": "10.0.0.2", 00:24:54.583 "trsvcid": "4420" 00:24:54.583 }, 00:24:54.583 "secure_channel": false, 00:24:54.583 "sock_impl": "ssl" 00:24:54.583 } 00:24:54.583 } 00:24:54.583 ] 00:24:54.583 } 00:24:54.583 ] 00:24:54.583 }' 
00:24:54.583 12:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:54.583 12:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=406058 00:24:54.583 12:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 406058 00:24:54.583 12:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:54.583 12:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 406058 ']' 00:24:54.583 12:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:54.583 12:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:54.583 12:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:54.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:54.583 12:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:54.583 12:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:54.583 [2024-12-16 12:46:20.519608] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:24:54.583 [2024-12-16 12:46:20.519654] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:54.583 [2024-12-16 12:46:20.588687] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:54.583 [2024-12-16 12:46:20.622939] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:54.583 [2024-12-16 12:46:20.622980] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:54.583 [2024-12-16 12:46:20.622988] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:54.583 [2024-12-16 12:46:20.622995] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:54.583 [2024-12-16 12:46:20.623000] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
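waitforlisten above blocks until the newly started target answers on its RPC socket. A simplified sketch of what that helper does, assuming the real version in autotest_common.sh follows roughly this shape (retry count and sleep interval here are illustrative, not the test's literal values):

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            # rpc_get_methods only succeeds once the app is up and listening
            scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
            kill -0 "$pid" 2> /dev/null || return 1  # give up if the app died
            sleep 0.5
        done
        return 1
    }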
00:24:54.583 [2024-12-16 12:46:20.623068] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:54.842 [2024-12-16 12:46:20.849944] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:54.842 [2024-12-16 12:46:20.881846] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:54.842 [2024-12-16 12:46:20.882070] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:55.410 12:46:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:55.410 12:46:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:55.410 12:46:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:55.410 12:46:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:55.410 12:46:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:55.410 12:46:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:55.410 12:46:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=406300 00:24:55.410 12:46:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 406300 /var/tmp/bdevperf.sock 00:24:55.410 12:46:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 406300 ']' 00:24:55.410 12:46:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:55.410 12:46:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:24:55.410 12:46:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:55.410 12:46:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:55.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
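bdevperf is started with -z (wait for RPC configuration) on its own socket, /var/tmp/bdevperf.sock, and the JSON echoed next wires up the initiator side of the TLS connection: a keyring entry for the PSK plus a bdev_nvme_attach_controller call referencing it. The same setup could be expressed as live RPCs once bdevperf is listening; a hedged sketch using rpc.py (addresses, NQNs, and the key path are the ones visible in the config below):

    rpc="scripts/rpc.py -s /var/tmp/bdevperf.sock"
    $rpc keyring_file_add_key key0 /tmp/tmp.K7nSVjnHlu   # PSK file from the config below
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0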
00:24:55.410 12:46:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:24:55.410 "subsystems": [ 00:24:55.410 { 00:24:55.410 "subsystem": "keyring", 00:24:55.410 "config": [ 00:24:55.410 { 00:24:55.410 "method": "keyring_file_add_key", 00:24:55.410 "params": { 00:24:55.410 "name": "key0", 00:24:55.410 "path": "/tmp/tmp.K7nSVjnHlu" 00:24:55.410 } 00:24:55.410 } 00:24:55.410 ] 00:24:55.410 }, 00:24:55.410 { 00:24:55.410 "subsystem": "iobuf", 00:24:55.410 "config": [ 00:24:55.410 { 00:24:55.410 "method": "iobuf_set_options", 00:24:55.410 "params": { 00:24:55.410 "small_pool_count": 8192, 00:24:55.410 "large_pool_count": 1024, 00:24:55.410 "small_bufsize": 8192, 00:24:55.410 "large_bufsize": 135168 00:24:55.410 } 00:24:55.410 } 00:24:55.410 ] 00:24:55.410 }, 00:24:55.410 { 00:24:55.410 "subsystem": "sock", 00:24:55.410 "config": [ 00:24:55.410 { 00:24:55.410 "method": "sock_set_default_impl", 00:24:55.410 "params": { 00:24:55.410 "impl_name": "posix" 00:24:55.410 } 00:24:55.410 }, 00:24:55.410 { 00:24:55.410 "method": "sock_impl_set_options", 00:24:55.410 "params": { 00:24:55.410 "impl_name": "ssl", 00:24:55.410 "recv_buf_size": 4096, 00:24:55.410 "send_buf_size": 4096, 00:24:55.410 "enable_recv_pipe": true, 00:24:55.410 "enable_quickack": false, 00:24:55.410 "enable_placement_id": 0, 00:24:55.410 "enable_zerocopy_send_server": true, 00:24:55.410 "enable_zerocopy_send_client": false, 00:24:55.410 "zerocopy_threshold": 0, 00:24:55.410 "tls_version": 0, 00:24:55.410 "enable_ktls": false 00:24:55.410 } 00:24:55.410 }, 00:24:55.410 { 00:24:55.410 "method": "sock_impl_set_options", 00:24:55.410 "params": { 00:24:55.410 "impl_name": "posix", 00:24:55.410 "recv_buf_size": 2097152, 00:24:55.410 "send_buf_size": 2097152, 00:24:55.410 "enable_recv_pipe": true, 00:24:55.410 "enable_quickack": false, 00:24:55.410 "enable_placement_id": 0, 00:24:55.410 "enable_zerocopy_send_server": true, 00:24:55.410 "enable_zerocopy_send_client": false, 00:24:55.410 "zerocopy_threshold": 0, 00:24:55.410 "tls_version": 0, 00:24:55.410 "enable_ktls": false 00:24:55.410 } 00:24:55.410 } 00:24:55.410 ] 00:24:55.410 }, 00:24:55.410 { 00:24:55.410 "subsystem": "vmd", 00:24:55.410 "config": [] 00:24:55.410 }, 00:24:55.410 { 00:24:55.410 "subsystem": "accel", 00:24:55.410 "config": [ 00:24:55.410 { 00:24:55.410 "method": "accel_set_options", 00:24:55.410 "params": { 00:24:55.410 "small_cache_size": 128, 00:24:55.410 "large_cache_size": 16, 00:24:55.410 "task_count": 2048, 00:24:55.411 "sequence_count": 2048, 00:24:55.411 "buf_count": 2048 00:24:55.411 } 00:24:55.411 } 00:24:55.411 ] 00:24:55.411 }, 00:24:55.411 { 00:24:55.411 "subsystem": "bdev", 00:24:55.411 "config": [ 00:24:55.411 { 00:24:55.411 "method": "bdev_set_options", 00:24:55.411 "params": { 00:24:55.411 "bdev_io_pool_size": 65535, 00:24:55.411 "bdev_io_cache_size": 256, 00:24:55.411 "bdev_auto_examine": true, 00:24:55.411 "iobuf_small_cache_size": 128, 00:24:55.411 "iobuf_large_cache_size": 16 00:24:55.411 } 00:24:55.411 }, 00:24:55.411 { 00:24:55.411 "method": "bdev_raid_set_options", 00:24:55.411 "params": { 00:24:55.411 "process_window_size_kb": 1024, 00:24:55.411 "process_max_bandwidth_mb_sec": 0 00:24:55.411 } 00:24:55.411 }, 00:24:55.411 { 00:24:55.411 "method": "bdev_iscsi_set_options", 00:24:55.411 "params": { 00:24:55.411 "timeout_sec": 30 00:24:55.411 } 00:24:55.411 }, 00:24:55.411 { 00:24:55.411 "method": "bdev_nvme_set_options", 00:24:55.411 "params": { 00:24:55.411 "action_on_timeout": "none", 00:24:55.411 "timeout_us": 0, 
00:24:55.411 "timeout_admin_us": 0, 00:24:55.411 "keep_alive_timeout_ms": 10000, 00:24:55.411 "arbitration_burst": 0, 00:24:55.411 "low_priority_weight": 0, 00:24:55.411 "medium_priority_weight": 0, 00:24:55.411 "high_priority_weight": 0, 00:24:55.411 "nvme_adminq_poll_period_us": 10000, 00:24:55.411 "nvme_ioq_poll_period_us": 0, 00:24:55.411 "io_queue_requests": 512, 00:24:55.411 "delay_cmd_submit": true, 00:24:55.411 "transport_retry_count": 4, 00:24:55.411 "bdev_retry_count": 3, 00:24:55.411 "transport_ack_timeout": 0, 00:24:55.411 "ctrlr_loss_timeout_sec": 0, 00:24:55.411 "reconnect_delay_sec": 0, 00:24:55.411 "fast_io_fail_timeout_sec": 0, 00:24:55.411 "disable_auto_failback": false, 00:24:55.411 "generate_uuids": false, 00:24:55.411 "transport_tos": 0, 00:24:55.411 "nvme_error_stat": false, 00:24:55.411 "rdma_srq_size": 0, 00:24:55.411 "io_path_stat": false, 00:24:55.411 "allow_accel_sequence": false, 00:24:55.411 "rdma_max_cq_size": 0, 00:24:55.411 "rdma_cm_event_timeout_ms": 0, 00:24:55.411 "dhchap_digests": [ 00:24:55.411 "sha256", 00:24:55.411 "sha384", 00:24:55.411 "sha512" 00:24:55.411 ], 00:24:55.411 "dhchap_dhgroups": [ 00:24:55.411 "null", 00:24:55.411 "ffdhe2048", 00:24:55.411 "ffdhe3072", 00:24:55.411 "ffdhe4096", 00:24:55.411 "ffdhe6144", 00:24:55.411 "ffdhe8192" 00:24:55.411 ] 00:24:55.411 } 00:24:55.411 }, 00:24:55.411 { 00:24:55.411 "method": "bdev_nvme_attach_controller", 00:24:55.411 "params": { 00:24:55.411 "name": "nvme0", 00:24:55.411 "trtype": "TCP", 00:24:55.411 "adrfam": "IPv4", 00:24:55.411 "traddr": "10.0.0.2", 00:24:55.411 "trsvcid": "4420", 00:24:55.411 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:55.411 "prchk_reftag": false, 00:24:55.411 "prchk_guard": false, 00:24:55.411 "ctrlr_loss_timeout_sec": 0, 00:24:55.411 "reconnect_delay_sec": 0, 00:24:55.411 "fast_io_fail_timeout_sec": 0, 00:24:55.411 "psk": "key0", 00:24:55.411 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:55.411 "hdgst": false, 00:24:55.411 "ddgst": false 00:24:55.411 } 00:24:55.411 }, 00:24:55.411 { 00:24:55.411 "method": "bdev_nvme_set_hotplug", 00:24:55.411 "params": { 00:24:55.411 "period_us": 100000, 00:24:55.411 "enable": false 00:24:55.411 } 00:24:55.411 }, 00:24:55.411 { 00:24:55.411 "method": "bdev_enable_histogram", 00:24:55.411 "params": { 00:24:55.411 "name": "nvme0n1", 00:24:55.411 "enable": true 00:24:55.411 } 00:24:55.411 }, 00:24:55.411 { 00:24:55.411 "method": "bdev_wait_for_examine" 00:24:55.411 } 00:24:55.411 ] 00:24:55.411 }, 00:24:55.411 { 00:24:55.411 "subsystem": "nbd", 00:24:55.411 "config": [] 00:24:55.411 } 00:24:55.411 ] 00:24:55.411 }' 00:24:55.411 12:46:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:55.411 12:46:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:55.411 [2024-12-16 12:46:21.434820] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:24:55.411 [2024-12-16 12:46:21.434868] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid406300 ] 00:24:55.670 [2024-12-16 12:46:21.502635] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:55.670 [2024-12-16 12:46:21.541290] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:55.670 [2024-12-16 12:46:21.687143] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:56.238 12:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:56.238 12:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:56.238 12:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:56.238 12:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:24:56.498 12:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.498 12:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:56.757 Running I/O for 1 seconds... 00:24:57.695 4735.00 IOPS, 18.50 MiB/s 00:24:57.695 Latency(us) 00:24:57.695 [2024-12-16T11:46:23.762Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:57.695 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:57.695 Verification LBA range: start 0x0 length 0x2000 00:24:57.695 nvme0n1 : 1.02 4789.64 18.71 0.00 0.00 26546.11 5242.88 28086.86 00:24:57.695 [2024-12-16T11:46:23.762Z] =================================================================================================================== 00:24:57.695 [2024-12-16T11:46:23.762Z] Total : 4789.64 18.71 0.00 0.00 26546.11 5242.88 28086.86 00:24:57.695 { 00:24:57.695 "results": [ 00:24:57.695 { 00:24:57.695 "job": "nvme0n1", 00:24:57.695 "core_mask": "0x2", 00:24:57.695 "workload": "verify", 00:24:57.695 "status": "finished", 00:24:57.695 "verify_range": { 00:24:57.695 "start": 0, 00:24:57.695 "length": 8192 00:24:57.695 }, 00:24:57.695 "queue_depth": 128, 00:24:57.695 "io_size": 4096, 00:24:57.695 "runtime": 1.015317, 00:24:57.695 "iops": 4789.637128108759, 00:24:57.695 "mibps": 18.70952003167484, 00:24:57.695 "io_failed": 0, 00:24:57.695 "io_timeout": 0, 00:24:57.695 "avg_latency_us": 26546.10689306033, 00:24:57.695 "min_latency_us": 5242.88, 00:24:57.695 "max_latency_us": 28086.85714285714 00:24:57.695 } 00:24:57.695 ], 00:24:57.695 "core_count": 1 00:24:57.695 } 00:24:57.695 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:24:57.695 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:24:57.695 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:57.695 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:24:57.695 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:24:57.695 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 
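The bdevperf summary above is internally consistent: with the 4 KiB I/O size from -o 4k, throughput in MiB/s is just IOPS x 4096 / 2^20. A quick arithmetic check:

    awk 'BEGIN { printf "%.2f MiB/s\n", 4789.637128108759 * 4096 / 1048576 }'
    # prints 18.71 MiB/s, matching the reported "mibps" field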
00:24:57.695 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:57.695 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:24:57.695 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:24:57.695 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:24:57.695 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:57.695 nvmf_trace.0 00:24:57.695 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:24:57.695 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 406300 00:24:57.695 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 406300 ']' 00:24:57.695 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 406300 00:24:57.695 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:57.695 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:57.695 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 406300 00:24:57.695 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:57.695 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:57.695 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 406300' 00:24:57.695 killing process with pid 406300 00:24:57.695 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 406300 00:24:57.695 Received shutdown signal, test time was about 1.000000 seconds 00:24:57.695 00:24:57.695 Latency(us) 00:24:57.695 [2024-12-16T11:46:23.762Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:57.695 [2024-12-16T11:46:23.762Z] =================================================================================================================== 00:24:57.695 [2024-12-16T11:46:23.762Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:57.695 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 406300 00:24:57.955 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:57.955 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # nvmfcleanup 00:24:57.955 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:24:57.955 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:57.955 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:24:57.955 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:57.955 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:57.955 rmmod nvme_tcp 00:24:57.955 rmmod nvme_fabrics 00:24:57.955 rmmod nvme_keyring 00:24:57.955 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:57.955 12:46:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:24:57.955 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:24:57.955 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@513 -- # '[' -n 406058 ']' 00:24:57.955 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # killprocess 406058 00:24:57.955 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 406058 ']' 00:24:57.955 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 406058 00:24:57.955 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:57.955 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:57.955 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 406058 00:24:58.214 12:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:58.214 12:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:58.214 12:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 406058' 00:24:58.214 killing process with pid 406058 00:24:58.214 12:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 406058 00:24:58.214 12:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 406058 00:24:58.214 12:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:24:58.214 12:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:24:58.214 12:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:24:58.214 12:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:24:58.214 12:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-save 00:24:58.214 12:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:24:58.214 12:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-restore 00:24:58.214 12:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:58.214 12:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:58.214 12:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:58.214 12:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:58.214 12:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:00.752 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:00.752 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.Y0dmvEgH0d /tmp/tmp.6O0RWZeqGT /tmp/tmp.K7nSVjnHlu 00:25:00.752 00:25:00.752 real 1m19.147s 00:25:00.752 user 2m1.708s 00:25:00.752 sys 0m29.624s 00:25:00.752 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:00.752 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:00.752 ************************************ 00:25:00.752 END TEST nvmf_tls 00:25:00.752 
************************************ 00:25:00.752 12:46:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:00.752 12:46:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:00.752 12:46:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:00.752 12:46:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:00.752 ************************************ 00:25:00.752 START TEST nvmf_fips 00:25:00.752 ************************************ 00:25:00.752 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:00.752 * Looking for test storage... 00:25:00.752 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:25:00.752 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:00.752 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lcov --version 00:25:00.752 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:00.752 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:00.752 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:00.752 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:00.752 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:00.752 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:25:00.752 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:25:00.752 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:25:00.752 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:25:00.752 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:25:00.752 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:25:00.752 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:25:00.752 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:00.752 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:25:00.752 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:25:00.752 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:00.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.753 --rc genhtml_branch_coverage=1 00:25:00.753 --rc genhtml_function_coverage=1 00:25:00.753 --rc genhtml_legend=1 00:25:00.753 --rc geninfo_all_blocks=1 00:25:00.753 --rc geninfo_unexecuted_blocks=1 00:25:00.753 00:25:00.753 ' 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:00.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.753 --rc genhtml_branch_coverage=1 00:25:00.753 --rc genhtml_function_coverage=1 00:25:00.753 --rc genhtml_legend=1 00:25:00.753 --rc geninfo_all_blocks=1 00:25:00.753 --rc geninfo_unexecuted_blocks=1 00:25:00.753 00:25:00.753 ' 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:00.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.753 --rc genhtml_branch_coverage=1 00:25:00.753 --rc genhtml_function_coverage=1 00:25:00.753 --rc genhtml_legend=1 00:25:00.753 --rc geninfo_all_blocks=1 00:25:00.753 --rc geninfo_unexecuted_blocks=1 00:25:00.753 00:25:00.753 ' 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:00.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.753 --rc genhtml_branch_coverage=1 00:25:00.753 --rc genhtml_function_coverage=1 00:25:00.753 --rc genhtml_legend=1 00:25:00.753 --rc geninfo_all_blocks=1 00:25:00.753 --rc geninfo_unexecuted_blocks=1 00:25:00.753 00:25:00.753 ' 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:00.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:25:00.753 12:46:26 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:25:00.753 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:25:00.754 Error setting digest 00:25:00.754 40E2C5AF2E7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:25:00.754 40E2C5AF2E7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:25:00.754 
12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # prepare_net_devs 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@434 -- # local -g is_hw=no 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # remove_spdk_ns 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:25:00.754 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:07.325 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:07.325 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:25:07.325 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:07.325 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:07.325 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:07.325 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:07.325 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:07.325 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:25:07.325 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:07.325 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:07.326 12:46:32 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:07.326 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:07.326 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:07.326 12:46:32 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:07.326 Found net devices under 0000:af:00.0: cvl_0_0 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:07.326 Found net devices under 0000:af:00.1: cvl_0_1 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # is_hw=yes 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:07.326 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:07.326 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.441 ms 00:25:07.326 00:25:07.326 --- 10.0.0.2 ping statistics --- 00:25:07.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:07.326 rtt min/avg/max/mdev = 0.441/0.441/0.441/0.000 ms 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:07.326 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:07.326 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:25:07.326 00:25:07.326 --- 10.0.0.1 ping statistics --- 00:25:07.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:07.326 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # return 0 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # nvmfpid=410254 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # waitforlisten 410254 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:07.326 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 410254 ']' 00:25:07.327 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:07.327 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:07.327 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:07.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:07.327 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:07.327 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:07.327 [2024-12-16 12:46:32.786207] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
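Condensed, the nvmf_tcp_init sequence traced above builds a two-port loopback topology: one NIC port is moved into a private network namespace to host the target, its sibling stays in the root namespace as the initiator, and an iptables rule opens the NVMe/TCP port before reachability is verified with ping in both directions. A minimal standalone sketch of that setup, assuming the same cvl_0_0/cvl_0_1 port names and 10.0.0.0/24 addressing seen in the trace:

#!/usr/bin/env bash
# Sketch only: recreates the namespace topology from the trace above.
set -euo pipefail
NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0                 # clear stale addresses on both ports
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"          # target port lives inside the namespace

ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP listener port, tagged so cleanup can strip it later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

ping -c 1 10.0.0.2                       # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> initiator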
00:25:07.327 [2024-12-16 12:46:32.786255] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:07.327 [2024-12-16 12:46:32.854559] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:07.327 [2024-12-16 12:46:32.892442] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:07.327 [2024-12-16 12:46:32.892480] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:07.327 [2024-12-16 12:46:32.892488] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:07.327 [2024-12-16 12:46:32.892494] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:07.327 [2024-12-16 12:46:32.892499] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:07.327 [2024-12-16 12:46:32.892516] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:07.586 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:07.586 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:25:07.586 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:25:07.586 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:07.586 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:07.586 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:07.586 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:25:07.586 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:07.586 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:25:07.586 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.SSY 00:25:07.586 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:07.586 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.SSY 00:25:07.586 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.SSY 00:25:07.586 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.SSY 00:25:07.586 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:07.846 [2024-12-16 12:46:33.814810] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:07.846 [2024-12-16 12:46:33.830821] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:07.846 [2024-12-16 12:46:33.831023] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:07.846 malloc0 00:25:07.846 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:07.846 12:46:33 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=410497 00:25:07.846 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:07.846 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 410497 /var/tmp/bdevperf.sock 00:25:08.104 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 410497 ']' 00:25:08.104 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:08.104 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:08.105 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:08.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:08.105 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:08.105 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:08.105 [2024-12-16 12:46:33.973159] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:25:08.105 [2024-12-16 12:46:33.973211] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid410497 ] 00:25:08.105 [2024-12-16 12:46:34.041829] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.105 [2024-12-16 12:46:34.081500] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:25:08.105 12:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:08.105 12:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:25:08.105 12:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.SSY 00:25:08.364 12:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:08.623 [2024-12-16 12:46:34.530954] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:08.623 TLSTESTn1 00:25:08.623 12:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:08.881 Running I/O for 10 seconds... 
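The FIPS test body above reduces to three initiator-side steps: persist the interchange-format TLS PSK in an owner-only file, register that file with the bdevperf keyring, and attach an NVMe-oF controller over TLS with the named key (producing the TLSTESTn1 bdev that perform_tests then drives). A sketch of that flow, assuming $SPDK_ROOT points at an SPDK checkout and bdevperf is already listening on /var/tmp/bdevperf.sock:

# Sketch only: TLS PSK registration and attach, as exercised by fips.sh.
rpc="$SPDK_ROOT/scripts/rpc.py"          # assumption: $SPDK_ROOT set by the caller
sock=/var/tmp/bdevperf.sock

key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
key_path=$(mktemp -t spdk-psk.XXX)
echo -n "$key" > "$key_path"
chmod 0600 "$key_path"                   # PSK file must be owner-readable only

"$rpc" -s "$sock" keyring_file_add_key key0 "$key_path"
"$rpc" -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0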
00:25:10.754 5476.00 IOPS, 21.39 MiB/s [2024-12-16T11:46:37.756Z] 5295.50 IOPS, 20.69 MiB/s [2024-12-16T11:46:39.132Z] 5420.33 IOPS, 21.17 MiB/s [2024-12-16T11:46:40.067Z] 5368.75 IOPS, 20.97 MiB/s [2024-12-16T11:46:41.002Z] 5436.20 IOPS, 21.24 MiB/s [2024-12-16T11:46:41.937Z] 5358.33 IOPS, 20.93 MiB/s [2024-12-16T11:46:42.870Z] 5335.57 IOPS, 20.84 MiB/s [2024-12-16T11:46:43.805Z] 5299.88 IOPS, 20.70 MiB/s [2024-12-16T11:46:45.184Z] 5283.56 IOPS, 20.64 MiB/s [2024-12-16T11:46:45.184Z] 5261.80 IOPS, 20.55 MiB/s 00:25:19.117 Latency(us) 00:25:19.117 [2024-12-16T11:46:45.184Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:19.117 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:19.117 Verification LBA range: start 0x0 length 0x2000 00:25:19.117 TLSTESTn1 : 10.02 5265.61 20.57 0.00 0.00 24272.57 5211.67 39446.43 00:25:19.117 [2024-12-16T11:46:45.184Z] =================================================================================================================== 00:25:19.117 [2024-12-16T11:46:45.184Z] Total : 5265.61 20.57 0.00 0.00 24272.57 5211.67 39446.43 00:25:19.117 { 00:25:19.117 "results": [ 00:25:19.117 { 00:25:19.117 "job": "TLSTESTn1", 00:25:19.117 "core_mask": "0x4", 00:25:19.117 "workload": "verify", 00:25:19.117 "status": "finished", 00:25:19.117 "verify_range": { 00:25:19.117 "start": 0, 00:25:19.117 "length": 8192 00:25:19.117 }, 00:25:19.117 "queue_depth": 128, 00:25:19.117 "io_size": 4096, 00:25:19.117 "runtime": 10.01689, 00:25:19.117 "iops": 5265.606390805929, 00:25:19.117 "mibps": 20.56877496408566, 00:25:19.117 "io_failed": 0, 00:25:19.117 "io_timeout": 0, 00:25:19.117 "avg_latency_us": 24272.57342906798, 00:25:19.117 "min_latency_us": 5211.672380952381, 00:25:19.117 "max_latency_us": 39446.43047619048 00:25:19.117 } 00:25:19.117 ], 00:25:19.117 "core_count": 1 00:25:19.117 } 00:25:19.117 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:25:19.117 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:25:19.117 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:25:19.117 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:25:19.117 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:25:19.117 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:19.117 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:25:19.117 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:25:19.117 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:25:19.117 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:19.117 nvmf_trace.0 00:25:19.117 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:25:19.117 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 410497 00:25:19.117 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 410497 ']' 00:25:19.117 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@954 -- # kill -0 410497 00:25:19.117 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:25:19.117 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:19.117 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 410497 00:25:19.117 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:25:19.117 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:25:19.117 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 410497' 00:25:19.117 killing process with pid 410497 00:25:19.117 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 410497 00:25:19.117 Received shutdown signal, test time was about 10.000000 seconds 00:25:19.117 00:25:19.117 Latency(us) 00:25:19.117 [2024-12-16T11:46:45.184Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:19.117 [2024-12-16T11:46:45.184Z] =================================================================================================================== 00:25:19.117 [2024-12-16T11:46:45.184Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:19.117 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 410497 00:25:19.117 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:25:19.117 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # nvmfcleanup 00:25:19.117 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:25:19.117 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:19.118 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:25:19.118 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:19.118 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:19.118 rmmod nvme_tcp 00:25:19.118 rmmod nvme_fabrics 00:25:19.118 rmmod nvme_keyring 00:25:19.118 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:19.118 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:25:19.118 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:25:19.118 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@513 -- # '[' -n 410254 ']' 00:25:19.118 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # killprocess 410254 00:25:19.118 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 410254 ']' 00:25:19.118 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 410254 00:25:19.118 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:25:19.118 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:19.118 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 410254 00:25:19.449 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:19.449 12:46:45 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:19.449 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 410254' 00:25:19.449 killing process with pid 410254 00:25:19.449 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 410254 00:25:19.449 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 410254 00:25:19.449 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:25:19.449 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:25:19.449 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:25:19.449 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:25:19.449 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-save 00:25:19.449 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:25:19.449 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-restore 00:25:19.449 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:19.449 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:19.449 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:19.449 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:19.449 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:21.456 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:21.456 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.SSY 00:25:21.456 00:25:21.456 real 0m21.114s 00:25:21.456 user 0m22.238s 00:25:21.456 sys 0m9.445s 00:25:21.456 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:21.456 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:21.456 ************************************ 00:25:21.456 END TEST nvmf_fips 00:25:21.456 ************************************ 00:25:21.456 12:46:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:25:21.456 12:46:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:21.456 12:46:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:21.456 12:46:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:21.456 ************************************ 00:25:21.456 START TEST nvmf_control_msg_list 00:25:21.456 ************************************ 00:25:21.456 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:25:21.769 * Looking for test storage... 
00:25:21.769 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:21.769 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:21.769 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lcov --version 00:25:21.769 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:21.769 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:21.769 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:21.769 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:21.769 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:21.769 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:25:21.769 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:25:21.769 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:25:21.769 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:25:21.769 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:25:21.769 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:25:21.769 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:25:21.769 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:21.769 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:25:21.769 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:25:21.769 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:21.769 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:21.769 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:25:21.769 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:25:21.769 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:21.769 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:25:21.769 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:25:21.769 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:25:21.769 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:25:21.769 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:21.769 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:25:21.769 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:25:21.769 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:21.769 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:21.769 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:25:21.769 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:21.769 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:21.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.769 --rc genhtml_branch_coverage=1 00:25:21.769 --rc genhtml_function_coverage=1 00:25:21.769 --rc genhtml_legend=1 00:25:21.769 --rc geninfo_all_blocks=1 00:25:21.769 --rc geninfo_unexecuted_blocks=1 00:25:21.769 00:25:21.769 ' 00:25:21.769 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:21.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.769 --rc genhtml_branch_coverage=1 00:25:21.769 --rc genhtml_function_coverage=1 00:25:21.769 --rc genhtml_legend=1 00:25:21.769 --rc geninfo_all_blocks=1 00:25:21.769 --rc geninfo_unexecuted_blocks=1 00:25:21.769 00:25:21.769 ' 00:25:21.769 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:21.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.769 --rc genhtml_branch_coverage=1 00:25:21.769 --rc genhtml_function_coverage=1 00:25:21.769 --rc genhtml_legend=1 00:25:21.769 --rc geninfo_all_blocks=1 00:25:21.769 --rc geninfo_unexecuted_blocks=1 00:25:21.769 00:25:21.769 ' 00:25:21.769 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:21.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.769 --rc genhtml_branch_coverage=1 00:25:21.769 --rc genhtml_function_coverage=1 00:25:21.769 --rc genhtml_legend=1 00:25:21.769 --rc geninfo_all_blocks=1 00:25:21.769 --rc geninfo_unexecuted_blocks=1 00:25:21.769 00:25:21.769 ' 00:25:21.769 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:21.769 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:25:21.769 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:21.770 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:21.770 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:21.770 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:21.770 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:21.770 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:21.770 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:21.770 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:21.770 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:21.770 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:21.770 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:25:21.770 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:25:21.770 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:21.770 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:21.770 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:21.770 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:21.770 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:21.770 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:25:21.770 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:21.770 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:21.770 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:21.770 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.770 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.770 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.770 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:25:21.770 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.770 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:25:21.770 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:21.770 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:21.770 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:21.770 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:21.770 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:21.770 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:21.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:21.770 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:21.770 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:21.770 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:21.770 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:25:21.770 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:25:21.770 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:21.770 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # prepare_net_devs 00:25:21.770 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@434 -- # local -g is_hw=no 00:25:21.770 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # remove_spdk_ns 00:25:21.770 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:21.770 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:21.770 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:21.770 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:25:21.770 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:25:21.770 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:25:21.770 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:27.252 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:27.252 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:25:27.252 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:25:27.253 12:46:53 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:27.253 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:27.253 12:46:53 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:27.253 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:27.253 Found net devices under 0000:af:00.0: cvl_0_0 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:27.253 Found net devices under 
0000:af:00.1: cvl_0_1 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # is_hw=yes 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:27.253 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:27.512 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:27.512 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:27.512 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:27.512 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:27.512 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:27.512 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:27.771 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:27.771 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:27.771 12:46:53 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:27.771 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:27.771 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:27.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:25:27.771 00:25:27.771 --- 10.0.0.2 ping statistics --- 00:25:27.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:27.771 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:25:27.771 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:27.771 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:27.771 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.067 ms 00:25:27.771 00:25:27.771 --- 10.0.0.1 ping statistics --- 00:25:27.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:27.771 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:25:27.771 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:27.771 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # return 0 00:25:27.771 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:25:27.771 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:27.771 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:25:27.771 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:25:27.771 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:27.771 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:25:27.771 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:25:27.771 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:25:27.772 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:25:27.772 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:27.772 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:27.772 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # nvmfpid=415715 00:25:27.772 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:27.772 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # waitforlisten 415715 00:25:27.772 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 415715 ']' 00:25:27.772 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:27.772 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:25:27.772 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:27.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:27.772 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:27.772 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:27.772 [2024-12-16 12:46:53.772257] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:25:27.772 [2024-12-16 12:46:53.772307] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:28.031 [2024-12-16 12:46:53.845914] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:28.031 [2024-12-16 12:46:53.885249] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:28.031 [2024-12-16 12:46:53.885287] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:28.031 [2024-12-16 12:46:53.885294] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:28.031 [2024-12-16 12:46:53.885300] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:28.031 [2024-12-16 12:46:53.885306] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:28.031 [2024-12-16 12:46:53.885322] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:28.031 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:28.031 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:25:28.031 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:25:28.031 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:28.031 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:28.031 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:28.031 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:28.031 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:28.031 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:25:28.031 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.031 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:28.031 [2024-12-16 12:46:54.014845] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:28.031 12:46:54 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.031 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:25:28.031 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.031 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:28.031 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.031 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:28.031 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.031 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:28.031 Malloc0 00:25:28.031 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.031 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:28.031 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.031 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:28.031 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.031 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:28.031 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.031 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:28.031 [2024-12-16 12:46:54.073307] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:28.031 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.031 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=415786 00:25:28.031 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:28.031 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=415787 00:25:28.031 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:28.031 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=415788 00:25:28.031 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 415786 00:25:28.031 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:28.291 [2024-12-16 12:46:54.147792] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:28.291 [2024-12-16 12:46:54.147964] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:28.291 [2024-12-16 12:46:54.157582] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:29.227 Initializing NVMe Controllers 00:25:29.227 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:29.227 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:25:29.227 Initialization complete. Launching workers. 00:25:29.227 ======================================================== 00:25:29.227 Latency(us) 00:25:29.227 Device Information : IOPS MiB/s Average min max 00:25:29.227 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 41051.79 40746.29 42025.01 00:25:29.227 ======================================================== 00:25:29.227 Total : 25.00 0.10 41051.79 40746.29 42025.01 00:25:29.227 00:25:29.486 Initializing NVMe Controllers 00:25:29.486 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:29.486 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:25:29.486 Initialization complete. Launching workers. 00:25:29.486 ======================================================== 00:25:29.486 Latency(us) 00:25:29.486 Device Information : IOPS MiB/s Average min max 00:25:29.486 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 41084.07 40526.17 41910.53 00:25:29.486 ======================================================== 00:25:29.486 Total : 25.00 0.10 41084.07 40526.17 41910.53 00:25:29.486 00:25:29.486 Initializing NVMe Controllers 00:25:29.486 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:29.486 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:25:29.486 Initialization complete. Launching workers. 
00:25:29.486 ======================================================== 00:25:29.486 Latency(us) 00:25:29.486 Device Information : IOPS MiB/s Average min max 00:25:29.486 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3898.00 15.23 256.15 152.12 382.76 00:25:29.486 ======================================================== 00:25:29.486 Total : 3898.00 15.23 256.15 152.12 382.76 00:25:29.486 00:25:29.486 [2024-12-16 12:46:55.331567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c420d0 is same with the state(6) to be set 00:25:29.486 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 415787 00:25:29.486 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 415788 00:25:29.486 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:29.486 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:25:29.486 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # nvmfcleanup 00:25:29.486 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:25:29.486 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:29.486 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:25:29.486 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:29.486 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:29.486 rmmod nvme_tcp 00:25:29.486 rmmod nvme_fabrics 00:25:29.486 rmmod nvme_keyring 00:25:29.486 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:29.486 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:25:29.486 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:25:29.486 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@513 -- # '[' -n 415715 ']' 00:25:29.486 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # killprocess 415715 00:25:29.486 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 415715 ']' 00:25:29.486 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 415715 00:25:29.486 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:25:29.486 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:29.486 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 415715 00:25:29.486 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:29.486 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:29.486 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 415715' 00:25:29.486 killing process with pid 415715 00:25:29.486 12:46:55 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 415715 00:25:29.486 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@974 -- # wait 415715 00:25:29.746 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:25:29.746 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:25:29.746 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:25:29.746 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:25:29.746 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-save 00:25:29.746 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:25:29.746 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-restore 00:25:29.746 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:29.746 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:29.746 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:29.746 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:29.746 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:31.652 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:31.652 00:25:31.652 real 0m10.203s 00:25:31.652 user 0m6.796s 00:25:31.652 sys 0m5.258s 00:25:31.652 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:31.652 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:31.913 ************************************ 00:25:31.913 END TEST nvmf_control_msg_list 00:25:31.913 ************************************ 00:25:31.913 12:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:31.913 12:46:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:31.913 12:46:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:31.913 12:46:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:31.913 ************************************ 00:25:31.913 START TEST nvmf_wait_for_buf 00:25:31.913 ************************************ 00:25:31.913 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:31.913 * Looking for test storage... 
00:25:31.913 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:31.913 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:31.913 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lcov --version 00:25:31.913 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:31.913 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:31.913 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:31.913 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:31.913 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:31.913 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:25:31.913 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:25:31.913 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:25:31.913 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:25:31.913 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:25:31.913 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:25:31.913 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:25:31.913 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:31.913 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:25:31.913 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:25:31.913 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:31.913 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:31.913 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:25:31.913 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:25:31.913 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:31.913 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:25:31.913 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:31.913 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:25:31.913 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:25:31.913 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:31.913 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:25:31.913 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:31.913 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:31.913 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:31.913 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:25:31.913 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:31.913 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:31.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:31.913 --rc genhtml_branch_coverage=1 00:25:31.913 --rc genhtml_function_coverage=1 00:25:31.913 --rc genhtml_legend=1 00:25:31.913 --rc geninfo_all_blocks=1 00:25:31.913 --rc geninfo_unexecuted_blocks=1 00:25:31.913 00:25:31.913 ' 00:25:31.913 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:31.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:31.913 --rc genhtml_branch_coverage=1 00:25:31.913 --rc genhtml_function_coverage=1 00:25:31.913 --rc genhtml_legend=1 00:25:31.913 --rc geninfo_all_blocks=1 00:25:31.913 --rc geninfo_unexecuted_blocks=1 00:25:31.913 00:25:31.913 ' 00:25:31.913 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:31.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:31.913 --rc genhtml_branch_coverage=1 00:25:31.913 --rc genhtml_function_coverage=1 00:25:31.913 --rc genhtml_legend=1 00:25:31.913 --rc geninfo_all_blocks=1 00:25:31.913 --rc geninfo_unexecuted_blocks=1 00:25:31.913 00:25:31.913 ' 00:25:31.913 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:31.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:31.913 --rc genhtml_branch_coverage=1 00:25:31.913 --rc genhtml_function_coverage=1 00:25:31.913 --rc genhtml_legend=1 00:25:31.913 --rc geninfo_all_blocks=1 00:25:31.913 --rc geninfo_unexecuted_blocks=1 00:25:31.913 00:25:31.913 ' 00:25:31.913 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:31.913 12:46:57 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:25:31.913 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:31.913 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:31.913 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:31.913 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:31.913 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:31.914 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:31.914 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:31.914 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:31.914 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:31.914 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:31.914 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:25:31.914 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:25:31.914 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:31.914 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:31.914 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:31.914 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:31.914 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:31.914 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:31.914 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:31.914 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:31.914 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:31.914 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.914 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.914 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.914 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:25:31.914 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.914 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:25:31.914 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:31.914 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:31.914 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:31.914 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:31.914 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:31.914 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:31.914 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:31.914 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:31.914 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:31.914 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:31.914 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:25:31.914 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@465 -- # 
'[' -z tcp ']' 00:25:31.914 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:31.914 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # prepare_net_devs 00:25:31.914 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:25:31.914 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:25:31.914 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:31.914 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:31.914 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:32.173 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:25:32.173 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:25:32.173 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:32.173 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:38.747 
12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:38.747 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:38.747 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:25:38.747 
12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:38.747 Found net devices under 0000:af:00.0: cvl_0_0 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:38.747 Found net devices under 0000:af:00.1: cvl_0_1 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # is_hw=yes 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:38.747 12:47:03 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:38.747 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:38.748 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:38.748 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:38.748 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.485 ms 00:25:38.748 00:25:38.748 --- 10.0.0.2 ping statistics --- 00:25:38.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:38.748 rtt min/avg/max/mdev = 0.485/0.485/0.485/0.000 ms 00:25:38.748 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:38.748 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:38.748 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:25:38.748 00:25:38.748 --- 10.0.0.1 ping statistics --- 00:25:38.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:38.748 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:25:38.748 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:38.748 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # return 0 00:25:38.748 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:25:38.748 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:38.748 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:25:38.748 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:25:38.748 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:38.748 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:25:38.748 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:25:38.748 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:25:38.748 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:25:38.748 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:38.748 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:38.748 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # nvmfpid=419471 00:25:38.748 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:38.748 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # waitforlisten 419471 00:25:38.748 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 419471 ']' 00:25:38.748 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:38.748 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:38.748 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:38.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:38.748 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:38.748 12:47:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:38.748 [2024-12-16 12:47:03.871081] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
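The wait_for_buf target starting here is configured, via the RPC sequence traced below, to starve the shared iobuf small pool on purpose and then prove that the TCP transport waited for buffers instead of failing. A condensed hand-run sketch, under the same assumptions as before (scripts/rpc.py against the app's default /var/tmp/spdk.sock stands in for the harness's rpc_cmd, paths are relative to the SPDK repo root, jq is available; all option values are verbatim from the trace that follows):

  # before framework init (the app was started with --wait-for-rpc):
  # no accel buffer caches, and a small pool of only 154 x 8192 B iobuf entries
  scripts/rpc.py accel_set_options --small-cache-size 0 --large-cache-size 0
  scripts/rpc.py iobuf_set_options --small-pool-count 154 --small_bufsize=8192
  scripts/rpc.py framework_start_init
  # transport options exactly as traced: -u 8192 -n 24 -b 24
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
  scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
  scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # drive 128 KiB randread at queue depth 4 for one second...
  build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
  # ...then require that the small pool ran dry at least once; the echo is a
  # stand-in for the harness's failure path
  retry=$(scripts/rpc.py iobuf_get_stats \
    | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
  [[ $retry -eq 0 ]] && echo "FAIL: no buffer-wait retries observed"

In the run below the retry counter reads 2006, so the non-zero check passes: the target really did block on iobuf availability (hence the ~32.6 ms average latency in the perf table) rather than erroring out. Note also that 10.0.0.2 lives on cvl_0_0 inside the cvl_0_0_ns_spdk network namespace plumbed just above, so initiator and target traffic traverses the e810 ports the device scan classified, not loopback.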
00:25:38.748 [2024-12-16 12:47:03.871133] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:38.748 [2024-12-16 12:47:03.941310] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:38.748 [2024-12-16 12:47:03.980357] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:38.748 [2024-12-16 12:47:03.980396] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:38.748 [2024-12-16 12:47:03.980403] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:38.748 [2024-12-16 12:47:03.980408] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:38.748 [2024-12-16 12:47:03.980413] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:38.748 [2024-12-16 12:47:03.980429] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:38.748 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:38.748 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:25:38.748 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:25:38.748 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:38.748 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:38.748 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:38.748 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:38.748 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:38.748 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:25:38.748 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.748 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:38.748 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.748 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:25:38.748 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.748 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:38.748 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.748 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:25:38.748 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.748 12:47:04 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:38.748 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.748 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:38.748 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.748 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:38.748 Malloc0 00:25:38.748 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.748 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:25:38.748 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.748 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:38.748 [2024-12-16 12:47:04.158418] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:38.748 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.748 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:25:38.748 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.748 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:38.748 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.748 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:38.748 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.748 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:38.748 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.748 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:38.748 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.748 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:38.748 [2024-12-16 12:47:04.182599] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:38.748 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.748 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:38.748 [2024-12-16 12:47:04.256182] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:39.685 Initializing NVMe Controllers 00:25:39.685 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:39.685 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:25:39.685 Initialization complete. Launching workers. 00:25:39.685 ======================================================== 00:25:39.685 Latency(us) 00:25:39.685 Device Information : IOPS MiB/s Average min max 00:25:39.685 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 126.80 15.85 32652.43 7287.14 63849.07 00:25:39.685 ======================================================== 00:25:39.685 Total : 126.80 15.85 32652.43 7287.14 63849.07 00:25:39.685 00:25:39.685 12:47:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:25:39.685 12:47:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:25:39.685 12:47:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.685 12:47:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:39.685 12:47:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.685 12:47:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2006 00:25:39.685 12:47:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2006 -eq 0 ]] 00:25:39.685 12:47:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:39.685 12:47:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:25:39.685 12:47:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # nvmfcleanup 00:25:39.685 12:47:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:25:39.685 12:47:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:39.685 12:47:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:25:39.685 12:47:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:39.685 12:47:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:39.685 rmmod nvme_tcp 00:25:39.685 rmmod nvme_fabrics 00:25:39.685 rmmod nvme_keyring 00:25:39.685 12:47:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:39.943 12:47:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:25:39.943 12:47:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:25:39.943 12:47:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@513 -- # '[' -n 419471 ']' 00:25:39.943 12:47:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # killprocess 419471 00:25:39.943 12:47:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 419471 ']' 00:25:39.943 12:47:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # kill -0 419471 00:25:39.943 12:47:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@955 -- # uname 00:25:39.943 12:47:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:39.943 12:47:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 419471 00:25:39.943 12:47:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:39.943 12:47:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:39.943 12:47:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 419471' 00:25:39.943 killing process with pid 419471 00:25:39.943 12:47:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 419471 00:25:39.943 12:47:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 419471 00:25:39.943 12:47:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:25:39.943 12:47:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:25:39.943 12:47:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:25:39.943 12:47:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:25:39.943 12:47:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-save 00:25:39.943 12:47:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:25:39.943 12:47:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-restore 00:25:39.943 12:47:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:39.943 12:47:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:39.943 12:47:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:39.944 12:47:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:39.944 12:47:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:42.480 00:25:42.480 real 0m10.282s 00:25:42.480 user 0m3.873s 00:25:42.480 sys 0m4.848s 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:42.480 ************************************ 00:25:42.480 END TEST nvmf_wait_for_buf 00:25:42.480 ************************************ 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:25:42.480 ************************************ 00:25:42.480 START TEST nvmf_fuzz 00:25:42.480 ************************************ 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:42.480 * Looking for test storage... 00:25:42.480 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:42.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.480 --rc genhtml_branch_coverage=1 00:25:42.480 --rc genhtml_function_coverage=1 00:25:42.480 --rc genhtml_legend=1 00:25:42.480 --rc geninfo_all_blocks=1 00:25:42.480 --rc geninfo_unexecuted_blocks=1 00:25:42.480 00:25:42.480 ' 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:42.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.480 --rc genhtml_branch_coverage=1 00:25:42.480 --rc genhtml_function_coverage=1 00:25:42.480 --rc genhtml_legend=1 00:25:42.480 --rc geninfo_all_blocks=1 00:25:42.480 --rc geninfo_unexecuted_blocks=1 00:25:42.480 00:25:42.480 ' 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:42.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.480 --rc genhtml_branch_coverage=1 00:25:42.480 --rc genhtml_function_coverage=1 00:25:42.480 --rc genhtml_legend=1 00:25:42.480 --rc geninfo_all_blocks=1 00:25:42.480 --rc geninfo_unexecuted_blocks=1 00:25:42.480 00:25:42.480 ' 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:42.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.480 --rc genhtml_branch_coverage=1 00:25:42.480 --rc genhtml_function_coverage=1 00:25:42.480 --rc genhtml_legend=1 00:25:42.480 --rc geninfo_all_blocks=1 00:25:42.480 --rc geninfo_unexecuted_blocks=1 00:25:42.480 00:25:42.480 ' 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:25:42.480 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:25:42.481 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:42.481 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:42.481 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:42.481 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:42.481 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:42.481 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:25:42.481 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:42.481 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:42.481 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:42.481 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.481 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.481 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.481 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:25:42.481 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.481 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:25:42.481 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:42.481 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:42.481 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:42.481 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:42.481 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:42.481 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:42.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:42.481 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:42.481 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:42.481 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:42.481 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:25:42.481 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:25:42.481 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@470 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:25:42.481 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@472 -- # prepare_net_devs 00:25:42.481 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@434 -- # local -g is_hw=no 00:25:42.481 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@436 -- # remove_spdk_ns 00:25:42.481 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:42.481 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:42.481 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:42.481 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:25:42.481 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:25:42.481 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:25:42.481 12:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:49.054 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:49.054 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:25:49.054 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:49.054 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:49.054 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:49.054 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:49.054 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:49.054 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:25:49.054 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:49.054 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:25:49.054 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:25:49.054 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:25:49.054 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:25:49.054 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:25:49.054 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:25:49.054 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:49.054 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:49.054 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:49.054 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:49.054 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:49.054 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:49.054 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:49.054 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:49.054 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:49.054 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:49.054 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:49.054 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:25:49.054 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:25:49.054 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:25:49.054 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:25:49.054 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:25:49.054 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:25:49.054 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:49.054 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:49.054 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:49.054 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:49.054 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:49.055 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:49.055 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:49.055 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:49.055 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:49.055 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:49.055 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:49.055 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:49.055 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:49.055 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:49.055 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:49.055 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:49.055 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:25:49.055 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:25:49.055 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:25:49.055 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:49.055 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:49.055 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:49.055 
12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:49.055 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:49.055 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:49.055 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:49.055 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:49.055 Found net devices under 0000:af:00.0: cvl_0_0 00:25:49.055 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:49.055 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:49.055 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:49.055 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:49.055 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:49.055 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:49.055 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:49.055 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:49.055 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:49.055 Found net devices under 0000:af:00.1: cvl_0_1 00:25:49.055 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:49.055 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:25:49.055 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # is_hw=yes 00:25:49.055 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:25:49.055 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:25:49.055 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:25:49.055 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:49.055 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:49.055 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:49.055 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:49.055 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:49.055 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:49.055 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:49.055 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:49.055 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:49.055 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:49.055 12:47:13 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:49.055 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:49.055 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:49.055 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:49.055 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:49.055 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:49.055 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:49.055 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:49.055 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:49.055 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:49.055 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:49.055 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:49.055 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:49.055 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:49.055 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.442 ms 00:25:49.055 00:25:49.055 --- 10.0.0.2 ping statistics --- 00:25:49.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.055 rtt min/avg/max/mdev = 0.442/0.442/0.442/0.000 ms 00:25:49.055 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:49.055 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:49.055 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:25:49.055 00:25:49.055 --- 10.0.0.1 ping statistics --- 00:25:49.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.055 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:25:49.055 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:49.055 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # return 0 00:25:49.055 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:25:49.055 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:49.055 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:25:49.055 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:25:49.055 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:49.055 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:25:49.055 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:25:49.055 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=423178 00:25:49.055 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:49.055 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:49.055 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 423178 00:25:49.055 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@831 -- # '[' -z 423178 ']' 00:25:49.055 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:49.055 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:49.055 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:49.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
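The trace above captures the harness assembling its NVMe/TCP test bed: the target-side port is moved into a private network namespace, 10.0.0.2/24 is assigned inside it and 10.0.0.1/24 on the initiator side, TCP port 4420 is opened through iptables, reachability is checked with one ping in each direction, and only then is nvmf_tgt launched inside the namespace. A minimal standalone sketch of the same sequence, assuming two back-to-back ports already named cvl_0_0 and cvl_0_1 as on this host (run as root):

# build the namespace-based TCP test bed (commands mirror the trace above)
ip netns add cvl_0_0_ns_spdk                          # private namespace for the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on port 4420
ping -c 1 10.0.0.2                                    # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator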
00:25:49.055 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:49.055 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:49.055 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:49.055 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # return 0 00:25:49.055 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:49.055 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.055 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:49.055 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.055 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:25:49.055 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.055 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:49.055 Malloc0 00:25:49.055 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.055 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:49.055 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.056 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:49.056 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.056 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:49.056 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.056 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:49.056 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.056 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:49.056 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.056 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:49.056 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.056 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:25:49.056 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:26:21.141 Fuzzing completed. 
Shutting down the fuzz application 00:26:21.141 00:26:21.141 Dumping successful admin opcodes: 00:26:21.141 8, 9, 10, 24, 00:26:21.141 Dumping successful io opcodes: 00:26:21.141 0, 9, 00:26:21.141 NS: 0x200003aeff00 I/O qp, Total commands completed: 1024897, total successful commands: 6017, random_seed: 3425028224 00:26:21.141 NS: 0x200003aeff00 admin qp, Total commands completed: 130030, total successful commands: 1058, random_seed: 1471872576 00:26:21.141 12:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:26:21.141 Fuzzing completed. Shutting down the fuzz application 00:26:21.141 00:26:21.141 Dumping successful admin opcodes: 00:26:21.141 24, 00:26:21.141 Dumping successful io opcodes: 00:26:21.141 00:26:21.141 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1879621626 00:26:21.141 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1879687166 00:26:21.141 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:21.141 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.141 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:21.141 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.141 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:26:21.141 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:26:21.141 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@512 -- # nvmfcleanup 00:26:21.142 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:26:21.142 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:21.142 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:26:21.142 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:21.142 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:21.142 rmmod nvme_tcp 00:26:21.142 rmmod nvme_fabrics 00:26:21.142 rmmod nvme_keyring 00:26:21.142 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:21.142 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:26:21.142 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:26:21.142 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@513 -- # '[' -n 423178 ']' 00:26:21.142 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@514 -- # killprocess 423178 00:26:21.142 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@950 -- # '[' -z 423178 ']' 00:26:21.142 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # kill -0 423178 00:26:21.142 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # uname 00:26:21.142 12:47:46 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:21.142 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 423178 00:26:21.142 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:21.142 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:21.142 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 423178' 00:26:21.142 killing process with pid 423178 00:26:21.142 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@969 -- # kill 423178 00:26:21.142 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@974 -- # wait 423178 00:26:21.142 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:26:21.142 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:26:21.142 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:26:21.142 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:26:21.142 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # iptables-save 00:26:21.142 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:26:21.142 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # iptables-restore 00:26:21.142 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:21.142 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:21.142 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:21.142 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:21.142 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:22.520 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:22.520 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:26:22.520 00:26:22.520 real 0m40.495s 00:26:22.520 user 0m54.133s 00:26:22.520 sys 0m15.613s 00:26:22.520 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:22.520 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:22.520 ************************************ 00:26:22.520 END TEST nvmf_fuzz 00:26:22.520 ************************************ 00:26:22.780 12:47:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:26:22.780 12:47:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:22.780 12:47:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:22.780 12:47:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:22.780 
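The fuzz pass that just ended provisioned its target entirely over JSON-RPC before aiming nvme_fuzz at the resulting listener. A condensed sketch of those steps, assuming SPDK's scripts/rpc.py stands in for the harness's rpc_cmd wrapper and binary paths are abbreviated (the trace shows the same calls with full paths):

# provision the fuzz target (same RPC calls as the rpc_cmd lines in the trace)
rpc.py nvmf_create_transport -t tcp -o -u 8192        # TCP transport; -u sets the I/O unit size
rpc.py bdev_malloc_create -b Malloc0 64 512           # 64 MiB RAM-backed bdev, 512 B blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# 30 s timed pass with a fixed seed (the second run in the trace replays example.json via -j)
nvme_fuzz -m 0x2 -t 30 -S 123456 \
    -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a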
************************************ 00:26:22.780 START TEST nvmf_multiconnection 00:26:22.780 ************************************ 00:26:22.780 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:26:22.780 * Looking for test storage... 00:26:22.780 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:22.780 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:22.780 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # lcov --version 00:26:22.780 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:22.780 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:22.780 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:22.780 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:22.780 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:22.780 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:26:22.780 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:26:22.780 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:26:22.780 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:26:22.780 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:26:22.780 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:26:22.780 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:26:22.780 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:22.780 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:26:22.780 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:26:22.780 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:22.780 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:22.780 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:26:22.780 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:26:22.780 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:22.780 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:26:22.780 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:26:22.780 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:26:22.780 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:26:22.780 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:22.780 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:26:22.780 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:26:22.780 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:22.780 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:22.780 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:26:22.780 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:22.780 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:22.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.780 --rc genhtml_branch_coverage=1 00:26:22.780 --rc genhtml_function_coverage=1 00:26:22.780 --rc genhtml_legend=1 00:26:22.780 --rc geninfo_all_blocks=1 00:26:22.780 --rc geninfo_unexecuted_blocks=1 00:26:22.780 00:26:22.780 ' 00:26:22.780 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:22.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.780 --rc genhtml_branch_coverage=1 00:26:22.780 --rc genhtml_function_coverage=1 00:26:22.780 --rc genhtml_legend=1 00:26:22.780 --rc geninfo_all_blocks=1 00:26:22.780 --rc geninfo_unexecuted_blocks=1 00:26:22.780 00:26:22.780 ' 00:26:22.780 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:22.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.780 --rc genhtml_branch_coverage=1 00:26:22.781 --rc genhtml_function_coverage=1 00:26:22.781 --rc genhtml_legend=1 00:26:22.781 --rc geninfo_all_blocks=1 00:26:22.781 --rc geninfo_unexecuted_blocks=1 00:26:22.781 00:26:22.781 ' 00:26:22.781 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:22.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.781 --rc genhtml_branch_coverage=1 00:26:22.781 --rc genhtml_function_coverage=1 00:26:22.781 --rc genhtml_legend=1 00:26:22.781 --rc geninfo_all_blocks=1 00:26:22.781 --rc geninfo_unexecuted_blocks=1 00:26:22.781 00:26:22.781 ' 00:26:22.781 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:22.781 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:26:22.781 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:22.781 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:22.781 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:22.781 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:22.781 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:22.781 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:22.781 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:22.781 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:22.781 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:22.781 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:22.781 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:26:22.781 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:26:22.781 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:22.781 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:22.781 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:22.781 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:22.781 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:22.781 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:26:22.781 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:22.781 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:22.781 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:22.781 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.781 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.781 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.781 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:26:22.781 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.781 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:26:22.781 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:22.781 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:22.781 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:22.781 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:22.781 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:22.781 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:22.781 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:22.781 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:22.781 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:22.781 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:22.781 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:22.781 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:22.781 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:26:22.781 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:26:22.781 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:26:22.781 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:22.781 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@472 -- # prepare_net_devs 00:26:22.781 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@434 -- # local -g is_hw=no 00:26:22.781 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@436 -- # remove_spdk_ns 00:26:22.781 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:22.781 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:22.781 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:22.781 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:26:22.781 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:26:22.781 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:26:22.781 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.373 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:29.373 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:26:29.373 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:29.373 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:29.373 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:29.373 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:29.373 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:29.373 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:26:29.373 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:29.373 12:47:54 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:26:29.373 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:26:29.373 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:26:29.373 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:26:29.373 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:26:29.373 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:26:29.373 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:29.373 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:29.373 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:29.373 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:29.373 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:29.373 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:29.373 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:29.373 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:29.373 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:29.373 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:29.373 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:29.373 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:26:29.373 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:26:29.373 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:26:29.373 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:26:29.373 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:26:29.373 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:26:29.373 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:26:29.373 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:29.373 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:29.373 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:26:29.373 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:26:29.373 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
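This stretch of the trace replays gather_supported_nvmf_pci_devs: known Intel E810/X722 and Mellanox device IDs are keyed into arrays, each matching PCI address is kept, and its kernel net device is resolved through /sys/bus/pci/devices/<addr>/net. The same lookup can be reproduced by hand; a sketch, assuming lspci is available and targeting the E810 ID 0x159b found on this host (the harness itself reads a prebuilt pci_bus_cache instead):

# list E810 ports (vendor 0x8086, device 0x159b) and the net devices behind them
for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    for dev in "/sys/bus/pci/devices/$pci/net/"*; do
        echo "Found net device under $pci: ${dev##*/}"
    done
done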
00:26:29.373 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:29.373 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:26:29.373 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:29.374 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ up == up ]] 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:29.374 Found net devices under 0000:af:00.0: cvl_0_0 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ up == up ]] 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@423 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:29.374 Found net devices under 0000:af:00.1: cvl_0_1 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # is_hw=yes 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk 
ip link set lo up 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:29.374 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:29.374 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.428 ms 00:26:29.374 00:26:29.374 --- 10.0.0.2 ping statistics --- 00:26:29.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:29.374 rtt min/avg/max/mdev = 0.428/0.428/0.428/0.000 ms 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:29.374 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:29.374 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:26:29.374 00:26:29.374 --- 10.0.0.1 ping statistics --- 00:26:29.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:29.374 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # return 0 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@505 -- # nvmfpid=431598 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@506 -- # waitforlisten 431598 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@831 -- # '[' -z 431598 ']' 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:29.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.374 [2024-12-16 12:47:54.762607] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:26:29.374 [2024-12-16 12:47:54.762647] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:29.374 [2024-12-16 12:47:54.839486] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:29.374 [2024-12-16 12:47:54.881388] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:29.374 [2024-12-16 12:47:54.881427] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:29.374 [2024-12-16 12:47:54.881434] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:29.374 [2024-12-16 12:47:54.881441] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:29.374 [2024-12-16 12:47:54.881446] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
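Condensed, the topology built in the trace above is: one port of the NIC pair (cvl_0_0) is moved into a private network namespace to act as the target, the other port (cvl_0_1) stays in the root namespace as the initiator, and the target application is launched inside the namespace. A minimal sketch, with names, addresses, and flags taken from this log (illustrative, not the harness's exact code):

  # build the namespace topology (run as root)
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port and sanity-check reachability in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # start the target inside the namespace; the harness then polls /var/tmp/spdk.sock
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &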
00:26:29.374 [2024-12-16 12:47:54.881515] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:29.374 [2024-12-16 12:47:54.881623] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:26:29.374 [2024-12-16 12:47:54.881641] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:26:29.374 [2024-12-16 12:47:54.881649] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # return 0 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:29.374 12:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.374 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.375 [2024-12-16 12:47:55.031930] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.375 Malloc1 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
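From this point the test body configures the target over its RPC socket: one TCP transport, then, for each i in 1..11, a 64 MiB malloc bdev, a subsystem cnode<i> with serial SPDK<i>, the bdev attached as a namespace, and (as the next entries in the trace show) a listener on 10.0.0.2:4420. With SPDK's scripts/rpc.py, which the rpc_cmd wrapper seen here ultimately invokes, the sequence reduces to roughly:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8 KiB IO unit size
  for i in $(seq 1 11); do
      ./scripts/rpc.py bdev_malloc_create 64 512 -b "Malloc$i"    # 64 MiB bdev, 512 B blocks
      ./scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
      ./scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
      ./scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  done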
00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.375 [2024-12-16 12:47:55.083191] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.375 Malloc2 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.375 12:47:55 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.375 Malloc3 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.375 Malloc4 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.375 Malloc5 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.375 Malloc6 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:26:29.375 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.376 Malloc7 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 
00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.376 Malloc8 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.376 Malloc9 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:26:29.376 12:47:55 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.376 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.635 Malloc10 00:26:29.635 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.635 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:26:29.635 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.635 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.635 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.635 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:26:29.635 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.635 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.635 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.635 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:26:29.635 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.635 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:26:29.635 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.635 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:29.635 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:26:29.635 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.635 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.635 Malloc11 00:26:29.635 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.635 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:26:29.635 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.635 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.635 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.635 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:26:29.635 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.635 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.635 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.635 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:26:29.636 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.636 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.636 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.636 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:26:29.636 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:29.636 12:47:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:30.573 12:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:26:30.573 12:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:30.573 12:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:30.573 12:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:30.573 12:47:56 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:33.109 12:47:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:33.109 12:47:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:33.109 12:47:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:26:33.109 12:47:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:33.109 12:47:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:33.109 12:47:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:33.109 12:47:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:33.109 12:47:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:26:34.045 12:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:26:34.045 12:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:34.045 12:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:34.045 12:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:34.045 12:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:35.952 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:35.952 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:35.952 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:26:35.952 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:35.952 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:35.952 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:35.952 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:35.952 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:26:37.332 12:48:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:26:37.332 12:48:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:37.332 12:48:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 
nvme_devices=0 00:26:37.332 12:48:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:37.332 12:48:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:39.239 12:48:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:39.239 12:48:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:39.239 12:48:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:26:39.239 12:48:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:39.239 12:48:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:39.239 12:48:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:39.239 12:48:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:39.239 12:48:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:26:40.174 12:48:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:26:40.174 12:48:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:40.174 12:48:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:40.174 12:48:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:40.174 12:48:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:42.713 12:48:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:42.713 12:48:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:26:42.713 12:48:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:42.713 12:48:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:42.713 12:48:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:42.713 12:48:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:42.713 12:48:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:42.713 12:48:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:26:43.650 12:48:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:26:43.650 12:48:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # 
local i=0 00:26:43.650 12:48:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:43.650 12:48:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:43.650 12:48:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:45.558 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:45.558 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:45.558 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:26:45.558 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:45.558 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:45.558 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:45.558 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:45.558 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:26:46.939 12:48:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:26:46.939 12:48:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:46.939 12:48:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:46.939 12:48:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:46.939 12:48:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:48.845 12:48:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:48.845 12:48:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:48.845 12:48:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:26:48.845 12:48:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:48.845 12:48:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:48.845 12:48:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:48.845 12:48:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:48.845 12:48:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:26:50.223 12:48:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@30 -- # waitforserial SPDK7 00:26:50.223 12:48:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:50.223 12:48:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:50.223 12:48:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:50.223 12:48:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:52.129 12:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:52.129 12:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:52.129 12:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:26:52.387 12:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:52.387 12:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:52.387 12:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:52.387 12:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:52.387 12:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:26:53.766 12:48:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:26:53.766 12:48:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:53.766 12:48:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:53.766 12:48:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:53.766 12:48:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:55.671 12:48:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:55.671 12:48:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:55.671 12:48:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:26:55.671 12:48:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:55.671 12:48:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:55.671 12:48:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:55.671 12:48:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:55.671 12:48:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 
--hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:26:57.050 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:26:57.050 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:57.050 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:57.050 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:57.050 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:58.956 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:58.956 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:58.956 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:26:58.956 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:58.956 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:58.956 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:58.956 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:58.956 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:27:00.334 12:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:27:00.334 12:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:00.334 12:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:00.334 12:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:00.334 12:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:02.238 12:48:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:02.238 12:48:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:02.238 12:48:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:27:02.238 12:48:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:02.238 12:48:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:02.238 12:48:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:02.238 12:48:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:02.238 12:48:28 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:27:04.147 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:27:04.147 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:04.147 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:04.147 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:04.147 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:06.051 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:06.051 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:06.051 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:27:06.051 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:06.051 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:06.051 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:06.051 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:27:06.051 [global] 00:27:06.051 thread=1 00:27:06.051 invalidate=1 00:27:06.051 rw=read 00:27:06.051 time_based=1 00:27:06.051 runtime=10 00:27:06.051 ioengine=libaio 00:27:06.051 direct=1 00:27:06.051 bs=262144 00:27:06.051 iodepth=64 00:27:06.051 norandommap=1 00:27:06.051 numjobs=1 00:27:06.051 00:27:06.051 [job0] 00:27:06.051 filename=/dev/nvme0n1 00:27:06.051 [job1] 00:27:06.051 filename=/dev/nvme10n1 00:27:06.051 [job2] 00:27:06.051 filename=/dev/nvme11n1 00:27:06.051 [job3] 00:27:06.051 filename=/dev/nvme2n1 00:27:06.051 [job4] 00:27:06.051 filename=/dev/nvme3n1 00:27:06.051 [job5] 00:27:06.051 filename=/dev/nvme4n1 00:27:06.051 [job6] 00:27:06.051 filename=/dev/nvme5n1 00:27:06.051 [job7] 00:27:06.051 filename=/dev/nvme6n1 00:27:06.051 [job8] 00:27:06.051 filename=/dev/nvme7n1 00:27:06.051 [job9] 00:27:06.051 filename=/dev/nvme8n1 00:27:06.051 [job10] 00:27:06.051 filename=/dev/nvme9n1 00:27:06.051 Could not set queue depth (nvme0n1) 00:27:06.051 Could not set queue depth (nvme10n1) 00:27:06.051 Could not set queue depth (nvme11n1) 00:27:06.051 Could not set queue depth (nvme2n1) 00:27:06.051 Could not set queue depth (nvme3n1) 00:27:06.051 Could not set queue depth (nvme4n1) 00:27:06.051 Could not set queue depth (nvme5n1) 00:27:06.051 Could not set queue depth (nvme6n1) 00:27:06.051 Could not set queue depth (nvme7n1) 00:27:06.051 Could not set queue depth (nvme8n1) 00:27:06.051 Could not set queue depth (nvme9n1) 00:27:06.310 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:06.310 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 
256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:06.310 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:06.310 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:06.310 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:06.310 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:06.310 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:06.310 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:06.310 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:06.310 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:06.310 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:06.310 fio-3.35 00:27:06.310 Starting 11 threads 00:27:18.523 00:27:18.523 job0: (groupid=0, jobs=1): err= 0: pid=438508: Mon Dec 16 12:48:42 2024 00:27:18.523 read: IOPS=450, BW=113MiB/s (118MB/s)(1129MiB/10030msec) 00:27:18.523 slat (usec): min=15, max=184876, avg=1833.05, stdev=8215.91 00:27:18.523 clat (usec): min=840, max=731579, avg=140228.15, stdev=134542.96 00:27:18.523 lat (usec): min=874, max=731615, avg=142061.20, stdev=135829.15 00:27:18.524 clat percentiles (msec): 00:27:18.524 | 1.00th=[ 13], 5.00th=[ 46], 10.00th=[ 50], 20.00th=[ 53], 00:27:18.524 | 30.00th=[ 56], 40.00th=[ 62], 50.00th=[ 74], 60.00th=[ 97], 00:27:18.524 | 70.00th=[ 144], 80.00th=[ 230], 90.00th=[ 347], 95.00th=[ 426], 00:27:18.524 | 99.00th=[ 651], 99.50th=[ 676], 99.90th=[ 693], 99.95th=[ 718], 00:27:18.524 | 99.99th=[ 735] 00:27:18.524 bw ( KiB/s): min=30208, max=298496, per=14.72%, avg=113945.60, stdev=90616.69, samples=20 00:27:18.524 iops : min= 118, max= 1166, avg=445.10, stdev=353.97, samples=20 00:27:18.524 lat (usec) : 1000=0.02% 00:27:18.524 lat (msec) : 2=0.33%, 4=0.18%, 10=0.40%, 20=0.27%, 50=11.05% 00:27:18.524 lat (msec) : 100=48.18%, 250=21.27%, 500=15.15%, 750=3.15% 00:27:18.524 cpu : usr=0.15%, sys=1.85%, ctx=738, majf=0, minf=4097 00:27:18.524 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:27:18.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.524 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:18.524 issued rwts: total=4514,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.524 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:18.524 job1: (groupid=0, jobs=1): err= 0: pid=438509: Mon Dec 16 12:48:42 2024 00:27:18.524 read: IOPS=478, BW=120MiB/s (125MB/s)(1210MiB/10106msec) 00:27:18.524 slat (usec): min=8, max=213509, avg=1770.19, stdev=9016.47 00:27:18.524 clat (usec): min=605, max=655501, avg=131789.03, stdev=129395.77 00:27:18.524 lat (usec): min=631, max=694564, avg=133559.22, stdev=131223.17 00:27:18.524 clat percentiles (msec): 00:27:18.524 | 1.00th=[ 8], 5.00th=[ 13], 10.00th=[ 26], 20.00th=[ 55], 00:27:18.524 | 30.00th=[ 57], 40.00th=[ 60], 50.00th=[ 73], 60.00th=[ 84], 00:27:18.524 | 70.00th=[ 130], 80.00th=[ 224], 90.00th=[ 359], 95.00th=[ 426], 00:27:18.524 | 99.00th=[ 518], 99.50th=[ 542], 99.90th=[ 567], 99.95th=[ 567], 
00:27:18.524 | 99.99th=[ 659] 00:27:18.524 bw ( KiB/s): min=31744, max=373760, per=15.79%, avg=122208.35, stdev=106184.70, samples=20 00:27:18.524 iops : min= 124, max= 1460, avg=477.35, stdev=414.80, samples=20 00:27:18.524 lat (usec) : 750=0.12%, 1000=0.04% 00:27:18.524 lat (msec) : 2=0.17%, 4=0.17%, 10=2.19%, 20=4.73%, 50=10.93% 00:27:18.524 lat (msec) : 100=46.61%, 250=18.64%, 500=14.84%, 750=1.55% 00:27:18.524 cpu : usr=0.11%, sys=1.14%, ctx=1185, majf=0, minf=3722 00:27:18.524 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:27:18.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.524 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:18.524 issued rwts: total=4838,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.524 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:18.524 job2: (groupid=0, jobs=1): err= 0: pid=438510: Mon Dec 16 12:48:42 2024 00:27:18.524 read: IOPS=117, BW=29.4MiB/s (30.8MB/s)(298MiB/10152msec) 00:27:18.524 slat (usec): min=15, max=233535, avg=6386.51, stdev=24064.27 00:27:18.524 clat (usec): min=1743, max=932993, avg=537932.37, stdev=221996.39 00:27:18.524 lat (usec): min=1773, max=933035, avg=544318.88, stdev=224582.83 00:27:18.524 clat percentiles (msec): 00:27:18.524 | 1.00th=[ 5], 5.00th=[ 11], 10.00th=[ 167], 20.00th=[ 384], 00:27:18.524 | 30.00th=[ 460], 40.00th=[ 531], 50.00th=[ 575], 60.00th=[ 634], 00:27:18.524 | 70.00th=[ 659], 80.00th=[ 726], 90.00th=[ 785], 95.00th=[ 827], 00:27:18.524 | 99.00th=[ 877], 99.50th=[ 894], 99.90th=[ 936], 99.95th=[ 936], 00:27:18.524 | 99.99th=[ 936] 00:27:18.524 bw ( KiB/s): min=18432, max=92672, per=3.73%, avg=28876.80, stdev=15668.06, samples=20 00:27:18.524 iops : min= 72, max= 362, avg=112.80, stdev=61.20, samples=20 00:27:18.524 lat (msec) : 2=0.17%, 4=0.25%, 10=4.19%, 20=2.94%, 100=0.59% 00:27:18.524 lat (msec) : 250=2.85%, 500=24.24%, 750=49.75%, 1000=15.02% 00:27:18.524 cpu : usr=0.05%, sys=0.49%, ctx=238, majf=0, minf=4097 00:27:18.524 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=2.7%, >=64=94.7% 00:27:18.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.524 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:18.524 issued rwts: total=1192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.524 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:18.524 job3: (groupid=0, jobs=1): err= 0: pid=438511: Mon Dec 16 12:48:42 2024 00:27:18.524 read: IOPS=298, BW=74.6MiB/s (78.2MB/s)(757MiB/10150msec) 00:27:18.524 slat (usec): min=16, max=217288, avg=2863.44, stdev=12667.38 00:27:18.524 clat (msec): min=32, max=851, avg=211.45, stdev=180.33 00:27:18.524 lat (msec): min=32, max=851, avg=214.32, stdev=182.35 00:27:18.524 clat percentiles (msec): 00:27:18.524 | 1.00th=[ 58], 5.00th=[ 66], 10.00th=[ 70], 20.00th=[ 78], 00:27:18.524 | 30.00th=[ 83], 40.00th=[ 93], 50.00th=[ 121], 60.00th=[ 157], 00:27:18.524 | 70.00th=[ 253], 80.00th=[ 355], 90.00th=[ 527], 95.00th=[ 609], 00:27:18.524 | 99.00th=[ 701], 99.50th=[ 709], 99.90th=[ 785], 99.95th=[ 852], 00:27:18.524 | 99.99th=[ 852] 00:27:18.524 bw ( KiB/s): min=26112, max=220160, per=9.80%, avg=75881.00, stdev=64363.99, samples=20 00:27:18.524 iops : min= 102, max= 860, avg=296.40, stdev=251.43, samples=20 00:27:18.524 lat (msec) : 50=0.50%, 100=41.41%, 250=27.41%, 500=18.20%, 750=12.19% 00:27:18.524 lat (msec) : 1000=0.30% 00:27:18.524 cpu : usr=0.01%, sys=1.37%, ctx=491, majf=0, minf=4097 
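For context, each read job reported in this pass is an independent fio thread against one kernel NVMe-oF namespace. A standalone invocation that approximates one such job — a minimal sketch reconstructed from the parameters fio echoes above (rw=read, bs=256KiB, ioengine=libaio, iodepth=64, ~10 s time-based run), with the /dev/nvme0n1 path assumed from the disk stats further down — would be:

# Sketch only: reproduces one of the eleven read jobs outside the harness.
# Assumes /dev/nvme0n1 is an already-connected NVMe-oF namespace and that the
# read pass used the same direct=1/time_based settings as the randwrite pass below.
fio --name=job0 --filename=/dev/nvme0n1 \
    --rw=read --bs=256k --ioengine=libaio --iodepth=64 \
    --direct=1 --thread --time_based --runtime=10 --numjobs=1

The per-job blocks around this point (slat/clat/lat, clat percentiles, bw, iops, IO depths) are fio's standard per-thread summary for exactly this kind of run.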
00:27:18.524 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:27:18.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.524 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:18.524 issued rwts: total=3028,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.524 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:18.524 job4: (groupid=0, jobs=1): err= 0: pid=438512: Mon Dec 16 12:48:42 2024 00:27:18.524 read: IOPS=536, BW=134MiB/s (141MB/s)(1362MiB/10154msec) 00:27:18.524 slat (usec): min=14, max=607627, avg=1707.85, stdev=13030.15 00:27:18.524 clat (msec): min=2, max=1259, avg=117.47, stdev=174.77 00:27:18.524 lat (msec): min=2, max=1259, avg=119.18, stdev=177.23 00:27:18.524 clat percentiles (msec): 00:27:18.524 | 1.00th=[ 15], 5.00th=[ 30], 10.00th=[ 31], 20.00th=[ 32], 00:27:18.524 | 30.00th=[ 35], 40.00th=[ 40], 50.00th=[ 42], 60.00th=[ 46], 00:27:18.524 | 70.00th=[ 70], 80.00th=[ 161], 90.00th=[ 305], 95.00th=[ 435], 00:27:18.524 | 99.00th=[ 860], 99.50th=[ 927], 99.90th=[ 1011], 99.95th=[ 1011], 00:27:18.524 | 99.99th=[ 1267] 00:27:18.524 bw ( KiB/s): min=11776, max=512512, per=18.74%, avg=145030.74, stdev=162546.35, samples=19 00:27:18.524 iops : min= 46, max= 2002, avg=566.53, stdev=634.95, samples=19 00:27:18.524 lat (msec) : 4=0.09%, 10=0.37%, 20=1.08%, 50=63.26%, 100=7.80% 00:27:18.524 lat (msec) : 250=13.92%, 500=8.63%, 750=2.20%, 1000=2.42%, 2000=0.22% 00:27:18.524 cpu : usr=0.24%, sys=2.10%, ctx=763, majf=0, minf=4097 00:27:18.524 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:27:18.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.524 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:18.524 issued rwts: total=5446,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.524 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:18.524 job5: (groupid=0, jobs=1): err= 0: pid=438513: Mon Dec 16 12:48:42 2024 00:27:18.524 read: IOPS=299, BW=74.9MiB/s (78.6MB/s)(757MiB/10105msec) 00:27:18.524 slat (usec): min=19, max=449936, avg=2920.93, stdev=17236.40 00:27:18.524 clat (usec): min=1531, max=982093, avg=210379.81, stdev=273388.85 00:27:18.524 lat (usec): min=1562, max=982135, avg=213300.74, stdev=277293.20 00:27:18.524 clat percentiles (msec): 00:27:18.524 | 1.00th=[ 12], 5.00th=[ 33], 10.00th=[ 37], 20.00th=[ 41], 00:27:18.524 | 30.00th=[ 42], 40.00th=[ 43], 50.00th=[ 43], 60.00th=[ 45], 00:27:18.524 | 70.00th=[ 236], 80.00th=[ 506], 90.00th=[ 684], 95.00th=[ 793], 00:27:18.524 | 99.00th=[ 877], 99.50th=[ 894], 99.90th=[ 961], 99.95th=[ 969], 00:27:18.524 | 99.99th=[ 986] 00:27:18.524 bw ( KiB/s): min=16896, max=396056, per=9.81%, avg=75918.00, stdev=120833.22, samples=20 00:27:18.524 iops : min= 66, max= 1547, avg=296.55, stdev=471.99, samples=20 00:27:18.524 lat (msec) : 2=0.03%, 4=0.10%, 10=0.83%, 20=2.18%, 50=64.93% 00:27:18.524 lat (msec) : 100=0.69%, 250=1.39%, 500=9.51%, 750=13.34%, 1000=7.00% 00:27:18.524 cpu : usr=0.14%, sys=1.22%, ctx=382, majf=0, minf=4097 00:27:18.524 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:27:18.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.524 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:18.524 issued rwts: total=3028,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.524 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:18.524 job6: (groupid=0, 
jobs=1): err= 0: pid=438514: Mon Dec 16 12:48:42 2024 00:27:18.524 read: IOPS=182, BW=45.5MiB/s (47.7MB/s)(462MiB/10157msec) 00:27:18.524 slat (usec): min=14, max=356495, avg=4385.80, stdev=23370.59 00:27:18.524 clat (usec): min=1317, max=1143.1k, avg=346741.02, stdev=222732.65 00:27:18.524 lat (usec): min=1367, max=1143.2k, avg=351126.82, stdev=226424.85 00:27:18.524 clat percentiles (msec): 00:27:18.524 | 1.00th=[ 5], 5.00th=[ 11], 10.00th=[ 114], 20.00th=[ 157], 00:27:18.524 | 30.00th=[ 199], 40.00th=[ 241], 50.00th=[ 326], 60.00th=[ 384], 00:27:18.524 | 70.00th=[ 439], 80.00th=[ 489], 90.00th=[ 709], 95.00th=[ 818], 00:27:18.524 | 99.00th=[ 936], 99.50th=[ 978], 99.90th=[ 1011], 99.95th=[ 1150], 00:27:18.524 | 99.99th=[ 1150] 00:27:18.524 bw ( KiB/s): min=13824, max=128000, per=5.90%, avg=45696.00, stdev=30333.97, samples=20 00:27:18.524 iops : min= 54, max= 500, avg=178.50, stdev=118.49, samples=20 00:27:18.524 lat (msec) : 2=0.22%, 4=0.54%, 10=4.22%, 20=0.43%, 50=0.54% 00:27:18.524 lat (msec) : 100=2.11%, 250=33.04%, 500=41.43%, 750=8.92%, 1000=8.17% 00:27:18.524 lat (msec) : 2000=0.38% 00:27:18.524 cpu : usr=0.11%, sys=0.77%, ctx=340, majf=0, minf=4097 00:27:18.524 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6% 00:27:18.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.524 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:18.524 issued rwts: total=1849,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.524 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:18.524 job7: (groupid=0, jobs=1): err= 0: pid=438515: Mon Dec 16 12:48:42 2024 00:27:18.524 read: IOPS=138, BW=34.6MiB/s (36.3MB/s)(351MiB/10142msec) 00:27:18.524 slat (usec): min=20, max=437061, avg=6793.10, stdev=27328.84 00:27:18.524 clat (msec): min=21, max=946, avg=455.05, stdev=261.96 00:27:18.524 lat (msec): min=21, max=948, avg=461.84, stdev=266.44 00:27:18.524 clat percentiles (msec): 00:27:18.524 | 1.00th=[ 27], 5.00th=[ 33], 10.00th=[ 37], 20.00th=[ 70], 00:27:18.524 | 30.00th=[ 397], 40.00th=[ 477], 50.00th=[ 523], 60.00th=[ 575], 00:27:18.525 | 70.00th=[ 634], 80.00th=[ 693], 90.00th=[ 751], 95.00th=[ 776], 00:27:18.525 | 99.00th=[ 827], 99.50th=[ 885], 99.90th=[ 936], 99.95th=[ 944], 00:27:18.525 | 99.99th=[ 944] 00:27:18.525 bw ( KiB/s): min= 8704, max=201216, per=4.44%, avg=34329.60, stdev=39807.09, samples=20 00:27:18.525 iops : min= 34, max= 786, avg=134.10, stdev=155.50, samples=20 00:27:18.525 lat (msec) : 50=13.89%, 100=9.62%, 250=3.92%, 500=16.81%, 750=47.51% 00:27:18.525 lat (msec) : 1000=8.26% 00:27:18.525 cpu : usr=0.04%, sys=0.64%, ctx=200, majf=0, minf=4097 00:27:18.525 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.3%, >=64=95.5% 00:27:18.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.525 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:18.525 issued rwts: total=1404,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.525 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:18.525 job8: (groupid=0, jobs=1): err= 0: pid=438516: Mon Dec 16 12:48:42 2024 00:27:18.525 read: IOPS=227, BW=57.0MiB/s (59.8MB/s)(578MiB/10143msec) 00:27:18.525 slat (usec): min=14, max=252746, avg=3107.31, stdev=16513.06 00:27:18.525 clat (msec): min=2, max=1006, avg=277.35, stdev=202.48 00:27:18.525 lat (msec): min=2, max=1096, avg=280.46, stdev=204.70 00:27:18.525 clat percentiles (msec): 00:27:18.525 | 1.00th=[ 5], 5.00th=[ 41], 10.00th=[ 52], 
20.00th=[ 86], 00:27:18.525 | 30.00th=[ 140], 40.00th=[ 180], 50.00th=[ 243], 60.00th=[ 309], 00:27:18.525 | 70.00th=[ 363], 80.00th=[ 443], 90.00th=[ 567], 95.00th=[ 701], 00:27:18.525 | 99.00th=[ 844], 99.50th=[ 860], 99.90th=[ 911], 99.95th=[ 911], 00:27:18.525 | 99.99th=[ 1003] 00:27:18.525 bw ( KiB/s): min=22528, max=123904, per=7.44%, avg=57552.05, stdev=30605.38, samples=20 00:27:18.525 iops : min= 88, max= 484, avg=224.80, stdev=119.56, samples=20 00:27:18.525 lat (msec) : 4=0.43%, 10=2.25%, 50=6.96%, 100=14.66%, 250=26.56% 00:27:18.525 lat (msec) : 500=36.51%, 750=9.34%, 1000=3.24%, 2000=0.04% 00:27:18.525 cpu : usr=0.08%, sys=0.92%, ctx=395, majf=0, minf=4097 00:27:18.525 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:27:18.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.525 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:18.525 issued rwts: total=2312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.525 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:18.525 job9: (groupid=0, jobs=1): err= 0: pid=438517: Mon Dec 16 12:48:42 2024 00:27:18.525 read: IOPS=106, BW=26.7MiB/s (28.0MB/s)(271MiB/10152msec) 00:27:18.525 slat (usec): min=20, max=261891, avg=9160.44, stdev=29245.08 00:27:18.525 clat (msec): min=26, max=974, avg=589.53, stdev=158.55 00:27:18.525 lat (msec): min=26, max=1011, avg=598.69, stdev=161.25 00:27:18.525 clat percentiles (msec): 00:27:18.525 | 1.00th=[ 45], 5.00th=[ 351], 10.00th=[ 430], 20.00th=[ 481], 00:27:18.525 | 30.00th=[ 531], 40.00th=[ 558], 50.00th=[ 584], 60.00th=[ 617], 00:27:18.525 | 70.00th=[ 651], 80.00th=[ 709], 90.00th=[ 793], 95.00th=[ 860], 00:27:18.525 | 99.00th=[ 944], 99.50th=[ 969], 99.90th=[ 978], 99.95th=[ 978], 00:27:18.525 | 99.99th=[ 978] 00:27:18.525 bw ( KiB/s): min=14848, max=34816, per=3.37%, avg=26112.00, stdev=5828.24, samples=20 00:27:18.525 iops : min= 58, max= 136, avg=102.00, stdev=22.77, samples=20 00:27:18.525 lat (msec) : 50=1.57%, 250=1.29%, 500=19.28%, 750=62.64%, 1000=15.22% 00:27:18.525 cpu : usr=0.05%, sys=0.54%, ctx=151, majf=0, minf=4097 00:27:18.525 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=3.0%, >=64=94.2% 00:27:18.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.525 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:18.525 issued rwts: total=1084,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.525 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:18.525 job10: (groupid=0, jobs=1): err= 0: pid=438518: Mon Dec 16 12:48:42 2024 00:27:18.525 read: IOPS=199, BW=49.9MiB/s (52.3MB/s)(504MiB/10102msec) 00:27:18.525 slat (usec): min=16, max=456106, avg=2981.80, stdev=17699.71 00:27:18.525 clat (usec): min=1866, max=1143.0k, avg=317555.11, stdev=229790.40 00:27:18.525 lat (msec): min=2, max=1143, avg=320.54, stdev=231.40 00:27:18.525 clat percentiles (msec): 00:27:18.525 | 1.00th=[ 39], 5.00th=[ 54], 10.00th=[ 75], 20.00th=[ 118], 00:27:18.525 | 30.00th=[ 180], 40.00th=[ 207], 50.00th=[ 228], 60.00th=[ 317], 00:27:18.525 | 70.00th=[ 405], 80.00th=[ 485], 90.00th=[ 684], 95.00th=[ 802], 00:27:18.525 | 99.00th=[ 1045], 99.50th=[ 1062], 99.90th=[ 1062], 99.95th=[ 1062], 00:27:18.525 | 99.99th=[ 1150] 00:27:18.525 bw ( KiB/s): min=14848, max=111616, per=6.46%, avg=49971.20, stdev=24490.18, samples=20 00:27:18.525 iops : min= 58, max= 436, avg=195.20, stdev=95.66, samples=20 00:27:18.525 lat (msec) : 2=0.05%, 4=0.20%, 10=0.20%, 
50=3.77%, 100=13.10% 00:27:18.525 lat (msec) : 250=35.19%, 500=29.43%, 750=10.47%, 1000=6.35%, 2000=1.24% 00:27:18.525 cpu : usr=0.10%, sys=0.83%, ctx=499, majf=0, minf=4097 00:27:18.525 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9% 00:27:18.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.525 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:18.525 issued rwts: total=2015,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.525 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:18.525 00:27:18.525 Run status group 0 (all jobs): 00:27:18.525 READ: bw=756MiB/s (793MB/s), 26.7MiB/s-134MiB/s (28.0MB/s-141MB/s), io=7678MiB (8050MB), run=10030-10157msec 00:27:18.525 00:27:18.525 Disk stats (read/write): 00:27:18.525 nvme0n1: ios=8842/0, merge=0/0, ticks=1244922/0, in_queue=1244922, util=97.47% 00:27:18.525 nvme10n1: ios=9550/0, merge=0/0, ticks=1213817/0, in_queue=1213817, util=97.66% 00:27:18.525 nvme11n1: ios=2256/0, merge=0/0, ticks=1210146/0, in_queue=1210146, util=97.76% 00:27:18.525 nvme2n1: ios=5915/0, merge=0/0, ticks=1219764/0, in_queue=1219764, util=97.88% 00:27:18.525 nvme3n1: ios=10765/0, merge=0/0, ticks=1214885/0, in_queue=1214885, util=97.95% 00:27:18.525 nvme4n1: ios=5927/0, merge=0/0, ticks=1230305/0, in_queue=1230305, util=98.31% 00:27:18.525 nvme5n1: ios=3558/0, merge=0/0, ticks=1227169/0, in_queue=1227169, util=98.46% 00:27:18.525 nvme6n1: ios=2627/0, merge=0/0, ticks=1238751/0, in_queue=1238751, util=98.54% 00:27:18.525 nvme7n1: ios=4486/0, merge=0/0, ticks=1226611/0, in_queue=1226611, util=98.95% 00:27:18.525 nvme8n1: ios=2014/0, merge=0/0, ticks=1209708/0, in_queue=1209708, util=99.11% 00:27:18.525 nvme9n1: ios=3876/0, merge=0/0, ticks=1224135/0, in_queue=1224135, util=99.25% 00:27:18.525 12:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:27:18.525 [global] 00:27:18.525 thread=1 00:27:18.525 invalidate=1 00:27:18.525 rw=randwrite 00:27:18.525 time_based=1 00:27:18.525 runtime=10 00:27:18.525 ioengine=libaio 00:27:18.525 direct=1 00:27:18.525 bs=262144 00:27:18.525 iodepth=64 00:27:18.525 norandommap=1 00:27:18.525 numjobs=1 00:27:18.525 00:27:18.525 [job0] 00:27:18.525 filename=/dev/nvme0n1 00:27:18.525 [job1] 00:27:18.525 filename=/dev/nvme10n1 00:27:18.525 [job2] 00:27:18.525 filename=/dev/nvme11n1 00:27:18.525 [job3] 00:27:18.525 filename=/dev/nvme2n1 00:27:18.525 [job4] 00:27:18.525 filename=/dev/nvme3n1 00:27:18.525 [job5] 00:27:18.525 filename=/dev/nvme4n1 00:27:18.525 [job6] 00:27:18.525 filename=/dev/nvme5n1 00:27:18.525 [job7] 00:27:18.525 filename=/dev/nvme6n1 00:27:18.525 [job8] 00:27:18.525 filename=/dev/nvme7n1 00:27:18.525 [job9] 00:27:18.525 filename=/dev/nvme8n1 00:27:18.525 [job10] 00:27:18.525 filename=/dev/nvme9n1 00:27:18.525 Could not set queue depth (nvme0n1) 00:27:18.525 Could not set queue depth (nvme10n1) 00:27:18.525 Could not set queue depth (nvme11n1) 00:27:18.525 Could not set queue depth (nvme2n1) 00:27:18.525 Could not set queue depth (nvme3n1) 00:27:18.525 Could not set queue depth (nvme4n1) 00:27:18.525 Could not set queue depth (nvme5n1) 00:27:18.525 Could not set queue depth (nvme6n1) 00:27:18.525 Could not set queue depth (nvme7n1) 00:27:18.525 Could not set queue depth (nvme8n1) 00:27:18.525 Could not set queue depth (nvme9n1) 00:27:18.525 job0: (g=0): rw=randwrite, bs=(R) 
256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:18.525 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:18.525 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:18.525 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:18.525 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:18.525 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:18.525 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:18.525 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:18.525 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:18.525 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:18.525 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:18.525 fio-3.35 00:27:18.525 Starting 11 threads 00:27:28.514 00:27:28.514 job0: (groupid=0, jobs=1): err= 0: pid=439529: Mon Dec 16 12:48:54 2024 00:27:28.514 write: IOPS=282, BW=70.6MiB/s (74.1MB/s)(719MiB/10174msec); 0 zone resets 00:27:28.514 slat (usec): min=24, max=61697, avg=2365.08, stdev=7058.29 00:27:28.514 clat (usec): min=952, max=665445, avg=224002.14, stdev=155442.54 00:27:28.514 lat (usec): min=1005, max=674475, avg=226367.22, stdev=157147.97 00:27:28.514 clat percentiles (msec): 00:27:28.514 | 1.00th=[ 4], 5.00th=[ 18], 10.00th=[ 41], 20.00th=[ 65], 00:27:28.514 | 30.00th=[ 130], 40.00th=[ 171], 50.00th=[ 211], 60.00th=[ 247], 00:27:28.514 | 70.00th=[ 279], 80.00th=[ 351], 90.00th=[ 430], 95.00th=[ 550], 00:27:28.514 | 99.00th=[ 642], 99.50th=[ 651], 99.90th=[ 659], 99.95th=[ 667], 00:27:28.514 | 99.99th=[ 667] 00:27:28.514 bw ( KiB/s): min=28672, max=173056, per=6.72%, avg=71987.20, stdev=34672.41, samples=20 00:27:28.514 iops : min= 112, max= 676, avg=281.20, stdev=135.44, samples=20 00:27:28.514 lat (usec) : 1000=0.03% 00:27:28.514 lat (msec) : 2=0.42%, 4=0.63%, 10=2.43%, 20=2.16%, 50=9.60% 00:27:28.514 lat (msec) : 100=10.23%, 250=35.58%, 500=32.66%, 750=6.26% 00:27:28.514 cpu : usr=0.80%, sys=0.87%, ctx=1772, majf=0, minf=1 00:27:28.514 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:27:28.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.514 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:28.514 issued rwts: total=0,2875,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:28.514 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:28.514 job1: (groupid=0, jobs=1): err= 0: pid=439541: Mon Dec 16 12:48:54 2024 00:27:28.514 write: IOPS=335, BW=83.9MiB/s (87.9MB/s)(855MiB/10190msec); 0 zone resets 00:27:28.514 slat (usec): min=21, max=89185, avg=2077.79, stdev=6497.71 00:27:28.514 clat (usec): min=745, max=874913, avg=188577.30, stdev=173430.33 00:27:28.514 lat (usec): min=786, max=881285, avg=190655.09, stdev=175235.35 00:27:28.515 clat percentiles (msec): 00:27:28.515 | 1.00th=[ 5], 5.00th=[ 15], 10.00th=[ 24], 20.00th=[ 42], 
00:27:28.515 | 30.00th=[ 68], 40.00th=[ 102], 50.00th=[ 144], 60.00th=[ 178], 00:27:28.515 | 70.00th=[ 249], 80.00th=[ 275], 90.00th=[ 456], 95.00th=[ 558], 00:27:28.515 | 99.00th=[ 743], 99.50th=[ 802], 99.90th=[ 860], 99.95th=[ 869], 00:27:28.515 | 99.99th=[ 877] 00:27:28.515 bw ( KiB/s): min=32768, max=219648, per=8.02%, avg=85870.70, stdev=45954.16, samples=20 00:27:28.515 iops : min= 128, max= 858, avg=335.40, stdev=179.51, samples=20 00:27:28.515 lat (usec) : 750=0.03%, 1000=0.03% 00:27:28.515 lat (msec) : 2=0.41%, 4=0.29%, 10=2.66%, 20=4.65%, 50=16.38% 00:27:28.515 lat (msec) : 100=14.75%, 250=31.36%, 500=21.97%, 750=6.70%, 1000=0.76% 00:27:28.515 cpu : usr=0.79%, sys=1.03%, ctx=2039, majf=0, minf=1 00:27:28.515 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:27:28.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.515 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:28.515 issued rwts: total=0,3418,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:28.515 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:28.515 job2: (groupid=0, jobs=1): err= 0: pid=439542: Mon Dec 16 12:48:54 2024 00:27:28.515 write: IOPS=239, BW=59.9MiB/s (62.8MB/s)(611MiB/10192msec); 0 zone resets 00:27:28.515 slat (usec): min=27, max=191205, avg=3495.72, stdev=9377.07 00:27:28.515 clat (usec): min=1142, max=682589, avg=263399.62, stdev=164740.44 00:27:28.515 lat (usec): min=1218, max=682635, avg=266895.34, stdev=167133.88 00:27:28.515 clat percentiles (msec): 00:27:28.515 | 1.00th=[ 4], 5.00th=[ 29], 10.00th=[ 61], 20.00th=[ 153], 00:27:28.515 | 30.00th=[ 169], 40.00th=[ 205], 50.00th=[ 241], 60.00th=[ 262], 00:27:28.515 | 70.00th=[ 284], 80.00th=[ 388], 90.00th=[ 558], 95.00th=[ 625], 00:27:28.515 | 99.00th=[ 659], 99.50th=[ 676], 99.90th=[ 684], 99.95th=[ 684], 00:27:28.515 | 99.99th=[ 684] 00:27:28.515 bw ( KiB/s): min=22528, max=123904, per=5.69%, avg=60876.80, stdev=27146.88, samples=20 00:27:28.515 iops : min= 88, max= 484, avg=237.80, stdev=106.04, samples=20 00:27:28.515 lat (msec) : 2=0.33%, 4=1.23%, 10=2.38%, 20=0.41%, 50=4.10% 00:27:28.515 lat (msec) : 100=6.10%, 250=38.82%, 500=35.59%, 750=11.06% 00:27:28.515 cpu : usr=0.60%, sys=0.90%, ctx=1049, majf=0, minf=1 00:27:28.515 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4% 00:27:28.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.515 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:28.515 issued rwts: total=0,2442,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:28.515 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:28.515 job3: (groupid=0, jobs=1): err= 0: pid=439543: Mon Dec 16 12:48:54 2024 00:27:28.515 write: IOPS=491, BW=123MiB/s (129MB/s)(1235MiB/10044msec); 0 zone resets 00:27:28.515 slat (usec): min=21, max=85472, avg=1340.52, stdev=5020.09 00:27:28.515 clat (usec): min=639, max=720591, avg=128677.59, stdev=147490.56 00:27:28.515 lat (usec): min=670, max=730451, avg=130018.10, stdev=149066.79 00:27:28.515 clat percentiles (usec): 00:27:28.515 | 1.00th=[ 1844], 5.00th=[ 5080], 10.00th=[ 13042], 20.00th=[ 28705], 00:27:28.515 | 30.00th=[ 48497], 40.00th=[ 58459], 50.00th=[ 62653], 60.00th=[ 94897], 00:27:28.515 | 70.00th=[119014], 80.00th=[210764], 90.00th=[341836], 95.00th=[480248], 00:27:28.515 | 99.00th=[675283], 99.50th=[700449], 99.90th=[708838], 99.95th=[717226], 00:27:28.515 | 99.99th=[717226] 00:27:28.515 bw ( KiB/s): min=33280, max=351744, 
per=11.67%, avg=124876.80, stdev=92321.68, samples=20 00:27:28.515 iops : min= 130, max= 1374, avg=487.80, stdev=360.63, samples=20 00:27:28.515 lat (usec) : 750=0.06%, 1000=0.18% 00:27:28.515 lat (msec) : 2=1.01%, 4=2.45%, 10=4.74%, 20=5.26%, 50=17.39% 00:27:28.515 lat (msec) : 100=32.42%, 250=19.59%, 500=12.53%, 750=4.37% 00:27:28.515 cpu : usr=1.11%, sys=1.76%, ctx=3021, majf=0, minf=1 00:27:28.515 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:27:28.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.515 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:28.515 issued rwts: total=0,4941,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:28.515 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:28.515 job4: (groupid=0, jobs=1): err= 0: pid=439544: Mon Dec 16 12:48:54 2024 00:27:28.515 write: IOPS=345, BW=86.3MiB/s (90.5MB/s)(874MiB/10125msec); 0 zone resets 00:27:28.515 slat (usec): min=23, max=115292, avg=2000.03, stdev=6925.71 00:27:28.515 clat (usec): min=656, max=696597, avg=183235.73, stdev=165215.13 00:27:28.515 lat (usec): min=711, max=696643, avg=185235.76, stdev=167375.91 00:27:28.515 clat percentiles (usec): 00:27:28.515 | 1.00th=[ 1614], 5.00th=[ 4621], 10.00th=[ 11207], 20.00th=[ 30802], 00:27:28.515 | 30.00th=[ 62653], 40.00th=[ 93848], 50.00th=[147850], 60.00th=[185598], 00:27:28.515 | 70.00th=[252707], 80.00th=[295699], 90.00th=[434111], 95.00th=[557843], 00:27:28.515 | 99.00th=[624952], 99.50th=[658506], 99.90th=[658506], 99.95th=[692061], 00:27:28.515 | 99.99th=[700449] 00:27:28.515 bw ( KiB/s): min=26624, max=264704, per=8.21%, avg=87894.50, stdev=63616.80, samples=20 00:27:28.515 iops : min= 104, max= 1034, avg=343.30, stdev=248.50, samples=20 00:27:28.515 lat (usec) : 750=0.06%, 1000=0.31% 00:27:28.515 lat (msec) : 2=0.89%, 4=3.09%, 10=4.78%, 20=5.06%, 50=11.98% 00:27:28.515 lat (msec) : 100=15.56%, 250=27.91%, 500=23.42%, 750=6.95% 00:27:28.515 cpu : usr=0.64%, sys=1.31%, ctx=2337, majf=0, minf=1 00:27:28.515 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:27:28.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.515 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:28.515 issued rwts: total=0,3497,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:28.515 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:28.515 job5: (groupid=0, jobs=1): err= 0: pid=439545: Mon Dec 16 12:48:54 2024 00:27:28.515 write: IOPS=484, BW=121MiB/s (127MB/s)(1226MiB/10125msec); 0 zone resets 00:27:28.515 slat (usec): min=20, max=122513, avg=1325.06, stdev=4655.35 00:27:28.515 clat (usec): min=615, max=824494, avg=130739.73, stdev=132129.17 00:27:28.515 lat (usec): min=654, max=828281, avg=132064.79, stdev=133198.04 00:27:28.515 clat percentiles (usec): 00:27:28.515 | 1.00th=[ 889], 5.00th=[ 3261], 10.00th=[ 17171], 20.00th=[ 43254], 00:27:28.515 | 30.00th=[ 46924], 40.00th=[ 52691], 50.00th=[ 60031], 60.00th=[102237], 00:27:28.515 | 70.00th=[152044], 80.00th=[248513], 90.00th=[299893], 95.00th=[392168], 00:27:28.515 | 99.00th=[574620], 99.50th=[683672], 99.90th=[809501], 99.95th=[817890], 00:27:28.515 | 99.99th=[826278] 00:27:28.515 bw ( KiB/s): min=45056, max=354816, per=11.58%, avg=123935.75, stdev=86363.56, samples=20 00:27:28.515 iops : min= 176, max= 1386, avg=484.10, stdev=337.38, samples=20 00:27:28.515 lat (usec) : 750=0.59%, 1000=0.59% 00:27:28.515 lat (msec) : 2=2.02%, 4=2.81%, 10=2.04%, 
20=2.53%, 50=24.49% 00:27:28.515 lat (msec) : 100=23.67%, 250=21.63%, 500=17.74%, 750=1.57%, 1000=0.33% 00:27:28.515 cpu : usr=0.88%, sys=1.62%, ctx=2662, majf=0, minf=1 00:27:28.515 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:27:28.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.515 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:28.515 issued rwts: total=0,4905,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:28.515 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:28.515 job6: (groupid=0, jobs=1): err= 0: pid=439546: Mon Dec 16 12:48:54 2024 00:27:28.515 write: IOPS=542, BW=136MiB/s (142MB/s)(1373MiB/10122msec); 0 zone resets 00:27:28.515 slat (usec): min=23, max=222846, avg=1545.35, stdev=7027.69 00:27:28.515 clat (usec): min=878, max=820993, avg=116051.81, stdev=138075.21 00:27:28.515 lat (usec): min=941, max=821056, avg=117597.15, stdev=139639.33 00:27:28.515 clat percentiles (msec): 00:27:28.515 | 1.00th=[ 4], 5.00th=[ 12], 10.00th=[ 23], 20.00th=[ 41], 00:27:28.515 | 30.00th=[ 43], 40.00th=[ 52], 50.00th=[ 75], 60.00th=[ 96], 00:27:28.515 | 70.00th=[ 113], 80.00th=[ 148], 90.00th=[ 251], 95.00th=[ 351], 00:27:28.515 | 99.00th=[ 776], 99.50th=[ 793], 99.90th=[ 818], 99.95th=[ 818], 00:27:28.515 | 99.99th=[ 818] 00:27:28.515 bw ( KiB/s): min=18432, max=327680, per=12.98%, avg=138988.10, stdev=96210.69, samples=20 00:27:28.515 iops : min= 72, max= 1280, avg=542.90, stdev=375.84, samples=20 00:27:28.515 lat (usec) : 1000=0.09% 00:27:28.515 lat (msec) : 2=0.31%, 4=0.98%, 10=2.86%, 20=4.37%, 50=30.10% 00:27:28.515 lat (msec) : 100=24.20%, 250=27.08%, 500=6.39%, 750=2.08%, 1000=1.55% 00:27:28.515 cpu : usr=1.27%, sys=1.70%, ctx=2435, majf=0, minf=1 00:27:28.515 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:27:28.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.515 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:28.515 issued rwts: total=0,5492,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:28.515 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:28.515 job7: (groupid=0, jobs=1): err= 0: pid=439547: Mon Dec 16 12:48:54 2024 00:27:28.515 write: IOPS=552, BW=138MiB/s (145MB/s)(1407MiB/10186msec); 0 zone resets 00:27:28.515 slat (usec): min=19, max=36701, avg=1667.68, stdev=4315.06 00:27:28.515 clat (msec): min=2, max=562, avg=114.11, stdev=100.14 00:27:28.515 lat (msec): min=2, max=562, avg=115.78, stdev=101.49 00:27:28.515 clat percentiles (msec): 00:27:28.515 | 1.00th=[ 19], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 38], 00:27:28.515 | 30.00th=[ 44], 40.00th=[ 59], 50.00th=[ 85], 60.00th=[ 102], 00:27:28.515 | 70.00th=[ 124], 80.00th=[ 163], 90.00th=[ 264], 95.00th=[ 355], 00:27:28.515 | 99.00th=[ 447], 99.50th=[ 451], 99.90th=[ 514], 99.95th=[ 542], 00:27:28.515 | 99.99th=[ 567] 00:27:28.515 bw ( KiB/s): min=34816, max=421376, per=13.31%, avg=142464.00, stdev=106255.00, samples=20 00:27:28.515 iops : min= 136, max= 1646, avg=556.50, stdev=415.06, samples=20 00:27:28.515 lat (msec) : 4=0.05%, 10=0.20%, 20=1.19%, 50=34.35%, 100=22.78% 00:27:28.515 lat (msec) : 250=30.26%, 500=11.05%, 750=0.12% 00:27:28.515 cpu : usr=1.17%, sys=1.85%, ctx=1815, majf=0, minf=1 00:27:28.515 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:27:28.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.515 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.1%, >=64=0.0% 00:27:28.515 issued rwts: total=0,5628,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:28.515 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:28.515 job8: (groupid=0, jobs=1): err= 0: pid=439548: Mon Dec 16 12:48:54 2024 00:27:28.515 write: IOPS=327, BW=81.8MiB/s (85.8MB/s)(834MiB/10194msec); 0 zone resets 00:27:28.515 slat (usec): min=24, max=76900, avg=2281.87, stdev=6883.60 00:27:28.515 clat (usec): min=1316, max=689030, avg=193142.67, stdev=155290.66 00:27:28.516 lat (usec): min=1478, max=689092, avg=195424.55, stdev=157381.02 00:27:28.516 clat percentiles (msec): 00:27:28.516 | 1.00th=[ 4], 5.00th=[ 9], 10.00th=[ 26], 20.00th=[ 52], 00:27:28.516 | 30.00th=[ 83], 40.00th=[ 140], 50.00th=[ 163], 60.00th=[ 213], 00:27:28.516 | 70.00th=[ 251], 80.00th=[ 288], 90.00th=[ 409], 95.00th=[ 558], 00:27:28.516 | 99.00th=[ 659], 99.50th=[ 659], 99.90th=[ 676], 99.95th=[ 684], 00:27:28.516 | 99.99th=[ 693] 00:27:28.516 bw ( KiB/s): min=22528, max=182272, per=7.83%, avg=83788.80, stdev=50153.52, samples=20 00:27:28.516 iops : min= 88, max= 712, avg=327.30, stdev=195.91, samples=20 00:27:28.516 lat (msec) : 2=0.15%, 4=1.17%, 10=4.29%, 20=2.94%, 50=10.91% 00:27:28.516 lat (msec) : 100=16.36%, 250=34.13%, 500=23.61%, 750=6.44% 00:27:28.516 cpu : usr=0.65%, sys=1.11%, ctx=1937, majf=0, minf=1 00:27:28.516 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:27:28.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.516 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:28.516 issued rwts: total=0,3337,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:28.516 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:28.516 job9: (groupid=0, jobs=1): err= 0: pid=439549: Mon Dec 16 12:48:54 2024 00:27:28.516 write: IOPS=266, BW=66.6MiB/s (69.8MB/s)(679MiB/10190msec); 0 zone resets 00:27:28.516 slat (usec): min=30, max=90994, avg=3151.92, stdev=7457.73 00:27:28.516 clat (msec): min=22, max=659, avg=237.03, stdev=149.64 00:27:28.516 lat (msec): min=22, max=665, avg=240.18, stdev=151.21 00:27:28.516 clat percentiles (msec): 00:27:28.516 | 1.00th=[ 35], 5.00th=[ 37], 10.00th=[ 63], 20.00th=[ 111], 00:27:28.516 | 30.00th=[ 153], 40.00th=[ 163], 50.00th=[ 207], 60.00th=[ 251], 00:27:28.516 | 70.00th=[ 275], 80.00th=[ 342], 90.00th=[ 477], 95.00th=[ 567], 00:27:28.516 | 99.00th=[ 609], 99.50th=[ 625], 99.90th=[ 651], 99.95th=[ 651], 00:27:28.516 | 99.99th=[ 659] 00:27:28.516 bw ( KiB/s): min=26624, max=233472, per=6.34%, avg=67872.95, stdev=47048.84, samples=20 00:27:28.516 iops : min= 104, max= 912, avg=265.10, stdev=183.78, samples=20 00:27:28.516 lat (msec) : 50=8.55%, 100=6.74%, 250=44.51%, 500=31.65%, 750=8.55% 00:27:28.516 cpu : usr=0.70%, sys=0.94%, ctx=824, majf=0, minf=1 00:27:28.516 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:27:28.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.516 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:28.516 issued rwts: total=0,2714,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:28.516 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:28.516 job10: (groupid=0, jobs=1): err= 0: pid=439550: Mon Dec 16 12:48:54 2024 00:27:28.516 write: IOPS=334, BW=83.6MiB/s (87.7MB/s)(845MiB/10098msec); 0 zone resets 00:27:28.516 slat (usec): min=23, max=232862, avg=1724.15, stdev=7746.06 00:27:28.516 clat (usec): min=1014, max=746228, avg=189102.52, stdev=169307.73 
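This randwrite pass was launched through scripts/fio-wrapper, and the [global] section printed at its head (thread, invalidate, rw, time_based, runtime, ioengine, direct, bs, iodepth, norandommap, numjobs) is the complete job-wide configuration. Collapsed into a standalone job file — a sketch assuming the wrapper-generated file matches what it echoed; only job0 of the eleven per-device sections is shown, with the filename taken from the [job0] line above — it comes out as:

# Sketch: rebuild the wrapper-generated fio job file from the logged [global] section.
cat > nvmf-randwrite.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=262144
iodepth=64
norandommap=1
numjobs=1

[job0]
filename=/dev/nvme0n1
EOF
fio nvmf-randwrite.fio

Adding one [jobN] section per /dev/nvme*n1 device reproduces the full 11-thread layout the harness ran here.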
00:27:28.516 lat (usec): min=1060, max=746278, avg=190826.67, stdev=171147.78 00:27:28.516 clat percentiles (msec): 00:27:28.516 | 1.00th=[ 3], 5.00th=[ 11], 10.00th=[ 22], 20.00th=[ 39], 00:27:28.516 | 30.00th=[ 50], 40.00th=[ 85], 50.00th=[ 157], 60.00th=[ 209], 00:27:28.516 | 70.00th=[ 255], 80.00th=[ 317], 90.00th=[ 451], 95.00th=[ 558], 00:27:28.516 | 99.00th=[ 634], 99.50th=[ 667], 99.90th=[ 726], 99.95th=[ 743], 00:27:28.516 | 99.99th=[ 743] 00:27:28.516 bw ( KiB/s): min=26624, max=172032, per=7.93%, avg=84869.10, stdev=47271.77, samples=20 00:27:28.516 iops : min= 104, max= 672, avg=331.50, stdev=184.67, samples=20 00:27:28.516 lat (msec) : 2=0.50%, 4=1.81%, 10=2.34%, 20=4.47%, 50=21.46% 00:27:28.516 lat (msec) : 100=12.64%, 250=25.61%, 500=23.65%, 750=7.52% 00:27:28.516 cpu : usr=0.79%, sys=1.15%, ctx=2406, majf=0, minf=1 00:27:28.516 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.1% 00:27:28.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.516 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:28.516 issued rwts: total=0,3378,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:28.516 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:28.516 00:27:28.516 Run status group 0 (all jobs): 00:27:28.516 WRITE: bw=1045MiB/s (1096MB/s), 59.9MiB/s-138MiB/s (62.8MB/s-145MB/s), io=10.4GiB (11.2GB), run=10044-10194msec 00:27:28.516 00:27:28.516 Disk stats (read/write): 00:27:28.516 nvme0n1: ios=49/5594, merge=0/0, ticks=48/1214068, in_queue=1214116, util=95.16% 00:27:28.516 nvme10n1: ios=43/6829, merge=0/0, ticks=1802/1244761, in_queue=1246563, util=100.00% 00:27:28.516 nvme11n1: ios=46/4874, merge=0/0, ticks=4406/1236569, in_queue=1240975, util=100.00% 00:27:28.516 nvme2n1: ios=42/9599, merge=0/0, ticks=987/1226381, in_queue=1227368, util=100.00% 00:27:28.516 nvme3n1: ios=0/6829, merge=0/0, ticks=0/1215872, in_queue=1215872, util=96.09% 00:27:28.516 nvme4n1: ios=0/9645, merge=0/0, ticks=0/1219483, in_queue=1219483, util=97.00% 00:27:28.516 nvme5n1: ios=37/10824, merge=0/0, ticks=565/1204069, in_queue=1204634, util=100.00% 00:27:28.516 nvme6n1: ios=0/11248, merge=0/0, ticks=0/1237768, in_queue=1237768, util=97.71% 00:27:28.516 nvme7n1: ios=0/6662, merge=0/0, ticks=0/1246704, in_queue=1246704, util=98.78% 00:27:28.516 nvme8n1: ios=0/5421, merge=0/0, ticks=0/1242366, in_queue=1242366, util=98.95% 00:27:28.516 nvme9n1: ios=32/6544, merge=0/0, ticks=424/1223356, in_queue=1223780, util=100.00% 00:27:28.516 12:48:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:27:28.516 12:48:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:27:28.516 12:48:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:28.516 12:48:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:28.516 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:28.516 12:48:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:27:28.516 12:48:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:28.516 12:48:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:28.516 12:48:54 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:27:28.516 12:48:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:28.516 12:48:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:27:28.516 12:48:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:28.516 12:48:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:28.516 12:48:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.516 12:48:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:28.516 12:48:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.516 12:48:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:28.516 12:48:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:27:28.775 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:27:28.775 12:48:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:27:28.775 12:48:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:28.775 12:48:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:28.775 12:48:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:27:28.775 12:48:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:28.775 12:48:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:27:28.775 12:48:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:28.775 12:48:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:28.775 12:48:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.775 12:48:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:28.775 12:48:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.775 12:48:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:28.775 12:48:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:27:29.034 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:27:29.034 12:48:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:27:29.034 12:48:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:29.034 12:48:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:29.034 12:48:55 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:27:29.034 12:48:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:29.034 12:48:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:27:29.292 12:48:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:29.292 12:48:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:27:29.292 12:48:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.292 12:48:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:29.292 12:48:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.292 12:48:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:29.292 12:48:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:27:29.550 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:27:29.551 12:48:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:27:29.551 12:48:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:29.551 12:48:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:29.551 12:48:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:27:29.551 12:48:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:29.551 12:48:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:27:29.551 12:48:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:29.551 12:48:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:27:29.551 12:48:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.551 12:48:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:29.551 12:48:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.551 12:48:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:29.551 12:48:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:27:29.809 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:27:29.809 12:48:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:27:29.809 12:48:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:29.809 12:48:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:29.809 12:48:55 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:27:29.810 12:48:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:29.810 12:48:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:27:29.810 12:48:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:29.810 12:48:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:27:29.810 12:48:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.810 12:48:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:29.810 12:48:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.810 12:48:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:29.810 12:48:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:27:30.069 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:27:30.069 12:48:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:27:30.069 12:48:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:30.069 12:48:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:30.069 12:48:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:27:30.069 12:48:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:27:30.069 12:48:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:30.069 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:30.069 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:27:30.069 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.069 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:30.069 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.069 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:30.069 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:27:30.328 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:27:30.328 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:27:30.328 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:30.328 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:30.328 12:48:56 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:27:30.328 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:30.328 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:27:30.328 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:30.328 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:27:30.328 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.328 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:30.328 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.328 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:30.328 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:27:30.328 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:27:30.328 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:27:30.328 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:30.328 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:27:30.328 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:30.328 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:30.328 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:27:30.587 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:30.587 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:27:30.587 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.587 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:30.587 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.587 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:30.587 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:27:30.587 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:27:30.587 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:27:30.587 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:30.587 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:30.587 12:48:56 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:27:30.587 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:30.587 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:27:30.587 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:30.587 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:27:30.587 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.587 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:30.587 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.587 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:30.587 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:27:30.846 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:27:30.846 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:27:30.846 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:30.846 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:30.846 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:27:30.846 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:30.846 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:27:30.846 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:30.846 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:27:30.846 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.846 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:30.846 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.846 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:30.846 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:27:30.846 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:27:30.846 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:27:30.846 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:30.846 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:30.846 
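The xtrace lines around this point all come from the same teardown loop in target/multiconnection.sh (script lines @37-@40 in the trace), applied to cnode1 through cnode11 in turn. Reconstructed from the trace — a sketch, not the verbatim script — the loop is:

# Per-subsystem teardown, as traced above for each of the 11 connections:
for i in $(seq 1 $NVMF_SUBSYS); do
    nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"            # drop the host-side connection
    waitforserial_disconnect "SPDK${i}"                           # poll lsblk -o NAME,SERIAL until serial SPDK$i is gone
    rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}" # remove the subsystem from the target
done

waitforserial_disconnect and rpc_cmd are harness helpers from the test common scripts, as the common/autotest_common.sh function tags in the trace show.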
12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:27:30.846 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:27:30.846 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:30.846 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:30.846 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:27:30.846 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.846 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:30.846 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.846 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:27:30.846 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:27:30.846 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:27:30.846 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # nvmfcleanup 00:27:30.846 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:27:30.846 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:30.846 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:27:30.846 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:30.846 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:30.846 rmmod nvme_tcp 00:27:30.846 rmmod nvme_fabrics 00:27:30.846 rmmod nvme_keyring 00:27:31.106 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:31.106 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:27:31.106 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:27:31.106 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@513 -- # '[' -n 431598 ']' 00:27:31.106 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@514 -- # killprocess 431598 00:27:31.106 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@950 -- # '[' -z 431598 ']' 00:27:31.106 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # kill -0 431598 00:27:31.106 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # uname 00:27:31.106 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:31.106 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 431598 00:27:31.106 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:31.106 12:48:56 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:31.106 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 431598' 00:27:31.106 killing process with pid 431598 00:27:31.106 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@969 -- # kill 431598 00:27:31.106 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@974 -- # wait 431598 00:27:31.365 12:48:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:27:31.365 12:48:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:27:31.365 12:48:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:27:31.365 12:48:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:27:31.365 12:48:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # iptables-save 00:27:31.365 12:48:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:27:31.365 12:48:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # iptables-restore 00:27:31.365 12:48:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:31.365 12:48:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:31.365 12:48:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:31.365 12:48:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:31.365 12:48:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:33.901 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:33.901 00:27:33.901 real 1m10.826s 00:27:33.901 user 4m17.189s 00:27:33.901 sys 0m16.588s 00:27:33.901 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:33.901 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:33.901 ************************************ 00:27:33.901 END TEST nvmf_multiconnection 00:27:33.901 ************************************ 00:27:33.901 12:48:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:27:33.901 12:48:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:33.902 ************************************ 00:27:33.902 START TEST nvmf_initiator_timeout 00:27:33.902 ************************************ 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:27:33.902 * Looking for test storage... 
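The multiconnection teardown traced above (target/multiconnection.sh lines 37-40) follows one pattern per subsystem: disconnect the initiator, wait until the subsystem's serial number disappears from lsblk, then delete the subsystem over RPC. A minimal standalone sketch of that loop, assuming the same cnode<i>/SPDK<i> naming as this run and substituting scripts/rpc.py for the harness's rpc_cmd wrapper (the real waitforserial_disconnect helper also bounds its retries):

    # Tear down $NVMF_SUBSYS subsystems named nqn.2016-06.io.spdk:cnode<i>
    # whose namespaces carry serial numbers SPDK<i>, as in the trace above.
    for i in $(seq 1 "$NVMF_SUBSYS"); do
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"
        # Poll until no block device reports serial SPDK<i> any more.
        while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK$i"; do
            sleep 1
        done
        scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
    done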
00:27:33.902 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # lcov --version 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:33.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:33.902 --rc genhtml_branch_coverage=1 00:27:33.902 --rc genhtml_function_coverage=1 00:27:33.902 --rc genhtml_legend=1 00:27:33.902 --rc geninfo_all_blocks=1 00:27:33.902 --rc geninfo_unexecuted_blocks=1 00:27:33.902 00:27:33.902 ' 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:33.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:33.902 --rc genhtml_branch_coverage=1 00:27:33.902 --rc genhtml_function_coverage=1 00:27:33.902 --rc genhtml_legend=1 00:27:33.902 --rc geninfo_all_blocks=1 00:27:33.902 --rc geninfo_unexecuted_blocks=1 00:27:33.902 00:27:33.902 ' 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:33.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:33.902 --rc genhtml_branch_coverage=1 00:27:33.902 --rc genhtml_function_coverage=1 00:27:33.902 --rc genhtml_legend=1 00:27:33.902 --rc geninfo_all_blocks=1 00:27:33.902 --rc geninfo_unexecuted_blocks=1 00:27:33.902 00:27:33.902 ' 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:33.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:33.902 --rc genhtml_branch_coverage=1 00:27:33.902 --rc genhtml_function_coverage=1 00:27:33.902 --rc genhtml_legend=1 00:27:33.902 --rc geninfo_all_blocks=1 00:27:33.902 --rc geninfo_unexecuted_blocks=1 00:27:33.902 00:27:33.902 ' 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:33.902 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:33.903 12:48:59 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:33.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:33.903 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:33.903 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:33.903 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:33.903 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:33.903 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:33.903 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:27:33.903 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:27:33.903 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:33.903 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@472 -- # prepare_net_devs 00:27:33.903 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@434 -- # local -g is_hw=no 00:27:33.903 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@436 -- # remove_spdk_ns 00:27:33.903 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:33.903 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:33.903 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:33.903 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:27:33.903 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:27:33.903 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:27:33.903 12:48:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:40.476 12:49:05 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:40.476 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:40.476 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ up == up ]] 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:40.476 Found net devices under 0000:af:00.0: cvl_0_0 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ up == up ]] 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # (( 
1 == 0 )) 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:40.476 Found net devices under 0000:af:00.1: cvl_0_1 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # is_hw=yes 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:40.476 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:40.476 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:40.476 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.431 ms 00:27:40.476 00:27:40.476 --- 10.0.0.2 ping statistics --- 00:27:40.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:40.477 rtt min/avg/max/mdev = 0.431/0.431/0.431/0.000 ms 00:27:40.477 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:40.477 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:40.477 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:27:40.477 00:27:40.477 --- 10.0.0.1 ping statistics --- 00:27:40.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:40.477 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:27:40.477 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:40.477 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # return 0 00:27:40.477 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:27:40.477 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:40.477 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:27:40.477 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:27:40.477 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:40.477 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:27:40.477 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:27:40.477 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:27:40.477 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:27:40.477 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:40.477 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:40.477 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@505 -- # nvmfpid=444642 00:27:40.477 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:40.477 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@506 -- # 
waitforlisten 444642 00:27:40.477 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # '[' -z 444642 ']' 00:27:40.477 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:40.477 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:40.477 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:40.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:40.477 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:40.477 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:40.477 [2024-12-16 12:49:05.752913] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:27:40.477 [2024-12-16 12:49:05.752956] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:40.477 [2024-12-16 12:49:05.825386] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:40.477 [2024-12-16 12:49:05.866175] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:40.477 [2024-12-16 12:49:05.866216] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:40.477 [2024-12-16 12:49:05.866223] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:40.477 [2024-12-16 12:49:05.866229] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:40.477 [2024-12-16 12:49:05.866233] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
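Before the nvmf_tgt instance above (pid 444642) was launched, nvmf/common.sh moved one port of the E810 pair into a private network namespace so that target (10.0.0.2) and initiator (10.0.0.1) exchange real NVMe/TCP traffic over the wire. Condensed from the steps replayed in the trace, with this host's interface names (cvl_0_0/cvl_0_1) kept as-is; run as root and substitute your own ports:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP listener port; the SPDK_NVMF comment lets the
    # cleanup path drop the rule later via iptables-save | grep -v SPDK_NVMF | iptables-restore.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                                   # sanity-check both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target itself then runs entirely inside the namespace, exactly as the trace shows: ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF.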
00:27:40.477 [2024-12-16 12:49:05.866292] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:40.477 [2024-12-16 12:49:05.866315] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:27:40.477 [2024-12-16 12:49:05.866406] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:40.477 [2024-12-16 12:49:05.866407] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:27:40.477 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:40.477 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # return 0 00:27:40.477 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:27:40.477 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:40.477 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:40.477 12:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:40.477 12:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:27:40.477 12:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:40.477 12:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.477 12:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:40.477 Malloc0 00:27:40.477 12:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.477 12:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:27:40.477 12:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.477 12:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:40.477 Delay0 00:27:40.477 12:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.477 12:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:40.477 12:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.477 12:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:40.477 [2024-12-16 12:49:06.044728] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:40.477 12:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.477 12:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:27:40.477 12:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.477 12:49:06 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:40.477 12:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.477 12:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:40.477 12:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.477 12:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:40.477 12:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.477 12:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:40.477 12:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.477 12:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:40.477 [2024-12-16 12:49:06.070021] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:40.477 12:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.477 12:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:41.414 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:27:41.414 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:27:41.414 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:41.414 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:41.414 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:27:43.319 12:49:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:43.319 12:49:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:43.319 12:49:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:27:43.319 12:49:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:43.319 12:49:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:43.319 12:49:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:27:43.319 12:49:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=445327 00:27:43.319 12:49:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 
-t write -r 60 -v 00:27:43.319 12:49:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:27:43.319 [global] 00:27:43.319 thread=1 00:27:43.319 invalidate=1 00:27:43.319 rw=write 00:27:43.319 time_based=1 00:27:43.319 runtime=60 00:27:43.319 ioengine=libaio 00:27:43.319 direct=1 00:27:43.319 bs=4096 00:27:43.319 iodepth=1 00:27:43.319 norandommap=0 00:27:43.319 numjobs=1 00:27:43.319 00:27:43.319 verify_dump=1 00:27:43.319 verify_backlog=512 00:27:43.319 verify_state_save=0 00:27:43.319 do_verify=1 00:27:43.319 verify=crc32c-intel 00:27:43.319 [job0] 00:27:43.319 filename=/dev/nvme0n1 00:27:43.319 Could not set queue depth (nvme0n1) 00:27:43.577 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:43.577 fio-3.35 00:27:43.577 Starting 1 thread 00:27:46.864 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:27:46.864 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.864 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:46.864 true 00:27:46.864 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.864 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:27:46.864 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.864 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:46.864 true 00:27:46.864 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.864 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:27:46.864 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.864 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:46.864 true 00:27:46.864 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.864 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:27:46.864 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.865 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:46.865 true 00:27:46.865 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.865 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:27:49.400 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:27:49.400 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.400 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@10 -- # set +x 00:27:49.400 true 00:27:49.400 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.400 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:27:49.400 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.400 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:49.400 true 00:27:49.400 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.400 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:27:49.400 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.400 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:49.400 true 00:27:49.400 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.400 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:27:49.400 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.400 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:49.400 true 00:27:49.400 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.400 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:27:49.400 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 445327 00:28:45.637 00:28:45.637 job0: (groupid=0, jobs=1): err= 0: pid=445444: Mon Dec 16 12:50:09 2024 00:28:45.637 read: IOPS=280, BW=1123KiB/s (1150kB/s)(65.8MiB/60009msec) 00:28:45.637 slat (nsec): min=6685, max=40898, avg=8377.63, stdev=2328.08 00:28:45.637 clat (usec): min=180, max=41629k, avg=3348.03, stdev=320700.16 00:28:45.637 lat (usec): min=187, max=41629k, avg=3356.41, stdev=320700.29 00:28:45.637 clat percentiles (usec): 00:28:45.637 | 1.00th=[ 202], 5.00th=[ 212], 10.00th=[ 217], 20.00th=[ 225], 00:28:45.637 | 30.00th=[ 229], 40.00th=[ 233], 50.00th=[ 237], 60.00th=[ 243], 00:28:45.637 | 70.00th=[ 247], 80.00th=[ 251], 90.00th=[ 260], 95.00th=[ 265], 00:28:45.637 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:28:45.637 | 99.99th=[44827] 00:28:45.637 write: IOPS=281, BW=1126KiB/s (1153kB/s)(66.0MiB/60009msec); 0 zone resets 00:28:45.637 slat (usec): min=9, max=26674, avg=14.39, stdev=223.92 00:28:45.637 clat (usec): min=137, max=391, avg=183.24, stdev=22.50 00:28:45.637 lat (usec): min=147, max=27012, avg=197.64, stdev=226.76 00:28:45.637 clat percentiles (usec): 00:28:45.637 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 167], 00:28:45.637 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 184], 00:28:45.637 | 70.00th=[ 190], 80.00th=[ 196], 90.00th=[ 208], 95.00th=[ 225], 00:28:45.637 | 99.00th=[ 281], 99.50th=[ 289], 99.90th=[ 306], 99.95th=[ 314], 00:28:45.637 | 99.99th=[ 363] 00:28:45.637 bw ( KiB/s): 
min= 4096, max=10400, per=100.00%, avg=8448.00, stdev=1843.51, samples=16 00:28:45.637 iops : min= 1024, max= 2600, avg=2112.00, stdev=460.88, samples=16 00:28:45.637 lat (usec) : 250=87.29%, 500=11.91%, 750=0.01% 00:28:45.637 lat (msec) : 2=0.01%, 50=0.78%, >=2000=0.01% 00:28:45.637 cpu : usr=0.50%, sys=0.75%, ctx=33756, majf=0, minf=1 00:28:45.637 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:45.637 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:45.637 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:45.637 issued rwts: total=16853,16896,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:45.637 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:45.637 00:28:45.637 Run status group 0 (all jobs): 00:28:45.637 READ: bw=1123KiB/s (1150kB/s), 1123KiB/s-1123KiB/s (1150kB/s-1150kB/s), io=65.8MiB (69.0MB), run=60009-60009msec 00:28:45.637 WRITE: bw=1126KiB/s (1153kB/s), 1126KiB/s-1126KiB/s (1153kB/s-1153kB/s), io=66.0MiB (69.2MB), run=60009-60009msec 00:28:45.637 00:28:45.637 Disk stats (read/write): 00:28:45.637 nvme0n1: ios=16948/16896, merge=0/0, ticks=16092/2900, in_queue=18992, util=99.88% 00:28:45.637 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:45.637 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:45.637 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:28:45.637 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:28:45.637 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:45.637 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:45.637 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:45.637 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:45.637 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:28:45.637 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:28:45.637 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:28:45.637 nvmf hotplug test: fio successful as expected 00:28:45.637 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:45.637 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.638 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:45.638 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.638 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:28:45.638 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 
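The pass/fail logic of this test is visible in the RPC sequence above: a delay bdev (Delay0) is stacked on the 64 MiB malloc bdev, fio writes through it for 60 s, and partway through the delay latencies are inflated from 30 to 31000000 (p99_write to 310000000) and then restored, so outstanding I/O briefly outlives the initiator's timeout without failing the job. A sketch of just that toggle, assuming the bdev_delay latency arguments are in microseconds (i.e. 30 us inflated to 31 s / 310 s) and again substituting scripts/rpc.py for the harness's rpc_cmd:

    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
    # ... fio starts against the exported Delay0 namespace in the background ...
    for lat in avg_read avg_write p99_read; do
        scripts/rpc.py bdev_delay_update_latency Delay0 "$lat" 31000000
    done
    scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 310000000
    sleep 3
    for lat in avg_read avg_write p99_read p99_write; do
        scripts/rpc.py bdev_delay_update_latency Delay0 "$lat" 30
    done
    wait "$fio_pid" && echo 'nvmf hotplug test: fio successful as expected'

The fio summary is self-consistent with that setup: 16853 reads x 4096 B is about 65.8 MiB, which over 60.009 s gives roughly 1123 KiB/s and 280 IOPS, matching the READ line, while the 41.6 s clat maximum (41629k usec) is consistent with I/O held back by the injected multi-second delay.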
00:28:45.638 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:28:45.638 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # nvmfcleanup 00:28:45.638 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:28:45.638 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:45.638 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:28:45.638 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:45.638 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:45.638 rmmod nvme_tcp 00:28:45.638 rmmod nvme_fabrics 00:28:45.638 rmmod nvme_keyring 00:28:45.638 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:45.638 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:28:45.638 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:28:45.638 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@513 -- # '[' -n 444642 ']' 00:28:45.638 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@514 -- # killprocess 444642 00:28:45.638 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # '[' -z 444642 ']' 00:28:45.638 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # kill -0 444642 00:28:45.638 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # uname 00:28:45.638 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:45.638 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 444642 00:28:45.638 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:45.638 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:45.638 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 444642' 00:28:45.638 killing process with pid 444642 00:28:45.638 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@969 -- # kill 444642 00:28:45.638 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@974 -- # wait 444642 00:28:45.638 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:28:45.638 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:28:45.638 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:28:45.638 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:28:45.638 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # iptables-save 00:28:45.638 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:28:45.638 12:50:10 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # iptables-restore 00:28:45.638 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:45.638 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:45.638 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:45.638 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:45.638 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:46.576 12:50:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:46.576 00:28:46.576 real 1m12.801s 00:28:46.576 user 4m22.529s 00:28:46.576 sys 0m7.185s 00:28:46.576 12:50:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:46.576 12:50:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:46.576 ************************************ 00:28:46.576 END TEST nvmf_initiator_timeout 00:28:46.576 ************************************ 00:28:46.576 12:50:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:28:46.576 12:50:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:28:46.576 12:50:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:28:46.576 12:50:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:28:46.576 12:50:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:51.857 12:50:17 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:51.857 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:51.857 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 
00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:51.857 Found net devices under 0000:af:00.0: cvl_0_0 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:51.857 Found net devices under 0000:af:00.1: cvl_0_1 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:51.857 ************************************ 00:28:51.857 START TEST nvmf_perf_adq 00:28:51.857 ************************************ 00:28:51.857 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:52.119 * Looking for test storage... 
00:28:52.119 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:52.119 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:52.119 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lcov --version 00:28:52.119 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:52.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:52.119 --rc genhtml_branch_coverage=1 00:28:52.119 --rc genhtml_function_coverage=1 00:28:52.119 --rc genhtml_legend=1 00:28:52.119 --rc geninfo_all_blocks=1 00:28:52.119 --rc geninfo_unexecuted_blocks=1 00:28:52.119 00:28:52.119 ' 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:52.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:52.119 --rc genhtml_branch_coverage=1 00:28:52.119 --rc genhtml_function_coverage=1 00:28:52.119 --rc genhtml_legend=1 00:28:52.119 --rc geninfo_all_blocks=1 00:28:52.119 --rc geninfo_unexecuted_blocks=1 00:28:52.119 00:28:52.119 ' 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:52.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:52.119 --rc genhtml_branch_coverage=1 00:28:52.119 --rc genhtml_function_coverage=1 00:28:52.119 --rc genhtml_legend=1 00:28:52.119 --rc geninfo_all_blocks=1 00:28:52.119 --rc geninfo_unexecuted_blocks=1 00:28:52.119 00:28:52.119 ' 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:52.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:52.119 --rc genhtml_branch_coverage=1 00:28:52.119 --rc genhtml_function_coverage=1 00:28:52.119 --rc genhtml_legend=1 00:28:52.119 --rc geninfo_all_blocks=1 00:28:52.119 --rc geninfo_unexecuted_blocks=1 00:28:52.119 00:28:52.119 ' 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 
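[annotation] The `cmp_versions` trace above splits each version string on `.`, `-`, and `:` (via `IFS=.-:`) and compares the components numerically, padding the shorter version with zeros. A self-contained sketch of the same idea; the function name is mine, and scripts/common.sh handles more operators and edge cases than this:

# Succeeds when $1 is strictly older than $2, comparing dotted components
# numerically. Sketch under the assumptions stated above.
version_lt() {
    local IFS=.-:
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov predates 2.x"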
00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:52.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:52.120 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:52.120 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:52.120 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:52.120 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:52.120 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:52.120 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:52.120 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:52.120 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:52.120 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:28:52.120 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:52.120 12:50:18 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:58.692 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:58.692 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:58.692 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:58.692 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:58.692 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:58.692 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:58.692 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:58.692 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:58.692 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:58.692 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:58.692 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:58.692 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:58.692 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:58.692 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:58.692 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:58.692 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:58.692 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:58.692 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:58.692 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:58.692 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:58.692 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:58.692 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:58.692 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:58.692 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:58.692 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:58.692 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:58.692 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:28:58.692 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:28:58.693 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:28:58.693 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:28:58.693 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:28:58.693 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:28:58.693 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:58.693 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:58.693 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:58.693 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:58.693 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:58.693 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:58.693 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:58.693 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:58.693 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:58.693 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:58.693 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:58.693 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:58.693 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:58.693 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:58.693 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:58.693 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:58.693 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:28:58.693 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:28:58.693 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:28:58.693 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:58.693 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:58.693 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:58.693 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:58.693 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:58.693 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:58.693 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:58.693 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:58.693 Found net devices under 0000:af:00.0: cvl_0_0 00:28:58.693 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:58.693 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:58.693 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:58.693 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:58.693 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:58.693 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:58.693 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:58.693 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:58.693 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:58.693 Found net devices under 0000:af:00.1: cvl_0_1 00:28:58.693 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:58.693 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:28:58.693 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:58.693 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:28:58.693 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:28:58.693 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:28:58.693 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:58.693 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:58.693 12:50:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:29:02.884 12:50:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # prepare_net_devs 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@434 -- # local -g is_hw=no 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # remove_spdk_ns 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:29:08.161 12:50:33 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:08.161 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:08.161 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:29:08.161 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:08.162 Found net devices under 0000:af:00.0: cvl_0_0 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:08.162 Found net devices under 0000:af:00.1: cvl_0_1 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # is_hw=yes 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev 
cvl_0_1 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:08.162 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:08.162 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.541 ms 00:29:08.162 00:29:08.162 --- 10.0.0.2 ping statistics --- 00:29:08.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:08.162 rtt min/avg/max/mdev = 0.541/0.541/0.541/0.000 ms 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:08.162 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:08.162 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:29:08.162 00:29:08.162 --- 10.0.0.1 ping statistics --- 00:29:08.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:08.162 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # return 0 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # nvmfpid=463041 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # waitforlisten 463041 
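[annotation] The `nvmf_tcp_init` sequence traced above isolates the target-side port in its own network namespace so the initiator and target TCP stacks stay separate on one host, then verifies reachability in both directions with ping. A condensed sketch of that flow; interface names and addresses mirror this log and will differ on other hardware:

# Sketch of the namespace setup traced above (run as root).
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"              # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port on the initiator-facing interface, as the log does.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                           # root ns -> namespace
ip netns exec "$NS" ping -c 1 10.0.0.1       # namespace -> root ns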
00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 463041 ']' 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:08.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:08.162 [2024-12-16 12:50:33.693397] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:29:08.162 [2024-12-16 12:50:33.693444] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:08.162 [2024-12-16 12:50:33.766650] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:08.162 [2024-12-16 12:50:33.808805] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:08.162 [2024-12-16 12:50:33.808841] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:08.162 [2024-12-16 12:50:33.808848] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:08.162 [2024-12-16 12:50:33.808854] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:08.162 [2024-12-16 12:50:33.808859] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
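[annotation] Because the target here is launched with --wait-for-rpc, `waitforlisten` above blocks until the app answers on its UNIX-domain RPC socket before any configuration is sent. A rough standalone equivalent, assuming rpc.py from the SPDK tree; the retry budget and poll interval are illustrative:

# Sketch: start the target, then poll its RPC socket until it responds.
RPC_SOCK=/var/tmp/spdk.sock
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
pid=$!
for _ in $(seq 1 100); do
    ./scripts/rpc.py -s "$RPC_SOCK" -t 1 rpc_get_methods &> /dev/null && break
    kill -0 "$pid" || exit 1   # bail out if the target died during startup
    sleep 0.1
done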
00:29:08.162 [2024-12-16 12:50:33.811134] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:08.162 [2024-12-16 12:50:33.811163] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:29:08.162 [2024-12-16 12:50:33.811293] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:08.162 [2024-12-16 12:50:33.811294] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.162 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:29:08.163 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.163 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:08.163 12:50:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.163 12:50:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:29:08.163 12:50:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.163 12:50:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:08.163 [2024-12-16 12:50:34.009352] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:08.163 12:50:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:29:08.163 12:50:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:08.163 12:50:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.163 12:50:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:08.163 Malloc1 00:29:08.163 12:50:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.163 12:50:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:08.163 12:50:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.163 12:50:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:08.163 12:50:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.163 12:50:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:08.163 12:50:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.163 12:50:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:08.163 12:50:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.163 12:50:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:08.163 12:50:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.163 12:50:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:08.163 [2024-12-16 12:50:34.056411] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:08.163 12:50:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.163 12:50:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=463171 00:29:08.163 12:50:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:29:08.163 12:50:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:10.068 12:50:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:29:10.068 12:50:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.068 12:50:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:10.068 12:50:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.068 12:50:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:29:10.068 "tick_rate": 2100000000, 00:29:10.068 "poll_groups": [ 00:29:10.068 { 00:29:10.068 "name": "nvmf_tgt_poll_group_000", 00:29:10.068 "admin_qpairs": 1, 00:29:10.068 "io_qpairs": 1, 00:29:10.068 "current_admin_qpairs": 1, 00:29:10.068 "current_io_qpairs": 1, 00:29:10.068 "pending_bdev_io": 0, 00:29:10.068 
"completed_nvme_io": 19400, 00:29:10.068 "transports": [ 00:29:10.068 { 00:29:10.068 "trtype": "TCP" 00:29:10.068 } 00:29:10.068 ] 00:29:10.068 }, 00:29:10.068 { 00:29:10.068 "name": "nvmf_tgt_poll_group_001", 00:29:10.068 "admin_qpairs": 0, 00:29:10.068 "io_qpairs": 1, 00:29:10.068 "current_admin_qpairs": 0, 00:29:10.068 "current_io_qpairs": 1, 00:29:10.068 "pending_bdev_io": 0, 00:29:10.068 "completed_nvme_io": 19565, 00:29:10.068 "transports": [ 00:29:10.068 { 00:29:10.068 "trtype": "TCP" 00:29:10.068 } 00:29:10.068 ] 00:29:10.068 }, 00:29:10.068 { 00:29:10.068 "name": "nvmf_tgt_poll_group_002", 00:29:10.068 "admin_qpairs": 0, 00:29:10.068 "io_qpairs": 1, 00:29:10.068 "current_admin_qpairs": 0, 00:29:10.068 "current_io_qpairs": 1, 00:29:10.068 "pending_bdev_io": 0, 00:29:10.068 "completed_nvme_io": 19225, 00:29:10.068 "transports": [ 00:29:10.068 { 00:29:10.068 "trtype": "TCP" 00:29:10.068 } 00:29:10.068 ] 00:29:10.068 }, 00:29:10.068 { 00:29:10.068 "name": "nvmf_tgt_poll_group_003", 00:29:10.068 "admin_qpairs": 0, 00:29:10.068 "io_qpairs": 1, 00:29:10.068 "current_admin_qpairs": 0, 00:29:10.068 "current_io_qpairs": 1, 00:29:10.068 "pending_bdev_io": 0, 00:29:10.068 "completed_nvme_io": 19504, 00:29:10.068 "transports": [ 00:29:10.068 { 00:29:10.068 "trtype": "TCP" 00:29:10.068 } 00:29:10.068 ] 00:29:10.068 } 00:29:10.068 ] 00:29:10.068 }' 00:29:10.068 12:50:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:29:10.068 12:50:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:29:10.068 12:50:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:29:10.068 12:50:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:29:10.068 12:50:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 463171 00:29:18.188 Initializing NVMe Controllers 00:29:18.188 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:18.188 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:29:18.188 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:29:18.188 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:29:18.188 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:29:18.188 Initialization complete. Launching workers. 
00:29:18.188 ======================================================== 00:29:18.188 Latency(us) 00:29:18.188 Device Information : IOPS MiB/s Average min max 00:29:18.188 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10251.73 40.05 6242.35 2080.33 11208.25 00:29:18.188 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10509.73 41.05 6090.60 2339.55 10118.60 00:29:18.188 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10267.33 40.11 6234.54 2244.70 10534.70 00:29:18.188 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10320.03 40.31 6202.74 2272.70 10453.13 00:29:18.188 ======================================================== 00:29:18.188 Total : 41348.81 161.52 6191.96 2080.33 11208.25 00:29:18.188 00:29:18.188 12:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:29:18.188 12:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # nvmfcleanup 00:29:18.188 12:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:29:18.188 12:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:18.188 12:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:29:18.188 12:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:18.188 12:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:18.188 rmmod nvme_tcp 00:29:18.188 rmmod nvme_fabrics 00:29:18.448 rmmod nvme_keyring 00:29:18.448 12:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:18.448 12:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:29:18.448 12:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:29:18.448 12:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@513 -- # '[' -n 463041 ']' 00:29:18.448 12:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # killprocess 463041 00:29:18.448 12:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 463041 ']' 00:29:18.448 12:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 463041 00:29:18.448 12:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:29:18.448 12:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:18.448 12:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 463041 00:29:18.448 12:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:18.448 12:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:18.448 12:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 463041' 00:29:18.448 killing process with pid 463041 00:29:18.448 12:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 463041 00:29:18.448 12:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 463041 00:29:18.707 12:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:29:18.707 12:50:44 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:29:18.707 12:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:29:18.707 12:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:29:18.707 12:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-save 00:29:18.707 12:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:29:18.707 12:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-restore 00:29:18.707 12:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:18.707 12:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:18.707 12:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:18.707 12:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:18.707 12:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:20.615 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:20.615 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:29:20.615 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:29:20.615 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:29:21.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:29:24.531 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:29:29.807 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:29:29.807 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:29:29.807 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # prepare_net_devs 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@434 -- # local -g is_hw=no 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # remove_spdk_ns 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:29.808 12:50:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@359 -- # 
(( 2 == 0 )) 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:29.808 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:29.808 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:29.808 Found net devices under 0000:af:00.0: cvl_0_0 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:29.808 Found net devices under 0000:af:00.1: cvl_0_1 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # is_hw=yes 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:29.808 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:29.808 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:29.808 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=1.09 ms 00:29:29.808 00:29:29.808 --- 10.0.0.2 ping statistics --- 00:29:29.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:29.809 rtt min/avg/max/mdev = 1.085/1.085/1.085/0.000 ms 00:29:29.809 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:29.809 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:29.809 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:29:29.809 00:29:29.809 --- 10.0.0.1 ping statistics --- 00:29:29.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:29.809 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:29:29.809 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:29.809 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # return 0 00:29:29.809 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:29:29.809 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:29.809 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:29:29.809 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:29:29.809 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:29.809 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:29:29.809 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:29:29.809 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:29:29.809 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:29:29.809 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:29:29.809 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:29:29.809 net.core.busy_poll = 1 00:29:29.809 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:29:29.809 net.core.busy_read = 1 00:29:29.809 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:29:29.809 12:50:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:29:29.809 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:29:29.809 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:29:29.809 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:29:29.809 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:29:29.809 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:29:29.809 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:29.809 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:29.809 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # nvmfpid=466957 00:29:29.809 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # waitforlisten 466957 00:29:29.809 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:29.809 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 466957 ']' 00:29:29.809 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:29.809 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:29.809 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:29.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:29.809 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:29.809 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:30.068 [2024-12-16 12:50:55.890104] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:29:30.068 [2024-12-16 12:50:55.890153] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:30.068 [2024-12-16 12:50:55.962705] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:30.068 [2024-12-16 12:50:56.003531] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:30.068 [2024-12-16 12:50:56.003571] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
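The ADQ driver setup traced just above is the heart of this second pass: hardware TC offload is enabled on the E810 port, the ice driver's channel-pkt-inspect-optimize private flag is turned off, busy polling is enabled kernel-wide, and an offloaded mqprio qdisc plus a hardware flower filter steer NVMe/TCP traffic for 10.0.0.2:4420 into its own traffic class (the set_xps_rxqs helper then appears to align transmit-queue selection with the receive queues). A minimal standalone sketch of the same sequence; in the test these commands run inside the cvl_0_0_ns_spdk namespace via ip netns exec, and the interface name cvl_0_0 is specific to this rig:

    # Enable hardware TC offload; disable the ice packet-inspect optimization flag
    ethtool --offload cvl_0_0 hw-tc-offload on
    ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    # Busy-poll socket receive paths instead of sleeping on interrupts
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    # Two traffic classes in channel mode: TC0 = 2 queues at offset 0, TC1 = 2 queues at offset 2
    tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev cvl_0_0 ingress
    # Match NVMe/TCP toward 10.0.0.2:4420 and pin it to TC1 entirely in hardware (skip_sw)
    tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1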
00:29:30.068 [2024-12-16 12:50:56.003579] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:30.068 [2024-12-16 12:50:56.003585] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:30.068 [2024-12-16 12:50:56.003591] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:30.068 [2024-12-16 12:50:56.003645] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:30.068 [2024-12-16 12:50:56.003755] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:29:30.068 [2024-12-16 12:50:56.003861] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:30.068 [2024-12-16 12:50:56.003863] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:29:30.068 12:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:30.068 12:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:29:30.068 12:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:29:30.068 12:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:30.068 12:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:30.068 12:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:30.068 12:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:29:30.068 12:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:29:30.068 12:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:29:30.068 12:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.068 12:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:30.068 12:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:30.068 12:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:29:30.068 12:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:29:30.068 12:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.068 12:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:30.068 12:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:30.068 12:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:29:30.068 12:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.068 12:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:30.327 12:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:30.327 12:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:29:30.327 12:50:56 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.327 12:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:30.327 [2024-12-16 12:50:56.218606] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:30.327 12:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:30.327 12:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:30.327 12:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.327 12:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:30.327 Malloc1 00:29:30.327 12:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:30.327 12:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:30.327 12:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.327 12:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:30.327 12:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:30.327 12:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:30.327 12:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.327 12:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:30.327 12:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:30.327 12:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:30.327 12:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.327 12:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:30.327 [2024-12-16 12:50:56.265945] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:30.328 12:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:30.328 12:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=466983 00:29:30.328 12:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:29:30.328 12:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:32.233 12:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:29:32.233 12:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.233 12:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:32.233 12:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
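Relative to the first pass, this target is configured with --enable-placement-id 1 and --sock-priority 1: poll-group placement follows the accepted socket's NAPI ID, so connections arriving on the ADQ traffic class land on the same poll group, and the listener's sockets carry the priority the flower filter keys on. rpc_cmd in the trace is the harness wrapper around scripts/rpc.py; a hedged sketch of the equivalent direct invocations, assuming the default /var/tmp/spdk.sock RPC socket:

    scripts/rpc.py sock_get_default_impl        # reports posix in this run
    scripts/rpc.py sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix
    scripts/rpc.py framework_start_init
    scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1      # 64 MB malloc bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The nvmf_get_stats snapshot that follows shows the effect: all four I/O qpairs from the 0xF0-core perf job sit on nvmf_tgt_poll_group_000, and the jq | wc -l check counts the three idle poll groups, failing only if fewer than two are idle. Total throughput is lower than the first pass (about 23.2k vs 41.3k IOPS in the two reports), but completions are concentrated on one busy-polling group as the check requires.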
00:29:32.233 12:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:29:32.233 "tick_rate": 2100000000, 00:29:32.233 "poll_groups": [ 00:29:32.233 { 00:29:32.233 "name": "nvmf_tgt_poll_group_000", 00:29:32.233 "admin_qpairs": 1, 00:29:32.233 "io_qpairs": 4, 00:29:32.233 "current_admin_qpairs": 1, 00:29:32.233 "current_io_qpairs": 4, 00:29:32.233 "pending_bdev_io": 0, 00:29:32.233 "completed_nvme_io": 44080, 00:29:32.233 "transports": [ 00:29:32.233 { 00:29:32.233 "trtype": "TCP" 00:29:32.233 } 00:29:32.233 ] 00:29:32.233 }, 00:29:32.233 { 00:29:32.233 "name": "nvmf_tgt_poll_group_001", 00:29:32.233 "admin_qpairs": 0, 00:29:32.233 "io_qpairs": 0, 00:29:32.233 "current_admin_qpairs": 0, 00:29:32.233 "current_io_qpairs": 0, 00:29:32.233 "pending_bdev_io": 0, 00:29:32.233 "completed_nvme_io": 0, 00:29:32.233 "transports": [ 00:29:32.233 { 00:29:32.233 "trtype": "TCP" 00:29:32.233 } 00:29:32.233 ] 00:29:32.233 }, 00:29:32.233 { 00:29:32.233 "name": "nvmf_tgt_poll_group_002", 00:29:32.233 "admin_qpairs": 0, 00:29:32.233 "io_qpairs": 0, 00:29:32.233 "current_admin_qpairs": 0, 00:29:32.233 "current_io_qpairs": 0, 00:29:32.233 "pending_bdev_io": 0, 00:29:32.233 "completed_nvme_io": 0, 00:29:32.233 "transports": [ 00:29:32.233 { 00:29:32.233 "trtype": "TCP" 00:29:32.233 } 00:29:32.233 ] 00:29:32.233 }, 00:29:32.233 { 00:29:32.233 "name": "nvmf_tgt_poll_group_003", 00:29:32.233 "admin_qpairs": 0, 00:29:32.233 "io_qpairs": 0, 00:29:32.233 "current_admin_qpairs": 0, 00:29:32.233 "current_io_qpairs": 0, 00:29:32.233 "pending_bdev_io": 0, 00:29:32.233 "completed_nvme_io": 0, 00:29:32.233 "transports": [ 00:29:32.233 { 00:29:32.233 "trtype": "TCP" 00:29:32.233 } 00:29:32.233 ] 00:29:32.233 } 00:29:32.233 ] 00:29:32.233 }' 00:29:32.492 12:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:29:32.492 12:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:29:32.492 12:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=3 00:29:32.492 12:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 3 -lt 2 ]] 00:29:32.492 12:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 466983 00:29:40.614 Initializing NVMe Controllers 00:29:40.614 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:40.614 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:29:40.614 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:29:40.614 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:29:40.614 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:29:40.614 Initialization complete. Launching workers. 
00:29:40.614 ======================================================== 00:29:40.614 Latency(us) 00:29:40.614 Device Information : IOPS MiB/s Average min max 00:29:40.614 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6158.70 24.06 10392.29 1460.95 55054.77 00:29:40.614 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5147.80 20.11 12432.52 1471.04 55680.48 00:29:40.614 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6084.10 23.77 10549.81 1339.17 55826.74 00:29:40.614 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5830.50 22.78 10994.42 1327.16 55554.89 00:29:40.614 ======================================================== 00:29:40.614 Total : 23221.10 90.71 11037.04 1327.16 55826.74 00:29:40.614 00:29:40.614 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:29:40.614 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # nvmfcleanup 00:29:40.614 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:29:40.614 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:40.614 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:29:40.614 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:40.614 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:40.614 rmmod nvme_tcp 00:29:40.614 rmmod nvme_fabrics 00:29:40.614 rmmod nvme_keyring 00:29:40.614 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:40.614 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:29:40.614 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:29:40.614 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@513 -- # '[' -n 466957 ']' 00:29:40.614 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # killprocess 466957 00:29:40.614 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 466957 ']' 00:29:40.614 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 466957 00:29:40.614 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:29:40.614 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:40.614 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 466957 00:29:40.614 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:40.614 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:40.614 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 466957' 00:29:40.614 killing process with pid 466957 00:29:40.614 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 466957 00:29:40.614 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 466957 00:29:40.874 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:29:40.874 12:51:06 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:29:40.874 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:29:40.874 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:29:40.874 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-save 00:29:40.874 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-restore 00:29:40.874 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:29:40.874 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:40.874 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:40.874 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:40.874 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:40.874 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:44.166 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:44.166 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:29:44.166 00:29:44.166 real 0m51.994s 00:29:44.166 user 2m44.014s 00:29:44.166 sys 0m11.137s 00:29:44.166 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:44.166 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:44.166 ************************************ 00:29:44.166 END TEST nvmf_perf_adq 00:29:44.166 ************************************ 00:29:44.166 12:51:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:29:44.166 12:51:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:44.166 12:51:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:44.166 12:51:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:44.166 ************************************ 00:29:44.166 START TEST nvmf_shutdown 00:29:44.166 ************************************ 00:29:44.166 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:29:44.166 * Looking for test storage... 
00:29:44.166 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:44.166 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:44.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:44.166 --rc genhtml_branch_coverage=1 00:29:44.166 --rc genhtml_function_coverage=1 00:29:44.166 --rc genhtml_legend=1 00:29:44.166 --rc geninfo_all_blocks=1 00:29:44.166 --rc geninfo_unexecuted_blocks=1 00:29:44.166 00:29:44.166 ' 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:44.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:44.166 --rc genhtml_branch_coverage=1 00:29:44.166 --rc genhtml_function_coverage=1 00:29:44.166 --rc genhtml_legend=1 00:29:44.166 --rc geninfo_all_blocks=1 00:29:44.166 --rc geninfo_unexecuted_blocks=1 00:29:44.166 00:29:44.166 ' 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:44.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:44.166 --rc genhtml_branch_coverage=1 00:29:44.166 --rc genhtml_function_coverage=1 00:29:44.166 --rc genhtml_legend=1 00:29:44.166 --rc geninfo_all_blocks=1 00:29:44.166 --rc geninfo_unexecuted_blocks=1 00:29:44.166 00:29:44.166 ' 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:44.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:44.166 --rc genhtml_branch_coverage=1 00:29:44.166 --rc genhtml_function_coverage=1 00:29:44.166 --rc genhtml_legend=1 00:29:44.166 --rc geninfo_all_blocks=1 00:29:44.166 --rc geninfo_unexecuted_blocks=1 00:29:44.166 00:29:44.166 ' 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
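The lt/cmp_versions trace above is the harness deciding which lcov flags to export: it takes the last field of "lcov --version", tests it element-wise against 2, and settles on the pre-2.x spelling (--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1) seen in LCOV_OPTS. A self-contained sketch of that comparison, not the script's exact code, assuming purely numeric dot/dash/colon-separated components:

    # Hedged sketch of an element-wise "version A < version B" test in bash
    version_lt() {
        local IFS='.-:'
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < len; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # strictly smaller: A < B
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1   # strictly larger: not less-than
        done
        return 1                                          # all components equal: not less-than
    }
    version_lt 1.15 2 && echo "lcov predates 2.x"         # matches the lt 1.15 2 call traced above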
00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.166 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:29:44.167 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.167 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:29:44.167 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:44.167 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:44.167 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:44.167 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:44.167 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:44.167 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:44.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:44.167 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:44.167 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:44.167 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:44.167 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:44.167 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:44.167 12:51:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@169 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:29:44.167 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:44.167 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:44.167 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:44.167 ************************************ 00:29:44.167 START TEST nvmf_shutdown_tc1 00:29:44.167 ************************************ 00:29:44.167 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:29:44.167 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:29:44.167 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:44.167 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:29:44.167 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:44.167 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # prepare_net_devs 00:29:44.167 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:29:44.167 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:29:44.167 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:44.167 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:44.167 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:44.167 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:29:44.167 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:29:44.167 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:44.167 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:50.739 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:50.739 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:50.739 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:50.739 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:50.739 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:50.739 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:50.740 12:51:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:50.740 12:51:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:50.740 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:50.740 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:50.740 Found net devices under 0000:af:00.0: cvl_0_0 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:50.740 12:51:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:50.740 Found net devices under 0000:af:00.1: cvl_0_1 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # is_hw=yes 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:50.740 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:50.740 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.311 ms 00:29:50.740 00:29:50.740 --- 10.0.0.2 ping statistics --- 00:29:50.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:50.740 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:29:50.740 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:50.740 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:50.740 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:29:50.740 00:29:50.741 --- 10.0.0.1 ping statistics --- 00:29:50.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:50.741 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # return 0 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@505 -- # nvmfpid=472832 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@506 -- # waitforlisten 472832 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 472832 ']' 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:50.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
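The netns plumbing traced above is how the harness turns one dual-port E810 card into a back-to-back target/initiator pair without the kernel short-circuiting the traffic over loopback: the target port is hidden in its own namespace, so every NVMe/TCP byte really crosses the PCI function pair. A condensed bash sketch of the same sequence, using the interface names, addresses, and listen port from this run (the log's ipts helper is ordinary iptables plus an SPDK_NVMF comment tag):

# Sketch of the nvmf_tcp_init steps traced above; names and IPs are from this run.
TARGET_IF=cvl_0_0        # moves into the namespace, will serve 10.0.0.2:4420
INITIATOR_IF=cvl_0_1     # stays in the default namespace as 10.0.0.1
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"    # isolate the target-side port

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP listen port; the traced ipts wrapper tags the rule.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# Verify both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

This is also why NVMF_TARGET_NS_CMD is set to ip netns exec cvl_0_0_ns_spdk and prefixed onto NVMF_APP in the trace: the target has to be launched inside the namespace that owns cvl_0_0.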
00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:50.741 [2024-12-16 12:51:16.102478] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:29:50.741 [2024-12-16 12:51:16.102525] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:50.741 [2024-12-16 12:51:16.173711] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:50.741 [2024-12-16 12:51:16.213044] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:50.741 [2024-12-16 12:51:16.213087] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:50.741 [2024-12-16 12:51:16.213093] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:50.741 [2024-12-16 12:51:16.213099] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:50.741 [2024-12-16 12:51:16.213104] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:50.741 [2024-12-16 12:51:16.213223] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:29:50.741 [2024-12-16 12:51:16.213347] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:29:50.741 [2024-12-16 12:51:16.213433] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:50.741 [2024-12-16 12:51:16.213435] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:50.741 [2024-12-16 12:51:16.375430] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:50.741 12:51:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.741 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:50.741 Malloc1 
00:29:50.741 [2024-12-16 12:51:16.475045] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:50.741 Malloc2 00:29:50.741 Malloc3 00:29:50.741 Malloc4 00:29:50.741 Malloc5 00:29:50.741 Malloc6 00:29:50.741 Malloc7 00:29:50.741 Malloc8 00:29:50.741 Malloc9 00:29:51.001 Malloc10 00:29:51.001 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.001 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:51.001 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:51.001 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:51.001 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=473101 00:29:51.001 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 473101 /var/tmp/bdevperf.sock 00:29:51.001 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 473101 ']' 00:29:51.001 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:51.001 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:29:51.001 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:51.001 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:51.001 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:51.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
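The /dev/fd/63 JSON handed to bdev_svc above is generated on the fly by gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10; the config+=("$(cat <<-EOF ...)") entries traced below append one bdev_nvme_attach_controller stanza per subsystem number. A minimal reconstruction of the pattern, with a simplified final join (the real helper in test/nvmf/common.sh embeds the comma-joined stanzas in a fuller JSON document, which is what the later jq ., IFS=, and printf steps belong to):

# Sketch of the generator traced below; <<EOF replaces the script's
# tab-stripping <<-EOF, and the bare-array join is a simplification.
gen_nvmf_target_json() {
    local subsystem
    local -a config=()

    for subsystem in "${@:-1}"; do
        config+=("$(
            cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done

    # Comma-join via "${config[*]}" and validate with jq. In this run
    # TEST_TRANSPORT=tcp, NVMF_FIRST_TARGET_IP=10.0.0.2, NVMF_PORT=4420.
    local IFS=,
    printf '[%s]\n' "${config[*]}" | jq .
}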
00:29:51.001 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # config=() 00:29:51.001 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:51.001 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # local subsystem config 00:29:51.001 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:51.001 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:51.001 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:51.001 { 00:29:51.001 "params": { 00:29:51.001 "name": "Nvme$subsystem", 00:29:51.001 "trtype": "$TEST_TRANSPORT", 00:29:51.001 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:51.001 "adrfam": "ipv4", 00:29:51.001 "trsvcid": "$NVMF_PORT", 00:29:51.001 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:51.001 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:51.001 "hdgst": ${hdgst:-false}, 00:29:51.001 "ddgst": ${ddgst:-false} 00:29:51.001 }, 00:29:51.001 "method": "bdev_nvme_attach_controller" 00:29:51.001 } 00:29:51.001 EOF 00:29:51.001 )") 00:29:51.001 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:51.001 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:51.001 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:51.001 { 00:29:51.001 "params": { 00:29:51.001 "name": "Nvme$subsystem", 00:29:51.001 "trtype": "$TEST_TRANSPORT", 00:29:51.001 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:51.001 "adrfam": "ipv4", 00:29:51.001 "trsvcid": "$NVMF_PORT", 00:29:51.001 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:51.001 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:51.001 "hdgst": ${hdgst:-false}, 00:29:51.001 "ddgst": ${ddgst:-false} 00:29:51.001 }, 00:29:51.001 "method": "bdev_nvme_attach_controller" 00:29:51.001 } 00:29:51.001 EOF 00:29:51.001 )") 00:29:51.001 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:51.001 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:51.001 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:51.001 { 00:29:51.001 "params": { 00:29:51.001 "name": "Nvme$subsystem", 00:29:51.001 "trtype": "$TEST_TRANSPORT", 00:29:51.001 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:51.001 "adrfam": "ipv4", 00:29:51.001 "trsvcid": "$NVMF_PORT", 00:29:51.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:51.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:51.002 "hdgst": ${hdgst:-false}, 00:29:51.002 "ddgst": ${ddgst:-false} 00:29:51.002 }, 00:29:51.002 "method": "bdev_nvme_attach_controller" 00:29:51.002 } 00:29:51.002 EOF 00:29:51.002 )") 00:29:51.002 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:51.002 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:51.002 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- 
# config+=("$(cat <<-EOF 00:29:51.002 { 00:29:51.002 "params": { 00:29:51.002 "name": "Nvme$subsystem", 00:29:51.002 "trtype": "$TEST_TRANSPORT", 00:29:51.002 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:51.002 "adrfam": "ipv4", 00:29:51.002 "trsvcid": "$NVMF_PORT", 00:29:51.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:51.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:51.002 "hdgst": ${hdgst:-false}, 00:29:51.002 "ddgst": ${ddgst:-false} 00:29:51.002 }, 00:29:51.002 "method": "bdev_nvme_attach_controller" 00:29:51.002 } 00:29:51.002 EOF 00:29:51.002 )") 00:29:51.002 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:51.002 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:51.002 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:51.002 { 00:29:51.002 "params": { 00:29:51.002 "name": "Nvme$subsystem", 00:29:51.002 "trtype": "$TEST_TRANSPORT", 00:29:51.002 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:51.002 "adrfam": "ipv4", 00:29:51.002 "trsvcid": "$NVMF_PORT", 00:29:51.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:51.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:51.002 "hdgst": ${hdgst:-false}, 00:29:51.002 "ddgst": ${ddgst:-false} 00:29:51.002 }, 00:29:51.002 "method": "bdev_nvme_attach_controller" 00:29:51.002 } 00:29:51.002 EOF 00:29:51.002 )") 00:29:51.002 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:51.002 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:51.002 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:51.002 { 00:29:51.002 "params": { 00:29:51.002 "name": "Nvme$subsystem", 00:29:51.002 "trtype": "$TEST_TRANSPORT", 00:29:51.002 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:51.002 "adrfam": "ipv4", 00:29:51.002 "trsvcid": "$NVMF_PORT", 00:29:51.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:51.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:51.002 "hdgst": ${hdgst:-false}, 00:29:51.002 "ddgst": ${ddgst:-false} 00:29:51.002 }, 00:29:51.002 "method": "bdev_nvme_attach_controller" 00:29:51.002 } 00:29:51.002 EOF 00:29:51.002 )") 00:29:51.002 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:51.002 [2024-12-16 12:51:16.939646] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:29:51.002 [2024-12-16 12:51:16.939693] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:29:51.002 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:51.002 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:51.002 { 00:29:51.002 "params": { 00:29:51.002 "name": "Nvme$subsystem", 00:29:51.002 "trtype": "$TEST_TRANSPORT", 00:29:51.002 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:51.002 "adrfam": "ipv4", 00:29:51.002 "trsvcid": "$NVMF_PORT", 00:29:51.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:51.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:51.002 "hdgst": ${hdgst:-false}, 00:29:51.002 "ddgst": ${ddgst:-false} 00:29:51.002 }, 00:29:51.002 "method": "bdev_nvme_attach_controller" 00:29:51.002 } 00:29:51.002 EOF 00:29:51.002 )") 00:29:51.002 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:51.002 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:51.002 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:51.002 { 00:29:51.002 "params": { 00:29:51.002 "name": "Nvme$subsystem", 00:29:51.002 "trtype": "$TEST_TRANSPORT", 00:29:51.002 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:51.002 "adrfam": "ipv4", 00:29:51.002 "trsvcid": "$NVMF_PORT", 00:29:51.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:51.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:51.002 "hdgst": ${hdgst:-false}, 00:29:51.002 "ddgst": ${ddgst:-false} 00:29:51.002 }, 00:29:51.002 "method": "bdev_nvme_attach_controller" 00:29:51.002 } 00:29:51.002 EOF 00:29:51.002 )") 00:29:51.002 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:51.002 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:51.002 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:51.002 { 00:29:51.002 "params": { 00:29:51.002 "name": "Nvme$subsystem", 00:29:51.002 "trtype": "$TEST_TRANSPORT", 00:29:51.002 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:51.002 "adrfam": "ipv4", 00:29:51.002 "trsvcid": "$NVMF_PORT", 00:29:51.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:51.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:51.002 "hdgst": ${hdgst:-false}, 00:29:51.002 "ddgst": ${ddgst:-false} 00:29:51.002 }, 00:29:51.002 "method": "bdev_nvme_attach_controller" 00:29:51.002 } 00:29:51.002 EOF 00:29:51.002 )") 00:29:51.002 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:51.002 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:51.002 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:51.002 { 00:29:51.002 "params": { 00:29:51.002 "name": "Nvme$subsystem", 00:29:51.002 "trtype": "$TEST_TRANSPORT", 00:29:51.002 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:51.002 "adrfam": "ipv4", 
00:29:51.002 "trsvcid": "$NVMF_PORT", 00:29:51.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:51.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:51.002 "hdgst": ${hdgst:-false}, 00:29:51.002 "ddgst": ${ddgst:-false} 00:29:51.002 }, 00:29:51.002 "method": "bdev_nvme_attach_controller" 00:29:51.002 } 00:29:51.002 EOF 00:29:51.002 )") 00:29:51.002 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:51.002 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # jq . 00:29:51.002 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@581 -- # IFS=, 00:29:51.002 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:29:51.002 "params": { 00:29:51.002 "name": "Nvme1", 00:29:51.002 "trtype": "tcp", 00:29:51.002 "traddr": "10.0.0.2", 00:29:51.002 "adrfam": "ipv4", 00:29:51.002 "trsvcid": "4420", 00:29:51.002 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:51.002 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:51.002 "hdgst": false, 00:29:51.002 "ddgst": false 00:29:51.002 }, 00:29:51.002 "method": "bdev_nvme_attach_controller" 00:29:51.002 },{ 00:29:51.002 "params": { 00:29:51.002 "name": "Nvme2", 00:29:51.002 "trtype": "tcp", 00:29:51.002 "traddr": "10.0.0.2", 00:29:51.002 "adrfam": "ipv4", 00:29:51.002 "trsvcid": "4420", 00:29:51.002 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:51.002 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:51.002 "hdgst": false, 00:29:51.002 "ddgst": false 00:29:51.002 }, 00:29:51.002 "method": "bdev_nvme_attach_controller" 00:29:51.002 },{ 00:29:51.002 "params": { 00:29:51.002 "name": "Nvme3", 00:29:51.002 "trtype": "tcp", 00:29:51.002 "traddr": "10.0.0.2", 00:29:51.002 "adrfam": "ipv4", 00:29:51.002 "trsvcid": "4420", 00:29:51.002 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:51.002 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:51.002 "hdgst": false, 00:29:51.002 "ddgst": false 00:29:51.002 }, 00:29:51.002 "method": "bdev_nvme_attach_controller" 00:29:51.002 },{ 00:29:51.002 "params": { 00:29:51.002 "name": "Nvme4", 00:29:51.002 "trtype": "tcp", 00:29:51.002 "traddr": "10.0.0.2", 00:29:51.002 "adrfam": "ipv4", 00:29:51.002 "trsvcid": "4420", 00:29:51.002 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:51.002 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:51.002 "hdgst": false, 00:29:51.002 "ddgst": false 00:29:51.002 }, 00:29:51.002 "method": "bdev_nvme_attach_controller" 00:29:51.002 },{ 00:29:51.002 "params": { 00:29:51.002 "name": "Nvme5", 00:29:51.002 "trtype": "tcp", 00:29:51.002 "traddr": "10.0.0.2", 00:29:51.002 "adrfam": "ipv4", 00:29:51.002 "trsvcid": "4420", 00:29:51.002 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:51.002 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:51.002 "hdgst": false, 00:29:51.002 "ddgst": false 00:29:51.002 }, 00:29:51.002 "method": "bdev_nvme_attach_controller" 00:29:51.002 },{ 00:29:51.002 "params": { 00:29:51.002 "name": "Nvme6", 00:29:51.002 "trtype": "tcp", 00:29:51.002 "traddr": "10.0.0.2", 00:29:51.002 "adrfam": "ipv4", 00:29:51.002 "trsvcid": "4420", 00:29:51.002 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:51.002 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:51.002 "hdgst": false, 00:29:51.002 "ddgst": false 00:29:51.003 }, 00:29:51.003 "method": "bdev_nvme_attach_controller" 00:29:51.003 },{ 00:29:51.003 "params": { 00:29:51.003 "name": "Nvme7", 00:29:51.003 "trtype": "tcp", 00:29:51.003 "traddr": "10.0.0.2", 00:29:51.003 
"adrfam": "ipv4", 00:29:51.003 "trsvcid": "4420", 00:29:51.003 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:51.003 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:51.003 "hdgst": false, 00:29:51.003 "ddgst": false 00:29:51.003 }, 00:29:51.003 "method": "bdev_nvme_attach_controller" 00:29:51.003 },{ 00:29:51.003 "params": { 00:29:51.003 "name": "Nvme8", 00:29:51.003 "trtype": "tcp", 00:29:51.003 "traddr": "10.0.0.2", 00:29:51.003 "adrfam": "ipv4", 00:29:51.003 "trsvcid": "4420", 00:29:51.003 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:51.003 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:51.003 "hdgst": false, 00:29:51.003 "ddgst": false 00:29:51.003 }, 00:29:51.003 "method": "bdev_nvme_attach_controller" 00:29:51.003 },{ 00:29:51.003 "params": { 00:29:51.003 "name": "Nvme9", 00:29:51.003 "trtype": "tcp", 00:29:51.003 "traddr": "10.0.0.2", 00:29:51.003 "adrfam": "ipv4", 00:29:51.003 "trsvcid": "4420", 00:29:51.003 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:51.003 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:51.003 "hdgst": false, 00:29:51.003 "ddgst": false 00:29:51.003 }, 00:29:51.003 "method": "bdev_nvme_attach_controller" 00:29:51.003 },{ 00:29:51.003 "params": { 00:29:51.003 "name": "Nvme10", 00:29:51.003 "trtype": "tcp", 00:29:51.003 "traddr": "10.0.0.2", 00:29:51.003 "adrfam": "ipv4", 00:29:51.003 "trsvcid": "4420", 00:29:51.003 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:51.003 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:51.003 "hdgst": false, 00:29:51.003 "ddgst": false 00:29:51.003 }, 00:29:51.003 "method": "bdev_nvme_attach_controller" 00:29:51.003 }' 00:29:51.003 [2024-12-16 12:51:17.009277] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:51.003 [2024-12-16 12:51:17.047795] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:52.908 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:52.908 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:29:52.908 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:52.908 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.908 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:52.908 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.908 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 473101 00:29:52.908 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:29:52.908 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:29:53.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 473101 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:29:53.846 12:51:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 472832 00:29:53.846 12:51:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:29:53.846 12:51:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:53.846 12:51:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # config=() 00:29:53.846 12:51:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # local subsystem config 00:29:53.846 12:51:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:53.846 12:51:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:53.846 { 00:29:53.846 "params": { 00:29:53.846 "name": "Nvme$subsystem", 00:29:53.846 "trtype": "$TEST_TRANSPORT", 00:29:53.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:53.846 "adrfam": "ipv4", 00:29:53.846 "trsvcid": "$NVMF_PORT", 00:29:53.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:53.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:53.846 "hdgst": ${hdgst:-false}, 00:29:53.846 "ddgst": ${ddgst:-false} 00:29:53.846 }, 00:29:53.846 "method": "bdev_nvme_attach_controller" 00:29:53.846 } 00:29:53.846 EOF 00:29:53.846 )") 00:29:53.846 12:51:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:53.846 12:51:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:53.846 12:51:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:53.846 { 00:29:53.846 "params": { 00:29:53.846 "name": "Nvme$subsystem", 00:29:53.846 "trtype": "$TEST_TRANSPORT", 00:29:53.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:53.846 "adrfam": "ipv4", 00:29:53.846 "trsvcid": "$NVMF_PORT", 00:29:53.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:53.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:53.846 "hdgst": ${hdgst:-false}, 00:29:53.846 "ddgst": ${ddgst:-false} 00:29:53.846 }, 00:29:53.846 "method": "bdev_nvme_attach_controller" 00:29:53.846 } 00:29:53.846 EOF 00:29:53.846 )") 00:29:53.846 12:51:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:53.846 12:51:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:53.846 12:51:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:53.846 { 00:29:53.846 "params": { 00:29:53.846 "name": "Nvme$subsystem", 00:29:53.846 "trtype": "$TEST_TRANSPORT", 00:29:53.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:53.846 "adrfam": "ipv4", 00:29:53.846 "trsvcid": "$NVMF_PORT", 00:29:53.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:53.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:53.846 "hdgst": ${hdgst:-false}, 00:29:53.846 "ddgst": ${ddgst:-false} 00:29:53.846 }, 00:29:53.846 "method": "bdev_nvme_attach_controller" 00:29:53.846 } 00:29:53.846 EOF 00:29:53.846 )") 00:29:53.846 12:51:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:53.846 12:51:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:53.846 12:51:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:53.846 { 00:29:53.846 "params": { 00:29:53.846 "name": "Nvme$subsystem", 00:29:53.846 "trtype": "$TEST_TRANSPORT", 00:29:53.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:53.846 "adrfam": "ipv4", 00:29:53.846 "trsvcid": "$NVMF_PORT", 00:29:53.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:53.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:53.846 "hdgst": ${hdgst:-false}, 00:29:53.846 "ddgst": ${ddgst:-false} 00:29:53.846 }, 00:29:53.846 "method": "bdev_nvme_attach_controller" 00:29:53.846 } 00:29:53.846 EOF 00:29:53.846 )") 00:29:53.846 12:51:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:53.846 12:51:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:53.846 12:51:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:53.846 { 00:29:53.846 "params": { 00:29:53.846 "name": "Nvme$subsystem", 00:29:53.846 "trtype": "$TEST_TRANSPORT", 00:29:53.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:53.846 "adrfam": "ipv4", 00:29:53.846 "trsvcid": "$NVMF_PORT", 00:29:53.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:53.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:53.846 "hdgst": ${hdgst:-false}, 00:29:53.846 "ddgst": ${ddgst:-false} 00:29:53.846 }, 00:29:53.846 "method": "bdev_nvme_attach_controller" 00:29:53.846 } 00:29:53.846 EOF 00:29:53.846 )") 00:29:53.846 12:51:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:53.846 12:51:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:53.846 12:51:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:53.846 { 00:29:53.846 "params": { 00:29:53.846 "name": "Nvme$subsystem", 00:29:53.846 "trtype": "$TEST_TRANSPORT", 00:29:53.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:53.846 "adrfam": "ipv4", 00:29:53.846 "trsvcid": "$NVMF_PORT", 00:29:53.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:53.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:53.846 "hdgst": ${hdgst:-false}, 00:29:53.846 "ddgst": ${ddgst:-false} 00:29:53.846 }, 00:29:53.846 "method": "bdev_nvme_attach_controller" 00:29:53.846 } 00:29:53.846 EOF 00:29:53.846 )") 00:29:53.846 12:51:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:53.846 12:51:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:53.847 12:51:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:53.847 { 00:29:53.847 "params": { 00:29:53.847 "name": "Nvme$subsystem", 00:29:53.847 "trtype": "$TEST_TRANSPORT", 00:29:53.847 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:53.847 "adrfam": "ipv4", 00:29:53.847 "trsvcid": "$NVMF_PORT", 00:29:53.847 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:53.847 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:53.847 "hdgst": ${hdgst:-false}, 00:29:53.847 "ddgst": ${ddgst:-false} 00:29:53.847 }, 00:29:53.847 "method": "bdev_nvme_attach_controller" 00:29:53.847 } 00:29:53.847 EOF 00:29:53.847 )") 00:29:53.847 12:51:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:53.847 [2024-12-16 
12:51:19.870730] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:29:53.847 [2024-12-16 12:51:19.870782] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid473569 ] 00:29:53.847 12:51:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:53.847 12:51:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:53.847 { 00:29:53.847 "params": { 00:29:53.847 "name": "Nvme$subsystem", 00:29:53.847 "trtype": "$TEST_TRANSPORT", 00:29:53.847 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:53.847 "adrfam": "ipv4", 00:29:53.847 "trsvcid": "$NVMF_PORT", 00:29:53.847 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:53.847 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:53.847 "hdgst": ${hdgst:-false}, 00:29:53.847 "ddgst": ${ddgst:-false} 00:29:53.847 }, 00:29:53.847 "method": "bdev_nvme_attach_controller" 00:29:53.847 } 00:29:53.847 EOF 00:29:53.847 )") 00:29:53.847 12:51:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:53.847 12:51:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:53.847 12:51:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:53.847 { 00:29:53.847 "params": { 00:29:53.847 "name": "Nvme$subsystem", 00:29:53.847 "trtype": "$TEST_TRANSPORT", 00:29:53.847 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:53.847 "adrfam": "ipv4", 00:29:53.847 "trsvcid": "$NVMF_PORT", 00:29:53.847 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:53.847 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:53.847 "hdgst": ${hdgst:-false}, 00:29:53.847 "ddgst": ${ddgst:-false} 00:29:53.847 }, 00:29:53.847 "method": "bdev_nvme_attach_controller" 00:29:53.847 } 00:29:53.847 EOF 00:29:53.847 )") 00:29:53.847 12:51:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:53.847 12:51:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:53.847 12:51:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:53.847 { 00:29:53.847 "params": { 00:29:53.847 "name": "Nvme$subsystem", 00:29:53.847 "trtype": "$TEST_TRANSPORT", 00:29:53.847 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:53.847 "adrfam": "ipv4", 00:29:53.847 "trsvcid": "$NVMF_PORT", 00:29:53.847 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:53.847 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:53.847 "hdgst": ${hdgst:-false}, 00:29:53.847 "ddgst": ${ddgst:-false} 00:29:53.847 }, 00:29:53.847 "method": "bdev_nvme_attach_controller" 00:29:53.847 } 00:29:53.847 EOF 00:29:53.847 )") 00:29:53.847 12:51:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:53.847 12:51:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # jq . 
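The jq ., IFS=, and printf '%s\n' entries around this point are the join step of the second gen_nvmf_target_json call: expanding "${config[*]}" with IFS set to ',' is what yields the single long '{...},{...}' argument echoed next, presumably validated by jq as part of the document it is embedded in. The result never touches disk; shutdown.sh@92 feeds it to the real bdevperf over an anonymous fd via process substitution, matching the traced command:

# Invocation shape from the trace; $rootdir is the SPDK checkout root.
"$rootdir/build/examples/bdevperf" \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 1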
00:29:53.847 12:51:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@581 -- # IFS=, 00:29:53.847 12:51:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:29:53.847 "params": { 00:29:53.847 "name": "Nvme1", 00:29:53.847 "trtype": "tcp", 00:29:53.847 "traddr": "10.0.0.2", 00:29:53.847 "adrfam": "ipv4", 00:29:53.847 "trsvcid": "4420", 00:29:53.847 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:53.847 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:53.847 "hdgst": false, 00:29:53.847 "ddgst": false 00:29:53.847 }, 00:29:53.847 "method": "bdev_nvme_attach_controller" 00:29:53.847 },{ 00:29:53.847 "params": { 00:29:53.847 "name": "Nvme2", 00:29:53.847 "trtype": "tcp", 00:29:53.847 "traddr": "10.0.0.2", 00:29:53.847 "adrfam": "ipv4", 00:29:53.847 "trsvcid": "4420", 00:29:53.847 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:53.847 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:53.847 "hdgst": false, 00:29:53.847 "ddgst": false 00:29:53.847 }, 00:29:53.847 "method": "bdev_nvme_attach_controller" 00:29:53.847 },{ 00:29:53.847 "params": { 00:29:53.847 "name": "Nvme3", 00:29:53.847 "trtype": "tcp", 00:29:53.847 "traddr": "10.0.0.2", 00:29:53.847 "adrfam": "ipv4", 00:29:53.847 "trsvcid": "4420", 00:29:53.847 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:53.847 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:53.847 "hdgst": false, 00:29:53.847 "ddgst": false 00:29:53.847 }, 00:29:53.847 "method": "bdev_nvme_attach_controller" 00:29:53.847 },{ 00:29:53.847 "params": { 00:29:53.847 "name": "Nvme4", 00:29:53.847 "trtype": "tcp", 00:29:53.847 "traddr": "10.0.0.2", 00:29:53.847 "adrfam": "ipv4", 00:29:53.847 "trsvcid": "4420", 00:29:53.847 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:53.847 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:53.847 "hdgst": false, 00:29:53.847 "ddgst": false 00:29:53.847 }, 00:29:53.847 "method": "bdev_nvme_attach_controller" 00:29:53.847 },{ 00:29:53.847 "params": { 00:29:53.847 "name": "Nvme5", 00:29:53.847 "trtype": "tcp", 00:29:53.847 "traddr": "10.0.0.2", 00:29:53.847 "adrfam": "ipv4", 00:29:53.847 "trsvcid": "4420", 00:29:53.847 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:53.847 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:53.847 "hdgst": false, 00:29:53.847 "ddgst": false 00:29:53.847 }, 00:29:53.847 "method": "bdev_nvme_attach_controller" 00:29:53.847 },{ 00:29:53.847 "params": { 00:29:53.847 "name": "Nvme6", 00:29:53.847 "trtype": "tcp", 00:29:53.847 "traddr": "10.0.0.2", 00:29:53.847 "adrfam": "ipv4", 00:29:53.847 "trsvcid": "4420", 00:29:53.847 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:53.847 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:53.847 "hdgst": false, 00:29:53.847 "ddgst": false 00:29:53.847 }, 00:29:53.847 "method": "bdev_nvme_attach_controller" 00:29:53.847 },{ 00:29:53.847 "params": { 00:29:53.847 "name": "Nvme7", 00:29:53.847 "trtype": "tcp", 00:29:53.847 "traddr": "10.0.0.2", 00:29:53.847 "adrfam": "ipv4", 00:29:53.847 "trsvcid": "4420", 00:29:53.847 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:53.847 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:53.847 "hdgst": false, 00:29:53.847 "ddgst": false 00:29:53.847 }, 00:29:53.847 "method": "bdev_nvme_attach_controller" 00:29:53.847 },{ 00:29:53.847 "params": { 00:29:53.847 "name": "Nvme8", 00:29:53.847 "trtype": "tcp", 00:29:53.847 "traddr": "10.0.0.2", 00:29:53.847 "adrfam": "ipv4", 00:29:53.847 "trsvcid": "4420", 00:29:53.847 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:53.847 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:29:53.847 "hdgst": false, 00:29:53.847 "ddgst": false 00:29:53.847 }, 00:29:53.847 "method": "bdev_nvme_attach_controller" 00:29:53.847 },{ 00:29:53.847 "params": { 00:29:53.847 "name": "Nvme9", 00:29:53.847 "trtype": "tcp", 00:29:53.847 "traddr": "10.0.0.2", 00:29:53.847 "adrfam": "ipv4", 00:29:53.847 "trsvcid": "4420", 00:29:53.847 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:53.847 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:53.847 "hdgst": false, 00:29:53.847 "ddgst": false 00:29:53.847 }, 00:29:53.847 "method": "bdev_nvme_attach_controller" 00:29:53.848 },{ 00:29:53.848 "params": { 00:29:53.848 "name": "Nvme10", 00:29:53.848 "trtype": "tcp", 00:29:53.848 "traddr": "10.0.0.2", 00:29:53.848 "adrfam": "ipv4", 00:29:53.848 "trsvcid": "4420", 00:29:53.848 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:53.848 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:53.848 "hdgst": false, 00:29:53.848 "ddgst": false 00:29:53.848 }, 00:29:53.848 "method": "bdev_nvme_attach_controller" 00:29:53.848 }' 00:29:54.107 [2024-12-16 12:51:19.940301] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:54.107 [2024-12-16 12:51:19.979352] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:55.484 Running I/O for 1 seconds... 00:29:56.420 2314.00 IOPS, 144.62 MiB/s 00:29:56.420 Latency(us) 00:29:56.420 [2024-12-16T11:51:22.487Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:56.420 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:56.420 Verification LBA range: start 0x0 length 0x400 00:29:56.420 Nvme1n1 : 1.14 283.97 17.75 0.00 0.00 222252.36 7365.00 190740.97 00:29:56.420 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:56.420 Verification LBA range: start 0x0 length 0x400 00:29:56.420 Nvme2n1 : 1.13 283.01 17.69 0.00 0.00 221149.48 19598.38 201726.05 00:29:56.420 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:56.420 Verification LBA range: start 0x0 length 0x400 00:29:56.420 Nvme3n1 : 1.14 280.76 17.55 0.00 0.00 219896.49 14417.92 224694.86 00:29:56.420 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:56.420 Verification LBA range: start 0x0 length 0x400 00:29:56.420 Nvme4n1 : 1.08 300.38 18.77 0.00 0.00 201237.10 8426.06 191739.61 00:29:56.420 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:56.420 Verification LBA range: start 0x0 length 0x400 00:29:56.420 Nvme5n1 : 1.16 277.01 17.31 0.00 0.00 216769.10 17101.78 213709.78 00:29:56.420 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:56.420 Verification LBA range: start 0x0 length 0x400 00:29:56.420 Nvme6n1 : 1.16 276.56 17.28 0.00 0.00 214082.90 16352.79 221698.93 00:29:56.420 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:56.420 Verification LBA range: start 0x0 length 0x400 00:29:56.420 Nvme7n1 : 1.15 279.05 17.44 0.00 0.00 208942.62 27587.54 202724.69 00:29:56.420 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:56.420 Verification LBA range: start 0x0 length 0x400 00:29:56.420 Nvme8n1 : 1.15 278.59 17.41 0.00 0.00 206223.85 13044.78 218702.99 00:29:56.420 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:56.420 Verification LBA range: start 0x0 length 0x400 00:29:56.420 Nvme9n1 : 1.16 275.46 17.22 0.00 0.00 205748.52 19972.88 223696.21 00:29:56.420 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:29:56.420 Verification LBA range: start 0x0 length 0x400 00:29:56.420 Nvme10n1 : 1.16 275.04 17.19 0.00 0.00 203007.90 15978.30 232684.01 00:29:56.420 [2024-12-16T11:51:22.487Z] =================================================================================================================== 00:29:56.420 [2024-12-16T11:51:22.487Z] Total : 2809.83 175.61 0.00 0.00 211930.45 7365.00 232684.01 00:29:56.679 12:51:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:29:56.679 12:51:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:56.679 12:51:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:56.679 12:51:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:56.679 12:51:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:56.679 12:51:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # nvmfcleanup 00:29:56.679 12:51:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:29:56.679 12:51:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:56.679 12:51:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:29:56.679 12:51:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:56.679 12:51:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:56.679 rmmod nvme_tcp 00:29:56.679 rmmod nvme_fabrics 00:29:56.679 rmmod nvme_keyring 00:29:56.679 12:51:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:56.679 12:51:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:29:56.679 12:51:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:29:56.939 12:51:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@513 -- # '[' -n 472832 ']' 00:29:56.939 12:51:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@514 -- # killprocess 472832 00:29:56.939 12:51:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 472832 ']' 00:29:56.939 12:51:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 472832 00:29:56.939 12:51:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:29:56.939 12:51:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:56.939 12:51:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 472832 00:29:56.939 12:51:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:56.939 12:51:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:56.939 12:51:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 472832' 00:29:56.939 killing process with pid 472832 00:29:56.939 12:51:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 472832 00:29:56.939 12:51:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 472832 00:29:57.197 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:29:57.197 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:29:57.197 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:29:57.197 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:29:57.198 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@787 -- # iptables-save 00:29:57.198 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:29:57.198 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@787 -- # iptables-restore 00:29:57.198 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:57.198 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:57.198 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:57.198 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:57.198 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:59.733 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:59.733 00:29:59.733 real 0m15.152s 00:29:59.733 user 0m33.569s 00:29:59.733 sys 0m5.775s 00:29:59.733 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:59.734 ************************************ 00:29:59.734 END TEST nvmf_shutdown_tc1 00:29:59.734 ************************************ 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:59.734 ************************************ 00:29:59.734 START TEST nvmf_shutdown_tc2 00:29:59.734 ************************************ 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:29:59.734 12:51:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # prepare_net_devs 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 
-- # mlx=() 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:59.734 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ tcp == 
rdma ]] 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:59.734 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:59.734 Found net devices under 0000:af:00.0: cvl_0_0 00:29:59.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:59.735 12:51:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:59.735 Found net devices under 0000:af:00.1: cvl_0_1 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # is_hw=yes 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:59.735 12:51:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:59.735 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:59.735 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:29:59.735 00:29:59.735 --- 10.0.0.2 ping statistics --- 00:29:59.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.735 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:59.735 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:59.735 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:29:59.735 00:29:59.735 --- 10.0.0.1 ping statistics --- 00:29:59.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.735 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # return 0 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:59.735 12:51:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@505 -- # nvmfpid=474573 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@506 -- # waitforlisten 474573 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 474573 ']' 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:59.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:59.735 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:59.735 [2024-12-16 12:51:25.771507] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:29:59.735 [2024-12-16 12:51:25.771547] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:59.994 [2024-12-16 12:51:25.842197] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:59.994 [2024-12-16 12:51:25.881983] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:59.994 [2024-12-16 12:51:25.882021] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:59.994 [2024-12-16 12:51:25.882028] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:59.994 [2024-12-16 12:51:25.882034] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:59.994 [2024-12-16 12:51:25.882039] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
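The trace above shows nvmfappstart launching nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then blocking until the app listens on its RPC socket. A minimal sketch of that launch-and-wait pattern, using the binary path and flags visible in the trace; the poll loop is an illustrative stand-in for the real waitforlisten helper, not its actual implementation:

    # Launch the target inside its namespace with the same core mask / log flags as traced above.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!

    # Illustrative wait loop: block until the RPC socket exists, bailing out if the target dies first.
    rpc_sock=/var/tmp/spdk.sock
    while ! [ -S "$rpc_sock" ]; do
        kill -0 "$nvmfpid" || exit 1   # target exited before it started listening
        sleep 0.1
    done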
00:29:59.994 [2024-12-16 12:51:25.882099] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:29:59.994 [2024-12-16 12:51:25.882209] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:29:59.994 [2024-12-16 12:51:25.882315] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:59.995 [2024-12-16 12:51:25.882316] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:29:59.995 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:59.995 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:29:59.995 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:29:59.995 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:59.995 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:59.995 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:59.995 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:59.995 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.995 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:59.995 [2024-12-16 12:51:26.023675] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:59.995 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.995 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:59.995 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:59.995 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:59.995 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:59.995 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:59.995 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:59.995 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:59.995 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:59.995 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:59.995 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:59.995 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:59.995 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:29:59.995 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:59.995 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:59.995 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:00.253 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:00.253 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:00.253 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:00.254 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:00.254 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:00.254 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:00.254 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:00.254 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:00.254 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:00.254 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:00.254 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:30:00.254 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.254 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:00.254 Malloc1 00:30:00.254 [2024-12-16 12:51:26.118954] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:00.254 Malloc2 00:30:00.254 Malloc3 00:30:00.254 Malloc4 00:30:00.254 Malloc5 00:30:00.254 Malloc6 00:30:00.513 Malloc7 00:30:00.513 Malloc8 00:30:00.513 Malloc9 00:30:00.513 Malloc10 00:30:00.513 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.513 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:30:00.513 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:00.513 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:00.513 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=474713 00:30:00.513 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 474713 /var/tmp/bdevperf.sock 00:30:00.513 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 474713 ']' 00:30:00.513 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:00.513 12:51:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:00.513 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:00.513 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:00.513 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:00.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:00.513 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # config=() 00:30:00.513 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:00.513 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # local subsystem config 00:30:00.513 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:00.513 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:00.513 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:00.513 { 00:30:00.513 "params": { 00:30:00.513 "name": "Nvme$subsystem", 00:30:00.513 "trtype": "$TEST_TRANSPORT", 00:30:00.513 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:00.513 "adrfam": "ipv4", 00:30:00.513 "trsvcid": "$NVMF_PORT", 00:30:00.513 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:00.513 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:00.513 "hdgst": ${hdgst:-false}, 00:30:00.513 "ddgst": ${ddgst:-false} 00:30:00.513 }, 00:30:00.513 "method": "bdev_nvme_attach_controller" 00:30:00.513 } 00:30:00.513 EOF 00:30:00.513 )") 00:30:00.513 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:30:00.513 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:00.513 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:00.513 { 00:30:00.513 "params": { 00:30:00.513 "name": "Nvme$subsystem", 00:30:00.513 "trtype": "$TEST_TRANSPORT", 00:30:00.513 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:00.513 "adrfam": "ipv4", 00:30:00.513 "trsvcid": "$NVMF_PORT", 00:30:00.513 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:00.513 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:00.513 "hdgst": ${hdgst:-false}, 00:30:00.513 "ddgst": ${ddgst:-false} 00:30:00.513 }, 00:30:00.513 "method": "bdev_nvme_attach_controller" 00:30:00.513 } 00:30:00.513 EOF 00:30:00.513 )") 00:30:00.513 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:30:00.513 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:00.513 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:00.513 { 00:30:00.513 "params": { 00:30:00.513 
"name": "Nvme$subsystem", 00:30:00.513 "trtype": "$TEST_TRANSPORT", 00:30:00.513 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:00.513 "adrfam": "ipv4", 00:30:00.513 "trsvcid": "$NVMF_PORT", 00:30:00.513 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:00.513 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:00.513 "hdgst": ${hdgst:-false}, 00:30:00.513 "ddgst": ${ddgst:-false} 00:30:00.513 }, 00:30:00.513 "method": "bdev_nvme_attach_controller" 00:30:00.513 } 00:30:00.513 EOF 00:30:00.513 )") 00:30:00.513 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:30:00.513 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:00.513 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:00.513 { 00:30:00.513 "params": { 00:30:00.513 "name": "Nvme$subsystem", 00:30:00.513 "trtype": "$TEST_TRANSPORT", 00:30:00.513 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:00.513 "adrfam": "ipv4", 00:30:00.513 "trsvcid": "$NVMF_PORT", 00:30:00.513 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:00.513 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:00.513 "hdgst": ${hdgst:-false}, 00:30:00.513 "ddgst": ${ddgst:-false} 00:30:00.513 }, 00:30:00.513 "method": "bdev_nvme_attach_controller" 00:30:00.513 } 00:30:00.513 EOF 00:30:00.513 )") 00:30:00.514 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:30:00.514 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:00.514 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:00.514 { 00:30:00.514 "params": { 00:30:00.514 "name": "Nvme$subsystem", 00:30:00.514 "trtype": "$TEST_TRANSPORT", 00:30:00.514 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:00.514 "adrfam": "ipv4", 00:30:00.514 "trsvcid": "$NVMF_PORT", 00:30:00.514 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:00.514 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:00.514 "hdgst": ${hdgst:-false}, 00:30:00.514 "ddgst": ${ddgst:-false} 00:30:00.514 }, 00:30:00.514 "method": "bdev_nvme_attach_controller" 00:30:00.514 } 00:30:00.514 EOF 00:30:00.514 )") 00:30:00.514 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:30:00.776 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:00.776 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:00.776 { 00:30:00.776 "params": { 00:30:00.776 "name": "Nvme$subsystem", 00:30:00.776 "trtype": "$TEST_TRANSPORT", 00:30:00.776 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:00.776 "adrfam": "ipv4", 00:30:00.776 "trsvcid": "$NVMF_PORT", 00:30:00.776 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:00.776 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:00.776 "hdgst": ${hdgst:-false}, 00:30:00.776 "ddgst": ${ddgst:-false} 00:30:00.776 }, 00:30:00.776 "method": "bdev_nvme_attach_controller" 00:30:00.776 } 00:30:00.776 EOF 00:30:00.776 )") 00:30:00.776 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:30:00.776 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in 
"${@:-1}" 00:30:00.776 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:00.776 { 00:30:00.776 "params": { 00:30:00.776 "name": "Nvme$subsystem", 00:30:00.776 "trtype": "$TEST_TRANSPORT", 00:30:00.776 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:00.776 "adrfam": "ipv4", 00:30:00.776 "trsvcid": "$NVMF_PORT", 00:30:00.776 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:00.776 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:00.776 "hdgst": ${hdgst:-false}, 00:30:00.777 "ddgst": ${ddgst:-false} 00:30:00.777 }, 00:30:00.777 "method": "bdev_nvme_attach_controller" 00:30:00.777 } 00:30:00.777 EOF 00:30:00.777 )") 00:30:00.777 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:30:00.777 [2024-12-16 12:51:26.589800] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:30:00.777 [2024-12-16 12:51:26.589847] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid474713 ] 00:30:00.777 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:00.777 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:00.777 { 00:30:00.777 "params": { 00:30:00.777 "name": "Nvme$subsystem", 00:30:00.777 "trtype": "$TEST_TRANSPORT", 00:30:00.777 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:00.777 "adrfam": "ipv4", 00:30:00.777 "trsvcid": "$NVMF_PORT", 00:30:00.777 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:00.777 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:00.777 "hdgst": ${hdgst:-false}, 00:30:00.777 "ddgst": ${ddgst:-false} 00:30:00.777 }, 00:30:00.777 "method": "bdev_nvme_attach_controller" 00:30:00.777 } 00:30:00.777 EOF 00:30:00.777 )") 00:30:00.777 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:30:00.777 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:00.777 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:00.777 { 00:30:00.777 "params": { 00:30:00.777 "name": "Nvme$subsystem", 00:30:00.777 "trtype": "$TEST_TRANSPORT", 00:30:00.777 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:00.777 "adrfam": "ipv4", 00:30:00.777 "trsvcid": "$NVMF_PORT", 00:30:00.777 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:00.777 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:00.777 "hdgst": ${hdgst:-false}, 00:30:00.777 "ddgst": ${ddgst:-false} 00:30:00.777 }, 00:30:00.777 "method": "bdev_nvme_attach_controller" 00:30:00.777 } 00:30:00.777 EOF 00:30:00.777 )") 00:30:00.777 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:30:00.777 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:00.777 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:00.777 { 00:30:00.777 "params": { 00:30:00.777 "name": "Nvme$subsystem", 00:30:00.777 "trtype": "$TEST_TRANSPORT", 00:30:00.777 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:00.777 
"adrfam": "ipv4", 00:30:00.777 "trsvcid": "$NVMF_PORT", 00:30:00.777 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:00.777 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:00.777 "hdgst": ${hdgst:-false}, 00:30:00.777 "ddgst": ${ddgst:-false} 00:30:00.777 }, 00:30:00.777 "method": "bdev_nvme_attach_controller" 00:30:00.777 } 00:30:00.777 EOF 00:30:00.777 )") 00:30:00.777 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:30:00.777 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # jq . 00:30:00.777 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@581 -- # IFS=, 00:30:00.777 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:30:00.777 "params": { 00:30:00.777 "name": "Nvme1", 00:30:00.777 "trtype": "tcp", 00:30:00.777 "traddr": "10.0.0.2", 00:30:00.777 "adrfam": "ipv4", 00:30:00.777 "trsvcid": "4420", 00:30:00.777 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:00.777 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:00.777 "hdgst": false, 00:30:00.777 "ddgst": false 00:30:00.777 }, 00:30:00.777 "method": "bdev_nvme_attach_controller" 00:30:00.777 },{ 00:30:00.777 "params": { 00:30:00.777 "name": "Nvme2", 00:30:00.777 "trtype": "tcp", 00:30:00.777 "traddr": "10.0.0.2", 00:30:00.777 "adrfam": "ipv4", 00:30:00.777 "trsvcid": "4420", 00:30:00.777 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:00.777 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:00.777 "hdgst": false, 00:30:00.777 "ddgst": false 00:30:00.777 }, 00:30:00.777 "method": "bdev_nvme_attach_controller" 00:30:00.777 },{ 00:30:00.777 "params": { 00:30:00.777 "name": "Nvme3", 00:30:00.777 "trtype": "tcp", 00:30:00.777 "traddr": "10.0.0.2", 00:30:00.777 "adrfam": "ipv4", 00:30:00.777 "trsvcid": "4420", 00:30:00.777 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:00.777 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:00.777 "hdgst": false, 00:30:00.777 "ddgst": false 00:30:00.777 }, 00:30:00.777 "method": "bdev_nvme_attach_controller" 00:30:00.777 },{ 00:30:00.777 "params": { 00:30:00.777 "name": "Nvme4", 00:30:00.777 "trtype": "tcp", 00:30:00.777 "traddr": "10.0.0.2", 00:30:00.777 "adrfam": "ipv4", 00:30:00.777 "trsvcid": "4420", 00:30:00.778 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:00.778 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:00.778 "hdgst": false, 00:30:00.778 "ddgst": false 00:30:00.778 }, 00:30:00.778 "method": "bdev_nvme_attach_controller" 00:30:00.778 },{ 00:30:00.778 "params": { 00:30:00.778 "name": "Nvme5", 00:30:00.778 "trtype": "tcp", 00:30:00.778 "traddr": "10.0.0.2", 00:30:00.778 "adrfam": "ipv4", 00:30:00.778 "trsvcid": "4420", 00:30:00.778 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:00.778 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:00.778 "hdgst": false, 00:30:00.778 "ddgst": false 00:30:00.778 }, 00:30:00.778 "method": "bdev_nvme_attach_controller" 00:30:00.778 },{ 00:30:00.778 "params": { 00:30:00.778 "name": "Nvme6", 00:30:00.778 "trtype": "tcp", 00:30:00.778 "traddr": "10.0.0.2", 00:30:00.778 "adrfam": "ipv4", 00:30:00.778 "trsvcid": "4420", 00:30:00.778 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:00.778 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:00.778 "hdgst": false, 00:30:00.778 "ddgst": false 00:30:00.778 }, 00:30:00.778 "method": "bdev_nvme_attach_controller" 00:30:00.778 },{ 00:30:00.778 "params": { 00:30:00.778 "name": "Nvme7", 00:30:00.778 "trtype": "tcp", 00:30:00.778 "traddr": "10.0.0.2", 
00:30:00.778 "adrfam": "ipv4", 00:30:00.778 "trsvcid": "4420", 00:30:00.778 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:00.778 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:00.778 "hdgst": false, 00:30:00.778 "ddgst": false 00:30:00.778 }, 00:30:00.778 "method": "bdev_nvme_attach_controller" 00:30:00.778 },{ 00:30:00.778 "params": { 00:30:00.778 "name": "Nvme8", 00:30:00.778 "trtype": "tcp", 00:30:00.778 "traddr": "10.0.0.2", 00:30:00.778 "adrfam": "ipv4", 00:30:00.778 "trsvcid": "4420", 00:30:00.778 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:00.778 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:00.778 "hdgst": false, 00:30:00.778 "ddgst": false 00:30:00.778 }, 00:30:00.778 "method": "bdev_nvme_attach_controller" 00:30:00.778 },{ 00:30:00.778 "params": { 00:30:00.778 "name": "Nvme9", 00:30:00.778 "trtype": "tcp", 00:30:00.778 "traddr": "10.0.0.2", 00:30:00.778 "adrfam": "ipv4", 00:30:00.778 "trsvcid": "4420", 00:30:00.778 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:00.778 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:00.778 "hdgst": false, 00:30:00.778 "ddgst": false 00:30:00.778 }, 00:30:00.778 "method": "bdev_nvme_attach_controller" 00:30:00.778 },{ 00:30:00.778 "params": { 00:30:00.778 "name": "Nvme10", 00:30:00.778 "trtype": "tcp", 00:30:00.778 "traddr": "10.0.0.2", 00:30:00.778 "adrfam": "ipv4", 00:30:00.778 "trsvcid": "4420", 00:30:00.778 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:00.778 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:00.778 "hdgst": false, 00:30:00.778 "ddgst": false 00:30:00.778 }, 00:30:00.778 "method": "bdev_nvme_attach_controller" 00:30:00.778 }' 00:30:00.778 [2024-12-16 12:51:26.661331] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:00.778 [2024-12-16 12:51:26.699720] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:02.688 Running I/O for 10 seconds... 
00:30:02.688 12:51:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:02.688 12:51:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:30:02.688 12:51:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:02.688 12:51:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:02.688 12:51:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:02.688 12:51:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:02.688 12:51:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:30:02.688 12:51:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:02.688 12:51:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:30:02.688 12:51:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:30:02.688 12:51:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:30:02.688 12:51:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:30:02.688 12:51:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:30:02.688 12:51:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:02.688 12:51:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:02.688 12:51:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:30:02.688 12:51:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:02.688 12:51:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:02.688 12:51:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:30:02.688 12:51:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:30:02.688 12:51:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:30:02.948 12:51:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:30:02.948 12:51:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:30:02.948 12:51:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:02.948 12:51:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:30:02.948 12:51:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:02.948 12:51:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:02.948 12:51:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:02.948 12:51:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:30:02.948 12:51:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:30:02.948 12:51:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:30:02.948 12:51:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:30:02.948 12:51:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:30:02.948 12:51:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 474713 00:30:02.948 12:51:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 474713 ']' 00:30:02.948 12:51:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 474713 00:30:02.948 12:51:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:30:02.948 12:51:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:02.948 12:51:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 474713 00:30:02.948 12:51:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:02.948 12:51:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:02.948 12:51:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 474713' 00:30:02.948 killing process with pid 474713 00:30:02.948 12:51:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 474713 00:30:02.948 12:51:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 474713 00:30:02.948 Received shutdown signal, test time was about 0.656605 seconds 00:30:02.948 00:30:02.948 Latency(us) 00:30:02.948 [2024-12-16T11:51:29.015Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:02.948 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:02.948 Verification LBA range: start 0x0 length 0x400 00:30:02.948 Nvme1n1 : 0.64 301.34 18.83 0.00 0.00 208842.69 15978.30 205720.62 00:30:02.948 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:02.948 Verification LBA range: start 0x0 length 0x400 00:30:02.948 Nvme2n1 : 0.66 292.70 18.29 0.00 0.00 209553.64 17975.59 214708.42 00:30:02.948 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:02.948 Verification LBA range: start 0x0 length 0x400 00:30:02.948 Nvme3n1 : 0.63 305.39 19.09 0.00 0.00 195766.86 14917.24 197731.47 00:30:02.948 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:02.948 Verification LBA range: start 0x0 length 0x400 00:30:02.948 Nvme4n1 : 0.63 302.49 18.91 0.00 0.00 192973.45 15915.89 199728.76 
00:30:02.948 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:02.948 Verification LBA range: start 0x0 length 0x400 00:30:02.948 Nvme5n1 : 0.65 301.70 18.86 0.00 0.00 188054.41 3245.59 207717.91 00:30:02.948 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:02.948 Verification LBA range: start 0x0 length 0x400 00:30:02.948 Nvme6n1 : 0.64 299.41 18.71 0.00 0.00 184971.13 15229.32 209715.20 00:30:02.948 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:02.948 Verification LBA range: start 0x0 length 0x400 00:30:02.948 Nvme7n1 : 0.65 296.21 18.51 0.00 0.00 182245.83 13544.11 206719.27 00:30:02.948 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:02.948 Verification LBA range: start 0x0 length 0x400 00:30:02.948 Nvme8n1 : 0.65 293.46 18.34 0.00 0.00 179051.76 13481.69 214708.42 00:30:02.948 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:02.948 Verification LBA range: start 0x0 length 0x400 00:30:02.948 Nvme9n1 : 0.62 205.53 12.85 0.00 0.00 245520.09 33953.89 215707.06 00:30:02.948 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:02.948 Verification LBA range: start 0x0 length 0x400 00:30:02.948 Nvme10n1 : 0.63 203.29 12.71 0.00 0.00 239156.18 17850.76 231685.36 00:30:02.948 [2024-12-16T11:51:29.015Z] =================================================================================================================== 00:30:02.948 [2024-12-16T11:51:29.015Z] Total : 2801.51 175.09 0.00 0.00 199750.03 3245.59 231685.36 00:30:03.208 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:30:04.146 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 474573 00:30:04.146 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:30:04.146 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:30:04.146 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:04.146 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:04.146 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:30:04.146 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # nvmfcleanup 00:30:04.146 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:30:04.146 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:04.146 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:30:04.146 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:04.146 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:04.146 rmmod nvme_tcp 00:30:04.405 rmmod nvme_fabrics 00:30:04.405 rmmod nvme_keyring 00:30:04.405 12:51:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:04.405 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:30:04.405 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:30:04.405 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@513 -- # '[' -n 474573 ']' 00:30:04.405 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@514 -- # killprocess 474573 00:30:04.405 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 474573 ']' 00:30:04.405 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 474573 00:30:04.405 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:30:04.405 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:04.405 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 474573 00:30:04.405 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:04.405 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:04.405 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 474573' 00:30:04.405 killing process with pid 474573 00:30:04.405 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 474573 00:30:04.405 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 474573 00:30:04.665 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:30:04.665 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:30:04.665 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:30:04.665 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:30:04.665 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@787 -- # iptables-save 00:30:04.665 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:30:04.665 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@787 -- # iptables-restore 00:30:04.665 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:04.665 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:04.665 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:04.665 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:04.665 12:51:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:07.203 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:07.203 00:30:07.203 real 0m7.453s 00:30:07.203 user 0m21.645s 00:30:07.203 sys 0m1.279s 00:30:07.203 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:07.203 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:07.203 ************************************ 00:30:07.203 END TEST nvmf_shutdown_tc2 00:30:07.203 ************************************ 00:30:07.203 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@171 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:30:07.203 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:07.203 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:07.203 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:07.203 ************************************ 00:30:07.203 START TEST nvmf_shutdown_tc3 00:30:07.203 ************************************ 00:30:07.203 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:30:07.203 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:30:07.203 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:30:07.203 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:30:07.203 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:07.203 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@472 -- # prepare_net_devs 00:30:07.203 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:30:07.203 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:30:07.203 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:07.203 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:07.203 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:07.203 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:30:07.203 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:30:07.203 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:30:07.203 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:07.203 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:07.203 12:51:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:30:07.203 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:07.203 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:07.203 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:07.203 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:07.203 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:07.203 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:30:07.203 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:07.203 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:30:07.203 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:30:07.203 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:30:07.203 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:30:07.203 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:30:07.203 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:30:07.203 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:07.203 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:07.203 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:07.203 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:07.203 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:07.203 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:07.203 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:07.203 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:07.203 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:30:07.204 12:51:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:07.204 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:07.204 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:07.204 
12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:07.204 Found net devices under 0000:af:00.0: cvl_0_0 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:07.204 Found net devices under 0000:af:00.1: cvl_0_1 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # is_hw=yes 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:07.204 12:51:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:07.204 12:51:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:07.204 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:07.204 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:07.204 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:07.204 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:07.204 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:07.204 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.318 ms 00:30:07.204 00:30:07.204 --- 10.0.0.2 ping statistics --- 00:30:07.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:07.204 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:30:07.204 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:07.204 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:07.204 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:30:07.204 00:30:07.204 --- 10.0.0.1 ping statistics --- 00:30:07.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:07.204 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:30:07.204 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:07.204 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # return 0 00:30:07.204 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:30:07.204 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:07.204 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:30:07.204 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:30:07.204 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:07.204 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:30:07.204 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:30:07.204 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:30:07.204 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:30:07.204 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:07.204 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:07.204 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@505 -- # nvmfpid=475853 00:30:07.204 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@506 -- # waitforlisten 475853 00:30:07.204 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:07.204 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 475853 ']' 00:30:07.204 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:07.205 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:07.205 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:07.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
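The nvmf_tcp_init trace above (nvmf/common.sh@250–291) builds the two-endpoint NVMe/TCP topology out of one dual-port NIC: one port is pushed into a fresh network namespace to act as the target, the peer port stays in the root namespace as the initiator, and a tagged iptables rule plus two pings confirm the path before the target starts. A minimal standalone sketch of that sequence, with the interface names and 10.0.0.0/24 addressing taken from this log (treat them as placeholders for your own ports):

```bash
#!/usr/bin/env bash
# Sketch of the namespace setup traced above; cvl_0_0/cvl_0_1 and the
# addresses come from this log and will differ on other machines.
set -euo pipefail

TARGET_IF=cvl_0_0      # port that moves into the target namespace
INITIATOR_IF=cvl_0_1   # peer port left in the root namespace
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# The comment tag is what lets the teardown path strip this rule again
# with: iptables-save | grep -v SPDK_NVMF | iptables-restore
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
	-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Verify both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

After this, every nvmf_tgt invocation is wrapped in `ip netns exec cvl_0_0_ns_spdk` (via NVMF_APP/NVMF_TARGET_NS_CMD), which is why the target here (pid 475853) comes up listening on 10.0.0.2 while bdevperf later connects from the root namespace.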
00:30:07.205 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:07.205 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:07.205 [2024-12-16 12:51:33.174990] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:30:07.205 [2024-12-16 12:51:33.175038] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:07.205 [2024-12-16 12:51:33.244579] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:07.464 [2024-12-16 12:51:33.284227] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:07.464 [2024-12-16 12:51:33.284267] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:07.464 [2024-12-16 12:51:33.284275] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:07.464 [2024-12-16 12:51:33.284281] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:07.464 [2024-12-16 12:51:33.284287] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:07.464 [2024-12-16 12:51:33.284407] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:30:07.464 [2024-12-16 12:51:33.284524] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:30:07.464 [2024-12-16 12:51:33.284630] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:30:07.464 [2024-12-16 12:51:33.284632] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:30:07.464 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:07.464 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:30:07.464 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:30:07.464 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:07.464 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:07.464 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:07.464 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:07.464 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.464 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:07.464 [2024-12-16 12:51:33.443255] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:07.464 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.464 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:30:07.464 12:51:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:30:07.464 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:07.464 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:07.465 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:07.465 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:07.465 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:07.465 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:07.465 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:07.465 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:07.465 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:07.465 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:07.465 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:07.465 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:07.465 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:07.465 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:07.465 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:07.465 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:07.465 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:07.465 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:07.465 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:07.465 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:07.465 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:07.465 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:07.465 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:07.465 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:30:07.465 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.465 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:07.465 Malloc1 
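The ten `# cat` traces from target/shutdown.sh@29 above append one block of target RPCs per subsystem to rpcs.txt; the single `rpc_cmd` at shutdown.sh@36 then replays the whole file in one JSON-RPC session, which is what produces the Malloc1..Malloc10 bdevs and the TCP listener seen here. A hedged reconstruction of each appended block (the method names are real SPDK RPCs, but the malloc geometry, flags, and `$testdir` are assumptions, since the heredoc bodies are not expanded in this trace):

```bash
# Sketch of the create_subsystems loop; the 64 MiB / 512 B malloc size
# is assumed, not read from this log.
for i in "${num_subsystems[@]}"; do
	cat >> "$testdir/rpcs.txt" <<EOF
bdev_malloc_create 64 512 -b Malloc$i
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
rpc_cmd < "$testdir/rpcs.txt"
```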
00:30:07.724 [2024-12-16 12:51:33.542912] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:07.724 Malloc2 00:30:07.724 Malloc3 00:30:07.724 Malloc4 00:30:07.724 Malloc5 00:30:07.724 Malloc6 00:30:07.724 Malloc7 00:30:07.983 Malloc8 00:30:07.984 Malloc9 00:30:07.984 Malloc10 00:30:07.984 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.984 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:30:07.984 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:07.984 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:07.984 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=476124 00:30:07.984 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 476124 /var/tmp/bdevperf.sock 00:30:07.984 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 476124 ']' 00:30:07.984 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:07.984 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:07.984 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:07.984 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:07.984 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:07.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
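With the target populated, shutdown.sh@125 starts bdevperf against it; the `--json /dev/fd/63` in the trace is bash process substitution feeding the output of gen_nvmf_target_json (expanded over the next entries) straight into the app. A sketch of that launch, assuming `$rootdir` points at the SPDK checkout:

```bash
# -q 64: queue depth, -o 65536: 64 KiB I/Os, -w verify: read-back verify,
# -t 10: run for 10 s; -r gives bdevperf its own RPC socket, separate from
# the target's /var/tmp/spdk.sock. /dev/fd/63 in the log is the <(...) here.
"$rootdir/build/examples/bdevperf" -r /var/tmp/bdevperf.sock \
	--json <(gen_nvmf_target_json "${num_subsystems[@]}") \
	-q 64 -o 65536 -w verify -t 10 &
perfpid=$!
waitforlisten "$perfpid" /var/tmp/bdevperf.sock
```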
00:30:07.984 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # config=() 00:30:07.984 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:07.984 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # local subsystem config 00:30:07.984 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:07.984 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:07.984 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:07.984 { 00:30:07.984 "params": { 00:30:07.984 "name": "Nvme$subsystem", 00:30:07.984 "trtype": "$TEST_TRANSPORT", 00:30:07.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:07.984 "adrfam": "ipv4", 00:30:07.984 "trsvcid": "$NVMF_PORT", 00:30:07.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:07.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:07.984 "hdgst": ${hdgst:-false}, 00:30:07.984 "ddgst": ${ddgst:-false} 00:30:07.984 }, 00:30:07.984 "method": "bdev_nvme_attach_controller" 00:30:07.984 } 00:30:07.984 EOF 00:30:07.984 )") 00:30:07.984 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:30:07.984 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:07.984 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:07.984 { 00:30:07.984 "params": { 00:30:07.984 "name": "Nvme$subsystem", 00:30:07.984 "trtype": "$TEST_TRANSPORT", 00:30:07.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:07.984 "adrfam": "ipv4", 00:30:07.984 "trsvcid": "$NVMF_PORT", 00:30:07.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:07.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:07.984 "hdgst": ${hdgst:-false}, 00:30:07.984 "ddgst": ${ddgst:-false} 00:30:07.984 }, 00:30:07.984 "method": "bdev_nvme_attach_controller" 00:30:07.984 } 00:30:07.984 EOF 00:30:07.984 )") 00:30:07.984 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:30:07.984 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:07.984 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:07.984 { 00:30:07.984 "params": { 00:30:07.984 "name": "Nvme$subsystem", 00:30:07.984 "trtype": "$TEST_TRANSPORT", 00:30:07.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:07.984 "adrfam": "ipv4", 00:30:07.984 "trsvcid": "$NVMF_PORT", 00:30:07.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:07.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:07.984 "hdgst": ${hdgst:-false}, 00:30:07.984 "ddgst": ${ddgst:-false} 00:30:07.984 }, 00:30:07.984 "method": "bdev_nvme_attach_controller" 00:30:07.984 } 00:30:07.984 EOF 00:30:07.984 )") 00:30:07.984 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:30:07.984 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:07.984 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- 
# config+=("$(cat <<-EOF 00:30:07.984 { 00:30:07.984 "params": { 00:30:07.984 "name": "Nvme$subsystem", 00:30:07.984 "trtype": "$TEST_TRANSPORT", 00:30:07.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:07.984 "adrfam": "ipv4", 00:30:07.984 "trsvcid": "$NVMF_PORT", 00:30:07.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:07.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:07.984 "hdgst": ${hdgst:-false}, 00:30:07.984 "ddgst": ${ddgst:-false} 00:30:07.984 }, 00:30:07.984 "method": "bdev_nvme_attach_controller" 00:30:07.984 } 00:30:07.984 EOF 00:30:07.984 )") 00:30:07.984 12:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:30:07.984 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:07.984 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:07.984 { 00:30:07.984 "params": { 00:30:07.984 "name": "Nvme$subsystem", 00:30:07.984 "trtype": "$TEST_TRANSPORT", 00:30:07.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:07.984 "adrfam": "ipv4", 00:30:07.984 "trsvcid": "$NVMF_PORT", 00:30:07.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:07.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:07.984 "hdgst": ${hdgst:-false}, 00:30:07.984 "ddgst": ${ddgst:-false} 00:30:07.984 }, 00:30:07.984 "method": "bdev_nvme_attach_controller" 00:30:07.984 } 00:30:07.984 EOF 00:30:07.984 )") 00:30:07.984 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:30:07.984 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:07.984 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:07.984 { 00:30:07.984 "params": { 00:30:07.984 "name": "Nvme$subsystem", 00:30:07.984 "trtype": "$TEST_TRANSPORT", 00:30:07.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:07.984 "adrfam": "ipv4", 00:30:07.984 "trsvcid": "$NVMF_PORT", 00:30:07.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:07.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:07.984 "hdgst": ${hdgst:-false}, 00:30:07.984 "ddgst": ${ddgst:-false} 00:30:07.984 }, 00:30:07.984 "method": "bdev_nvme_attach_controller" 00:30:07.984 } 00:30:07.984 EOF 00:30:07.984 )") 00:30:07.984 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:30:07.984 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:07.984 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:07.984 { 00:30:07.984 "params": { 00:30:07.984 "name": "Nvme$subsystem", 00:30:07.984 "trtype": "$TEST_TRANSPORT", 00:30:07.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:07.984 "adrfam": "ipv4", 00:30:07.984 "trsvcid": "$NVMF_PORT", 00:30:07.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:07.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:07.984 "hdgst": ${hdgst:-false}, 00:30:07.984 "ddgst": ${ddgst:-false} 00:30:07.984 }, 00:30:07.984 "method": "bdev_nvme_attach_controller" 00:30:07.984 } 00:30:07.984 EOF 00:30:07.984 )") 00:30:07.984 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:30:07.984 [2024-12-16 12:51:34.019676] Starting SPDK 
v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:30:07.984 [2024-12-16 12:51:34.019722] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid476124 ] 00:30:07.984 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:07.984 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:07.984 { 00:30:07.984 "params": { 00:30:07.984 "name": "Nvme$subsystem", 00:30:07.984 "trtype": "$TEST_TRANSPORT", 00:30:07.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:07.984 "adrfam": "ipv4", 00:30:07.984 "trsvcid": "$NVMF_PORT", 00:30:07.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:07.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:07.984 "hdgst": ${hdgst:-false}, 00:30:07.984 "ddgst": ${ddgst:-false} 00:30:07.984 }, 00:30:07.984 "method": "bdev_nvme_attach_controller" 00:30:07.984 } 00:30:07.984 EOF 00:30:07.984 )") 00:30:07.984 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:30:07.984 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:07.984 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:07.984 { 00:30:07.984 "params": { 00:30:07.984 "name": "Nvme$subsystem", 00:30:07.984 "trtype": "$TEST_TRANSPORT", 00:30:07.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:07.984 "adrfam": "ipv4", 00:30:07.984 "trsvcid": "$NVMF_PORT", 00:30:07.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:07.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:07.984 "hdgst": ${hdgst:-false}, 00:30:07.985 "ddgst": ${ddgst:-false} 00:30:07.985 }, 00:30:07.985 "method": "bdev_nvme_attach_controller" 00:30:07.985 } 00:30:07.985 EOF 00:30:07.985 )") 00:30:07.985 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:30:07.985 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:07.985 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:07.985 { 00:30:07.985 "params": { 00:30:07.985 "name": "Nvme$subsystem", 00:30:07.985 "trtype": "$TEST_TRANSPORT", 00:30:07.985 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:07.985 "adrfam": "ipv4", 00:30:07.985 "trsvcid": "$NVMF_PORT", 00:30:07.985 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:07.985 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:07.985 "hdgst": ${hdgst:-false}, 00:30:07.985 "ddgst": ${ddgst:-false} 00:30:07.985 }, 00:30:07.985 "method": "bdev_nvme_attach_controller" 00:30:07.985 } 00:30:07.985 EOF 00:30:07.985 )") 00:30:07.985 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:30:07.985 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # jq . 
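The `jq .` at nvmf/common.sh@580 is the tail of gen_nvmf_target_json: the per-controller stanzas accumulated in `config[]` are comma-joined and the result is validated and pretty-printed as the bdevperf config that follows. The join relies on `"${config[*]}"` expanding with the first character of IFS as the separator, which is why `IFS=,` is set immediately before the printf in the next traced lines. A reconstruction of that step (the outer "subsystems"/"bdev" wrapper comes from nvmf/common.sh and is not visible in this trace):

```bash
# Join the stanzas on "," and let jq both validate and pretty-print the
# final bdevperf JSON config.
jq . <<JSON
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        $(IFS=","; printf '%s\n' "${config[*]}")
      ]
    }
  ]
}
JSON
```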
00:30:07.985 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@581 -- # IFS=, 00:30:07.985 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:30:07.985 "params": { 00:30:07.985 "name": "Nvme1", 00:30:07.985 "trtype": "tcp", 00:30:07.985 "traddr": "10.0.0.2", 00:30:07.985 "adrfam": "ipv4", 00:30:07.985 "trsvcid": "4420", 00:30:07.985 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:07.985 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:07.985 "hdgst": false, 00:30:07.985 "ddgst": false 00:30:07.985 }, 00:30:07.985 "method": "bdev_nvme_attach_controller" 00:30:07.985 },{ 00:30:07.985 "params": { 00:30:07.985 "name": "Nvme2", 00:30:07.985 "trtype": "tcp", 00:30:07.985 "traddr": "10.0.0.2", 00:30:07.985 "adrfam": "ipv4", 00:30:07.985 "trsvcid": "4420", 00:30:07.985 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:07.985 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:07.985 "hdgst": false, 00:30:07.985 "ddgst": false 00:30:07.985 }, 00:30:07.985 "method": "bdev_nvme_attach_controller" 00:30:07.985 },{ 00:30:07.985 "params": { 00:30:07.985 "name": "Nvme3", 00:30:07.985 "trtype": "tcp", 00:30:07.985 "traddr": "10.0.0.2", 00:30:07.985 "adrfam": "ipv4", 00:30:07.985 "trsvcid": "4420", 00:30:07.985 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:07.985 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:07.985 "hdgst": false, 00:30:07.985 "ddgst": false 00:30:07.985 }, 00:30:07.985 "method": "bdev_nvme_attach_controller" 00:30:07.985 },{ 00:30:07.985 "params": { 00:30:07.985 "name": "Nvme4", 00:30:07.985 "trtype": "tcp", 00:30:07.985 "traddr": "10.0.0.2", 00:30:07.985 "adrfam": "ipv4", 00:30:07.985 "trsvcid": "4420", 00:30:07.985 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:07.985 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:07.985 "hdgst": false, 00:30:07.985 "ddgst": false 00:30:07.985 }, 00:30:07.985 "method": "bdev_nvme_attach_controller" 00:30:07.985 },{ 00:30:07.985 "params": { 00:30:07.985 "name": "Nvme5", 00:30:07.985 "trtype": "tcp", 00:30:07.985 "traddr": "10.0.0.2", 00:30:07.985 "adrfam": "ipv4", 00:30:07.985 "trsvcid": "4420", 00:30:07.985 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:07.985 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:07.985 "hdgst": false, 00:30:07.985 "ddgst": false 00:30:07.985 }, 00:30:07.985 "method": "bdev_nvme_attach_controller" 00:30:07.985 },{ 00:30:07.985 "params": { 00:30:07.985 "name": "Nvme6", 00:30:07.985 "trtype": "tcp", 00:30:07.985 "traddr": "10.0.0.2", 00:30:07.985 "adrfam": "ipv4", 00:30:07.985 "trsvcid": "4420", 00:30:07.985 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:07.985 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:07.985 "hdgst": false, 00:30:07.985 "ddgst": false 00:30:07.985 }, 00:30:07.985 "method": "bdev_nvme_attach_controller" 00:30:07.985 },{ 00:30:07.985 "params": { 00:30:07.985 "name": "Nvme7", 00:30:07.985 "trtype": "tcp", 00:30:07.985 "traddr": "10.0.0.2", 00:30:07.985 "adrfam": "ipv4", 00:30:07.985 "trsvcid": "4420", 00:30:07.985 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:07.985 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:07.985 "hdgst": false, 00:30:07.985 "ddgst": false 00:30:07.985 }, 00:30:07.985 "method": "bdev_nvme_attach_controller" 00:30:07.985 },{ 00:30:07.985 "params": { 00:30:07.985 "name": "Nvme8", 00:30:07.985 "trtype": "tcp", 00:30:07.985 "traddr": "10.0.0.2", 00:30:07.985 "adrfam": "ipv4", 00:30:07.985 "trsvcid": "4420", 00:30:07.985 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:07.985 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:30:07.985 "hdgst": false, 00:30:07.985 "ddgst": false 00:30:07.985 }, 00:30:07.985 "method": "bdev_nvme_attach_controller" 00:30:07.985 },{ 00:30:07.985 "params": { 00:30:07.985 "name": "Nvme9", 00:30:07.985 "trtype": "tcp", 00:30:07.985 "traddr": "10.0.0.2", 00:30:07.985 "adrfam": "ipv4", 00:30:07.985 "trsvcid": "4420", 00:30:07.985 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:07.985 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:07.985 "hdgst": false, 00:30:07.985 "ddgst": false 00:30:07.985 }, 00:30:07.985 "method": "bdev_nvme_attach_controller" 00:30:07.985 },{ 00:30:07.985 "params": { 00:30:07.985 "name": "Nvme10", 00:30:07.985 "trtype": "tcp", 00:30:07.985 "traddr": "10.0.0.2", 00:30:07.985 "adrfam": "ipv4", 00:30:07.985 "trsvcid": "4420", 00:30:07.985 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:07.985 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:07.985 "hdgst": false, 00:30:07.985 "ddgst": false 00:30:07.985 }, 00:30:07.985 "method": "bdev_nvme_attach_controller" 00:30:07.985 }' 00:30:08.244 [2024-12-16 12:51:34.091160] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:08.244 [2024-12-16 12:51:34.129750] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:10.150 Running I/O for 10 seconds... 00:30:10.150 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:10.150 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:30:10.150 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:10.150 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:10.150 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:10.150 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:10.150 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:10.150 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:30:10.150 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:10.150 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:30:10.150 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:30:10.150 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:30:10.150 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:30:10.150 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:30:10.150 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:10.150 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:30:10.150 12:51:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:10.150 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:10.150 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:10.150 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:30:10.150 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:30:10.150 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:30:10.408 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:30:10.408 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:30:10.408 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:10.408 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:30:10.408 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:10.408 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:10.408 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:10.408 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=85 00:30:10.408 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 85 -ge 100 ']' 00:30:10.408 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:30:10.667 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:30:10.667 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:30:10.667 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:10.667 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:30:10.667 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:10.667 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:10.667 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:10.942 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=195 00:30:10.942 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:30:10.942 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:30:10.942 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:30:10.942 12:51:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0
00:30:10.942 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 475853
00:30:10.942 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 475853 ']'
00:30:10.942 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 475853
00:30:10.942 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname
00:30:10.942 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:30:10.942 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 475853
00:30:10.942 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:30:10.942 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:30:10.942 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 475853'
00:30:10.942 killing process with pid 475853
00:30:10.942 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 475853
00:30:10.942 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 475853
00:30:10.942 [2024-12-16 12:51:36.813727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f50d60 is same with the state(6) to be set
[... the same recv-state notice repeated for tqpair=0x1f50d60, timestamps 12:51:36.813810 through 12:51:36.814189 ...]
00:30:10.943 [2024-12-16 12:51:36.816001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f51230 is same with the state(6) to be set
[... repeated for tqpair=0x1f51230 through 12:51:36.816307 ...]
00:30:10.943 [2024-12-16 12:51:36.817573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f51700 is same with the state(6) to be set
[... repeated for tqpair=0x1f51700 through 12:51:36.817936 ...]
00:30:10.944 [2024-12-16 12:51:36.819009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f51bf0 is same with the state(6) to be set
[... repeated for tqpair=0x1f51bf0 through 12:51:36.819459 ...]
00:30:10.945 [2024-12-16 12:51:36.820132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f520c0 is same with the state(6) to be set
[... repeated for tqpair=0x1f520c0 through 12:51:36.820414 ...]
00:30:10.945 [2024-12-16 12:51:36.821204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f525b0 is same with the state(6) to be set
[... repeated for tqpair=0x1f525b0 through 12:51:36.821586 ...]
00:30:10.946 [2024-12-16 12:51:36.822534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f52a80 is same with the state(6) to be set
[... repeated for tqpair=0x1f52a80 through 12:51:36.822892; the capture is cut off mid-entry below ...]
00:30:10.947 [2024-12-16
12:51:36.822898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f52a80 is same with the state(6) to be set 00:30:10.947 [2024-12-16 12:51:36.822904] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f52a80 is same with the state(6) to be set 00:30:10.947 [2024-12-16 12:51:36.822910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f52a80 is same with the state(6) to be set 00:30:10.947 [2024-12-16 12:51:36.823476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:10.947 [2024-12-16 12:51:36.823506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.947 [2024-12-16 12:51:36.823516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:10.947 [2024-12-16 12:51:36.823523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.947 [2024-12-16 12:51:36.823530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:10.947 [2024-12-16 12:51:36.823537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.947 [2024-12-16 12:51:36.823544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:10.947 [2024-12-16 12:51:36.823550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.947 [2024-12-16 12:51:36.823557] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18126e0 is same with the state(6) to be set 00:30:10.947 [2024-12-16 12:51:36.823587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:10.947 [2024-12-16 12:51:36.823595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.947 [2024-12-16 12:51:36.823606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:10.947 [2024-12-16 12:51:36.823613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.947 [2024-12-16 12:51:36.823620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:10.947 [2024-12-16 12:51:36.823627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.947 [2024-12-16 12:51:36.823634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:10.947 [2024-12-16 12:51:36.823640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.947 [2024-12-16 12:51:36.823647] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c336a0 is same 
00:30:10.947 [2024-12-16 12:51:36.823674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f52f50 is same with the state(6) to be set
[... same message repeated for tqpair=0x1f52f50 through 12:51:36.824120 ...]
00:30:10.947 [2024-12-16 12:51:36.823676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:30:10.947 [2024-12-16 12:51:36.823688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.947 [2024-12-16 12:51:36.823697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:30:10.947 [2024-12-16 12:51:36.823706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.947 [2024-12-16 12:51:36.823715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:30:10.947 [2024-12-16 12:51:36.823722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.947 [2024-12-16 12:51:36.823730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:30:10.947 [2024-12-16 12:51:36.823738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.947 [2024-12-16 12:51:36.823747] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8bcd0 is same with the state(6) to be set
00:30:10.947 [2024-12-16 12:51:36.823784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:30:10.947 [2024-12-16 12:51:36.823793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.947 [2024-12-16 12:51:36.823802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:30:10.947 [2024-12-16 12:51:36.823811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.947 [2024-12-16 12:51:36.823820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:30:10.947 [2024-12-16 12:51:36.823827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.947 [2024-12-16 12:51:36.823839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:30:10.948 [2024-12-16 12:51:36.823847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.948 [2024-12-16 12:51:36.823854] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171e610 is same with the state(6) to be set
00:30:10.948 [2024-12-16 12:51:36.823877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:30:10.948 [2024-12-16 12:51:36.823887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.948 [2024-12-16 12:51:36.823895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:30:10.948 [2024-12-16 12:51:36.823903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.948 [2024-12-16 12:51:36.823911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:30:10.948 [2024-12-16 12:51:36.823920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.948 [2024-12-16 12:51:36.823930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:30:10.948 [2024-12-16 12:51:36.823939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.948 [2024-12-16 12:51:36.823946] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6c0d0 is same with the state(6) to be set
00:30:10.948 [2024-12-16 12:51:36.823971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:30:10.948 [2024-12-16 12:51:36.823979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.948 [2024-12-16 12:51:36.823991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:30:10.948 [2024-12-16 12:51:36.823998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.948 [2024-12-16 12:51:36.824007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:30:10.948 [2024-12-16 12:51:36.824014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.948 [2024-12-16 12:51:36.824022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:30:10.948 [2024-12-16 12:51:36.824031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.948 [2024-12-16 12:51:36.824041] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810670 is same with the state(6) to be set
00:30:10.948 [2024-12-16 12:51:36.824068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:30:10.948 [2024-12-16 12:51:36.824077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.948 [2024-12-16 12:51:36.824085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:30:10.948 [2024-12-16 12:51:36.824092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.948 [2024-12-16 12:51:36.824104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:30:10.948 [2024-12-16 12:51:36.824111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.948 [2024-12-16 12:51:36.824126] nvme_qpair.c: 223:nvme_admin_qpair_print_command:
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:10.948 [2024-12-16 12:51:36.824133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.948 [2024-12-16 12:51:36.824139] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c377f0 is same with the state(6) to be set 00:30:10.948 [2024-12-16 12:51:36.824159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:10.948 [2024-12-16 12:51:36.824167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.948 [2024-12-16 12:51:36.824178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:10.948 [2024-12-16 12:51:36.824184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.948 [2024-12-16 12:51:36.824191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:10.948 [2024-12-16 12:51:36.824200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.948 [2024-12-16 12:51:36.824207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:10.948 [2024-12-16 12:51:36.824213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.948 [2024-12-16 12:51:36.824219] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c366b0 is same with the state(6) to be set 00:30:10.948 [2024-12-16 12:51:36.824242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:10.948 [2024-12-16 12:51:36.824250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.948 [2024-12-16 12:51:36.824257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:10.948 [2024-12-16 12:51:36.824263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.948 [2024-12-16 12:51:36.824270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:10.948 [2024-12-16 12:51:36.824277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.949 [2024-12-16 12:51:36.824284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:10.949 [2024-12-16 12:51:36.824292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.949 [2024-12-16 12:51:36.824298] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1812b40 is same with the state(6) to be set 00:30:10.949 
[2024-12-16 12:51:36.825326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.949 [2024-12-16 12:51:36.825349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.949 [2024-12-16 12:51:36.825364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.949 [2024-12-16 12:51:36.825371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.949 [2024-12-16 12:51:36.825380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.949 [2024-12-16 12:51:36.825387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.949 [2024-12-16 12:51:36.825395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.949 [2024-12-16 12:51:36.825402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.949 [2024-12-16 12:51:36.825411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.949 [2024-12-16 12:51:36.825418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.949 [2024-12-16 12:51:36.825426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.949 [2024-12-16 12:51:36.825433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.949 [2024-12-16 12:51:36.825441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.949 [2024-12-16 12:51:36.825448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.949 [2024-12-16 12:51:36.825455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.949 [2024-12-16 12:51:36.825462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.949 [2024-12-16 12:51:36.825472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.949 [2024-12-16 12:51:36.825479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.949 [2024-12-16 12:51:36.825487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.949 [2024-12-16 12:51:36.825494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.949 [2024-12-16 
12:51:36.825502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.949 [2024-12-16 12:51:36.825509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.949 [2024-12-16 12:51:36.825517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.949 [2024-12-16 12:51:36.825528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.949 [2024-12-16 12:51:36.825536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.949 [2024-12-16 12:51:36.825542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.949 [2024-12-16 12:51:36.825551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.949 [2024-12-16 12:51:36.825557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.949 [2024-12-16 12:51:36.825565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.949 [2024-12-16 12:51:36.825572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.949 [2024-12-16 12:51:36.825580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.949 [2024-12-16 12:51:36.825586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.949 [2024-12-16 12:51:36.825594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.949 [2024-12-16 12:51:36.825601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.949 [2024-12-16 12:51:36.825609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.949 [2024-12-16 12:51:36.825615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.949 [2024-12-16 12:51:36.825623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.949 [2024-12-16 12:51:36.825630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.949 [2024-12-16 12:51:36.825638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.949 [2024-12-16 12:51:36.825644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.949 [2024-12-16 
12:51:36.825653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.949 [2024-12-16 12:51:36.825659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.949 [2024-12-16 12:51:36.825667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.949 [2024-12-16 12:51:36.825673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.949 [2024-12-16 12:51:36.825681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.949 [2024-12-16 12:51:36.825688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.949 [2024-12-16 12:51:36.825695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.949 [2024-12-16 12:51:36.825702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.949 [2024-12-16 12:51:36.825713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.949 [2024-12-16 12:51:36.825719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.949 [2024-12-16 12:51:36.825727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.949 [2024-12-16 12:51:36.825734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.949 [2024-12-16 12:51:36.825742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.949 [2024-12-16 12:51:36.825748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.949 [2024-12-16 12:51:36.825756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.949 [2024-12-16 12:51:36.825763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.949 [2024-12-16 12:51:36.825771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.949 [2024-12-16 12:51:36.825777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.949 [2024-12-16 12:51:36.825785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.949 [2024-12-16 12:51:36.825791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.949 [2024-12-16 
12:51:36.825799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.949 [2024-12-16 12:51:36.825806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.949 [2024-12-16 12:51:36.825814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.949 [2024-12-16 12:51:36.825820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.949 [2024-12-16 12:51:36.825828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.949 [2024-12-16 12:51:36.825834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.949 [2024-12-16 12:51:36.825842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.949 [2024-12-16 12:51:36.825848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.949 [2024-12-16 12:51:36.825856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.949 [2024-12-16 12:51:36.825862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.949 [2024-12-16 12:51:36.825870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.949 [2024-12-16 12:51:36.825877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.949 [2024-12-16 12:51:36.825885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.949 [2024-12-16 12:51:36.825892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.949 [2024-12-16 12:51:36.825900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.949 [2024-12-16 12:51:36.825907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.949 [2024-12-16 12:51:36.825914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.950 [2024-12-16 12:51:36.825921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.950 [2024-12-16 12:51:36.825928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.950 [2024-12-16 12:51:36.825935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.950 [2024-12-16 
12:51:36.825945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.950 [2024-12-16 12:51:36.825951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.950 [2024-12-16 12:51:36.825959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.950 [2024-12-16 12:51:36.825965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.950 [2024-12-16 12:51:36.825973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.950 [2024-12-16 12:51:36.825980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.950 [2024-12-16 12:51:36.825988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.950 [2024-12-16 12:51:36.825994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.950 [2024-12-16 12:51:36.826002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.950 [2024-12-16 12:51:36.826012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.950 [2024-12-16 12:51:36.826020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.950 [2024-12-16 12:51:36.826027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.950 [2024-12-16 12:51:36.826035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.950 [2024-12-16 12:51:36.826042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.950 [2024-12-16 12:51:36.826050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.950 [2024-12-16 12:51:36.826056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.950 [2024-12-16 12:51:36.826064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.950 [2024-12-16 12:51:36.826070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.950 [2024-12-16 12:51:36.826079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.950 [2024-12-16 12:51:36.826086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.950 [2024-12-16 
12:51:36.826094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.950 [2024-12-16 12:51:36.826100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.950 [2024-12-16 12:51:36.826108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.950 [2024-12-16 12:51:36.826119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.950 [2024-12-16 12:51:36.826128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.950 [2024-12-16 12:51:36.826134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.950 [2024-12-16 12:51:36.826142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.950 [2024-12-16 12:51:36.826149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.950 [2024-12-16 12:51:36.826156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.950 [2024-12-16 12:51:36.826163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.950 [2024-12-16 12:51:36.826170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.950 [2024-12-16 12:51:36.826177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.950 [2024-12-16 12:51:36.826186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.950 [2024-12-16 12:51:36.826192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.950 [2024-12-16 12:51:36.826200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.950 [2024-12-16 12:51:36.826207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.950 [2024-12-16 12:51:36.826215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.950 [2024-12-16 12:51:36.826221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.950 [2024-12-16 12:51:36.826229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.950 [2024-12-16 12:51:36.826236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.950 [2024-12-16 
12:51:36.826244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:10.950 [2024-12-16 12:51:36.826252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the WRITE command/completion pair repeats for cid:61-63, lba:32384-32640 in steps of 128 ...]
00:30:10.950 [2024-12-16 12:51:36.826319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.950 [2024-12-16 12:51:36.826374] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c19150 was disconnected and freed. reset controller.
00:30:10.950 [2024-12-16 12:51:36.826408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:10.950 [2024-12-16 12:51:36.826416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the WRITE command/completion pair repeats for cid:1-63, lba:24704-32640 in steps of 128 (timestamps 12:51:36.826427-12:51:36.841218) ...]
00:30:10.952 [2024-12-16 12:51:36.841228] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1a6d0 is same with the state(6) to be set
00:30:10.952 [2024-12-16 12:51:36.841292] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c1a6d0 was disconnected and freed. reset controller.
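The repeated ABORTED - SQ DELETION (00/08) completions above are NVMe generic status: status code type 0x0, status code 0x08 (command aborted due to SQ deletion), and the "CQ transport error -6" line is the negative return of spdk_nvme_qpair_process_completions once the TCP qpair has failed. What follows is a minimal sketch, not the test's own code, of how an SPDK application could observe both events through the public spdk/nvme.h API; the functions ending in _example are hypothetical, and the status constants are the names used in SPDK's headers to the best of my knowledge.

/* Sketch only: decode the (00/08) status and the -6 poller return seen above. */
#include <errno.h>
#include <stdio.h>
#include "spdk/nvme.h"

/* I/O completion callback: (00/08) = generic status type, ABORTED - SQ DELETION. */
static void
io_complete_example(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	(void)cb_arg;
	if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		/* The submission queue was torn down; the command did not
		 * execute and can be resubmitted after the controller reset. */
		printf("command aborted by SQ deletion, retry after reset\n");
	}
}

/* Poller: -ENXIO (-6, "No such device or address") matches the
 * "CQ transport error -6" line and means the qpair is no longer usable. */
static void
poll_qpair_example(struct spdk_nvme_qpair *qpair)
{
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no limit */);

	if (rc == -ENXIO) {
		/* This is the point at which bdev_nvme decides to
		 * disconnect the qpair and reset the controller. */
		printf("qpair failed, requesting controller reset\n");
	}
}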
00:30:10.952 [2024-12-16 12:51:36.841349] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18126e0 (9): Bad file descriptor
[... the same flush error repeats for tqpair=0x1c336a0 and tqpair=0x1c8bcd0 ...]
00:30:10.952 [2024-12-16 12:51:36.841437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:30:10.952 [2024-12-16 12:51:36.841448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the ASYNC EVENT REQUEST command/completion pair repeats for qid:0 cid:1-3 ...]
00:30:10.952 [2024-12-16 12:51:36.841513] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6c4a0 is same with the state(6) to be set
[... Failed to flush tqpair (9): Bad file descriptor repeats for tqpair=0x171e610, 0x1c6c0d0, 0x1810670, 0x1c377f0, 0x1c366b0, 0x1812b40 (timestamps 12:51:36.841528-12:51:36.841631) ...]
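The "(9): Bad file descriptor" in the flush failures above and the "-6" transport error earlier are plain errno values, EBADF and ENXIO: the TCP sockets backing these tqpairs are already closed when the flush is attempted. A self-contained snippet confirming the mapping:

/* Standalone check of the errno codes printed in this log. */
#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	printf("errno %d -> %s\n", EBADF, strerror(EBADF)); /* 9: Bad file descriptor */
	printf("errno %d -> %s\n", ENXIO, strerror(ENXIO)); /* 6: No such device or address */
	return 0;
}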
00:30:10.952 [2024-12-16 12:51:36.841742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:10.952 [2024-12-16 12:51:36.841756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the READ command/completion pair repeats for cid:1-63, lba:24704-32640 in steps of 128 (timestamps 12:51:36.841771-12:51:36.843037) ...]
00:30:10.954 [2024-12-16 12:51:36.843128] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d00cd0 was disconnected and freed. reset controller.
00:30:10.954 [2024-12-16 12:51:36.843298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:10.954 [2024-12-16 12:51:36.843313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the READ command/completion pair repeats for cid:1-56, lba:24704-31744 in steps of 128 (timestamps 12:51:36.843327-12:51:36.844667) ...]
00:30:10.955 [2024-12-16 12:51:36.844681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.955 [2024-12-16
00:30:10.955 [2024-12-16 12:51:36.844962] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c13c90 was disconnected and freed. reset controller.
00:30:10.955 [2024-12-16 12:51:36.852308] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:30:10.956 [2024-12-16 12:51:36.852394] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:30:10.956 [2024-12-16 12:51:36.852419] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c6c4a0 (9): Bad file descriptor
00:30:10.956 [2024-12-16 12:51:36.852474] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:30:10.956 [2024-12-16 12:51:36.853620] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:30:10.956 [2024-12-16 12:51:36.853653] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:30:10.956 [2024-12-16 12:51:36.853992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.956 [2024-12-16 12:51:36.854030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18126e0 with addr=10.0.0.2, port=4420
00:30:10.956 [2024-12-16 12:51:36.854040] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18126e0 is same with the state(6) to be set
00:30:10.956 [2024-12-16 12:51:36.854102] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
[... identical NOTICE pairs elided: READ sqid:1 cid:6-63 nsid:1 lba:25344-32640 len:128, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 ...]
00:30:10.957 [2024-12-16 12:51:36.856620] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
[... identical NOTICE pairs elided: READ sqid:1 cid:4-63 nsid:1 lba:25088-32640 len:128 and WRITE sqid:1 cid:0-3 nsid:1 lba:32768-33152 len:128, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 ...]
[... identical NOTICE pairs elided: READ sqid:1 cid:0-44 nsid:1 lba:24576-30208 len:128, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 ...]
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.960 [2024-12-16 12:51:36.859901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.960 [2024-12-16 12:51:36.859909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.960 [2024-12-16 12:51:36.859919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.960 [2024-12-16 12:51:36.859926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.960 [2024-12-16 12:51:36.859936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.960 [2024-12-16 12:51:36.859944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.960 [2024-12-16 12:51:36.859953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.960 [2024-12-16 12:51:36.859961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.960 [2024-12-16 12:51:36.859970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.960 [2024-12-16 12:51:36.859978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.960 [2024-12-16 12:51:36.859988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.960 [2024-12-16 12:51:36.859996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.960 [2024-12-16 12:51:36.860006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.960 [2024-12-16 12:51:36.860014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.960 [2024-12-16 12:51:36.860023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.960 [2024-12-16 12:51:36.860032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.960 [2024-12-16 12:51:36.860042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.960 [2024-12-16 12:51:36.860050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.960 [2024-12-16 12:51:36.860059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.960 [2024-12-16 12:51:36.860067] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.960 [2024-12-16 12:51:36.860077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.960 [2024-12-16 12:51:36.860084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.960 [2024-12-16 12:51:36.860094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.960 [2024-12-16 12:51:36.860102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.960 [2024-12-16 12:51:36.860111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.960 [2024-12-16 12:51:36.860124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.960 [2024-12-16 12:51:36.860134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.960 [2024-12-16 12:51:36.860142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.960 [2024-12-16 12:51:36.860151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.960 [2024-12-16 12:51:36.860159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.960 [2024-12-16 12:51:36.860169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.960 [2024-12-16 12:51:36.860176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.960 [2024-12-16 12:51:36.860186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.960 [2024-12-16 12:51:36.860194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.960 [2024-12-16 12:51:36.860203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.960 [2024-12-16 12:51:36.860211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.960 [2024-12-16 12:51:36.860221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.960 [2024-12-16 12:51:36.860229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.960 [2024-12-16 12:51:36.860239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.960 [2024-12-16 12:51:36.860247] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.960 [2024-12-16 12:51:36.860258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.960 [2024-12-16 12:51:36.860266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.960 [2024-12-16 12:51:36.860276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.960 [2024-12-16 12:51:36.860284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.960 [2024-12-16 12:51:36.860294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.960 [2024-12-16 12:51:36.860302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.960 [2024-12-16 12:51:36.860312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.960 [2024-12-16 12:51:36.860319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.960 [2024-12-16 12:51:36.860329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.960 [2024-12-16 12:51:36.860338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.960 [2024-12-16 12:51:36.860348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.960 [2024-12-16 12:51:36.860355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.960 [2024-12-16 12:51:36.860365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.960 [2024-12-16 12:51:36.860372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.960 [2024-12-16 12:51:36.860382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.960 [2024-12-16 12:51:36.860389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.960 [2024-12-16 12:51:36.861563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.960 [2024-12-16 12:51:36.861576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.960 [2024-12-16 12:51:36.861589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.960 [2024-12-16 12:51:36.861597] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.960 [2024-12-16 12:51:36.861607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.960 [2024-12-16 12:51:36.861616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.960 [2024-12-16 12:51:36.861626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.960 [2024-12-16 12:51:36.861634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.960 [2024-12-16 12:51:36.861644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.960 [2024-12-16 12:51:36.861652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.960 [2024-12-16 12:51:36.861664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.960 [2024-12-16 12:51:36.861672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.960 [2024-12-16 12:51:36.861682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.960 [2024-12-16 12:51:36.861690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.961 [2024-12-16 12:51:36.861701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.961 [2024-12-16 12:51:36.861709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.961 [2024-12-16 12:51:36.861718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.961 [2024-12-16 12:51:36.861726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.961 [2024-12-16 12:51:36.861736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.961 [2024-12-16 12:51:36.861744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.961 [2024-12-16 12:51:36.861753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.961 [2024-12-16 12:51:36.861761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.961 [2024-12-16 12:51:36.861771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.961 [2024-12-16 12:51:36.861779] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.961 [2024-12-16 12:51:36.861789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.961 [2024-12-16 12:51:36.861796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.961 [2024-12-16 12:51:36.861806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.961 [2024-12-16 12:51:36.861814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.961 [2024-12-16 12:51:36.861824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.961 [2024-12-16 12:51:36.861831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.961 [2024-12-16 12:51:36.861841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.961 [2024-12-16 12:51:36.861848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.961 [2024-12-16 12:51:36.861858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.961 [2024-12-16 12:51:36.861866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.961 [2024-12-16 12:51:36.861876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.961 [2024-12-16 12:51:36.861888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.961 [2024-12-16 12:51:36.861898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.961 [2024-12-16 12:51:36.861905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.961 [2024-12-16 12:51:36.861915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.961 [2024-12-16 12:51:36.861923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.961 [2024-12-16 12:51:36.861933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.961 [2024-12-16 12:51:36.861941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.961 [2024-12-16 12:51:36.861952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.961 [2024-12-16 12:51:36.861960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.961 [2024-12-16 12:51:36.861969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.961 [2024-12-16 12:51:36.861977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.961 [2024-12-16 12:51:36.861987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.961 [2024-12-16 12:51:36.861995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.961 [2024-12-16 12:51:36.862005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.961 [2024-12-16 12:51:36.862013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.961 [2024-12-16 12:51:36.862022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.961 [2024-12-16 12:51:36.862030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.961 [2024-12-16 12:51:36.862040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.961 [2024-12-16 12:51:36.862048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.961 [2024-12-16 12:51:36.862057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.961 [2024-12-16 12:51:36.862065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.961 [2024-12-16 12:51:36.862074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.961 [2024-12-16 12:51:36.862082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.961 [2024-12-16 12:51:36.862092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.961 [2024-12-16 12:51:36.862100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.961 [2024-12-16 12:51:36.862111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.961 [2024-12-16 12:51:36.862123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.961 [2024-12-16 12:51:36.862133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.961 [2024-12-16 12:51:36.862141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.961 [2024-12-16 12:51:36.862151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.961 [2024-12-16 12:51:36.862159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.961 [2024-12-16 12:51:36.862168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.961 [2024-12-16 12:51:36.862176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.961 [2024-12-16 12:51:36.862186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.961 [2024-12-16 12:51:36.862193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.961 [2024-12-16 12:51:36.862204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.961 [2024-12-16 12:51:36.862211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.961 [2024-12-16 12:51:36.862221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.961 [2024-12-16 12:51:36.862229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.961 [2024-12-16 12:51:36.862239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.961 [2024-12-16 12:51:36.862247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.961 [2024-12-16 12:51:36.862257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.961 [2024-12-16 12:51:36.862265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.961 [2024-12-16 12:51:36.862274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.961 [2024-12-16 12:51:36.862282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.961 [2024-12-16 12:51:36.862291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.961 [2024-12-16 12:51:36.862299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.961 [2024-12-16 12:51:36.862309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.961 [2024-12-16 12:51:36.862317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:10.961 [2024-12-16 12:51:36.862326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.961 [2024-12-16 12:51:36.862337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.961 [2024-12-16 12:51:36.862347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.961 [2024-12-16 12:51:36.862354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.961 [2024-12-16 12:51:36.862364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.961 [2024-12-16 12:51:36.862372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.961 [2024-12-16 12:51:36.862382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.961 [2024-12-16 12:51:36.862390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.961 [2024-12-16 12:51:36.862400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.961 [2024-12-16 12:51:36.862407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.961 [2024-12-16 12:51:36.862417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.962 [2024-12-16 12:51:36.862425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.962 [2024-12-16 12:51:36.862435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.962 [2024-12-16 12:51:36.862442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.962 [2024-12-16 12:51:36.862452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.962 [2024-12-16 12:51:36.862459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.962 [2024-12-16 12:51:36.862469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.962 [2024-12-16 12:51:36.862477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.962 [2024-12-16 12:51:36.862486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.962 [2024-12-16 12:51:36.862494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:10.962 [2024-12-16 12:51:36.862504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.962 [2024-12-16 12:51:36.862512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.962 [2024-12-16 12:51:36.862522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.962 [2024-12-16 12:51:36.862530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.962 [2024-12-16 12:51:36.862539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.962 [2024-12-16 12:51:36.862547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.962 [2024-12-16 12:51:36.862558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.962 [2024-12-16 12:51:36.862566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.962 [2024-12-16 12:51:36.862576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.962 [2024-12-16 12:51:36.862583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.962 [2024-12-16 12:51:36.862593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.962 [2024-12-16 12:51:36.862601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.962 [2024-12-16 12:51:36.862611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.962 [2024-12-16 12:51:36.862618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.962 [2024-12-16 12:51:36.862628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.962 [2024-12-16 12:51:36.862636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.962 [2024-12-16 12:51:36.862645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.962 [2024-12-16 12:51:36.862653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.962 [2024-12-16 12:51:36.862663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.962 [2024-12-16 12:51:36.862671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.962 [2024-12-16 
12:51:36.862680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.962 [2024-12-16 12:51:36.862688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.962 [2024-12-16 12:51:36.862698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.962 [2024-12-16 12:51:36.862706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.962 [2024-12-16 12:51:36.863874] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:30:10.962 [2024-12-16 12:51:36.863891] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:30:10.962 [2024-12-16 12:51:36.863901] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:30:10.962 [2024-12-16 12:51:36.864179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.962 [2024-12-16 12:51:36.864194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6c4a0 with addr=10.0.0.2, port=4420 00:30:10.962 [2024-12-16 12:51:36.864202] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6c4a0 is same with the state(6) to be set 00:30:10.962 [2024-12-16 12:51:36.864291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.962 [2024-12-16 12:51:36.864301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c8bcd0 with addr=10.0.0.2, port=4420 00:30:10.962 [2024-12-16 12:51:36.864312] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8bcd0 is same with the state(6) to be set 00:30:10.962 [2024-12-16 12:51:36.864323] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18126e0 (9): Bad file descriptor 00:30:10.962 [2024-12-16 12:51:36.864346] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:10.962 [2024-12-16 12:51:36.864356] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:10.962 [2024-12-16 12:51:36.864367] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:10.962 [2024-12-16 12:51:36.864379] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:10.962 [2024-12-16 12:51:36.864387] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
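[editor's note] The errno = 111 above is ECONNREFUSED on Linux: during the controller resets nothing is accepting connections on 10.0.0.2:4420, so each reconnect attempt made by nvme_tcp_qpair_connect_sock is refused until the target side comes back up. A minimal Python sketch of the same failure mode follows; it is not SPDK code, and the loopback host and port are placeholders standing in for the 10.0.0.2:4420 target in the log:

    import errno
    import socket

    # On Linux, a TCP connect() to an address with no listener fails with
    # ECONNREFUSED (111) -- the same errno posix_sock_create reports above
    # while the target side of the qpair is down during a controller reset.
    def try_connect(addr, port):
        try:
            with socket.create_connection((addr, port), timeout=1.0):
                print("connected to %s:%d" % (addr, port))
        except OSError as e:
            if e.errno == errno.ECONNREFUSED:
                print("connect() failed, errno = %d" % e.errno)
            else:
                raise

    # Placeholder target: expect errno = 111 if nothing is listening there.
    try_connect("127.0.0.1", 4420)

Note also the arithmetic in the aborted bursts: within each burst the lba advances by 128 blocks per command identifier, i.e. lba = 24576 + 128 * cid, one in-flight 128-block command per cid on the submission queue deleted by the reset.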
00:30:10.962 [2024-12-16 12:51:36.864396] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c8bcd0 (9): Bad file descriptor
00:30:10.962 [2024-12-16 12:51:36.864407] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c6c4a0 (9): Bad file descriptor
00:30:10.962 [2024-12-16 12:51:36.864452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:10.962 [2024-12-16 12:51:36.864462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... third burst: READ / ABORTED - SQ DELETION (00/08) pairs repeat for cid:5 through cid:57 (lba 25216 through 31872), followed by four aborted WRITE commands (cid:0 through cid:3, lba 32768 through 33152) and aborted READs for cid:58 through cid:63 (lba 32000 through 32640) ...]
00:30:10.964 [2024-12-16 12:51:36.865400] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cffa10 is same with the state(6) to be set
00:30:10.964 [2024-12-16 12:51:36.866412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:10.964 [2024-12-16 12:51:36.866426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... fourth burst: the same pair repeats for cid:6 through cid:8 (lba 25344 through 25600) ...]
00:30:10.964 [2024-12-16 12:51:36.866482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.964 [2024-12-16 12:51:36.866488] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.964 [2024-12-16 12:51:36.866497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.964 [2024-12-16 12:51:36.866503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.964 [2024-12-16 12:51:36.866512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.964 [2024-12-16 12:51:36.866518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.964 [2024-12-16 12:51:36.866529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.964 [2024-12-16 12:51:36.866536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.964 [2024-12-16 12:51:36.866544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.964 [2024-12-16 12:51:36.866550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.964 [2024-12-16 12:51:36.866558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.964 [2024-12-16 12:51:36.866565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.964 [2024-12-16 12:51:36.866574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.964 [2024-12-16 12:51:36.866581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.964 [2024-12-16 12:51:36.866589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.964 [2024-12-16 12:51:36.866595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.964 [2024-12-16 12:51:36.866604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.964 [2024-12-16 12:51:36.866610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.964 [2024-12-16 12:51:36.866619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.964 [2024-12-16 12:51:36.866625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.964 [2024-12-16 12:51:36.866633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.964 [2024-12-16 12:51:36.866640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.964 [2024-12-16 12:51:36.866648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.964 [2024-12-16 12:51:36.866654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.964 [2024-12-16 12:51:36.866662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.964 [2024-12-16 12:51:36.866668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.964 [2024-12-16 12:51:36.866677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.964 [2024-12-16 12:51:36.866683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.964 [2024-12-16 12:51:36.866691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.964 [2024-12-16 12:51:36.866697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.964 [2024-12-16 12:51:36.866705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.964 [2024-12-16 12:51:36.866713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.964 [2024-12-16 12:51:36.866722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.964 [2024-12-16 12:51:36.866728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.964 [2024-12-16 12:51:36.866736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.964 [2024-12-16 12:51:36.866743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.964 [2024-12-16 12:51:36.866751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.964 [2024-12-16 12:51:36.866757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.964 [2024-12-16 12:51:36.866765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.964 [2024-12-16 12:51:36.866772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.964 [2024-12-16 12:51:36.866779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.964 [2024-12-16 12:51:36.866786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.964 [2024-12-16 12:51:36.866794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.964 [2024-12-16 12:51:36.866800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.964 [2024-12-16 12:51:36.866808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.964 [2024-12-16 12:51:36.866815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.964 [2024-12-16 12:51:36.866823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.964 [2024-12-16 12:51:36.866830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.964 [2024-12-16 12:51:36.866838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.964 [2024-12-16 12:51:36.866844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.965 [2024-12-16 12:51:36.866853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.965 [2024-12-16 12:51:36.866859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.965 [2024-12-16 12:51:36.866867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.965 [2024-12-16 12:51:36.866873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.965 [2024-12-16 12:51:36.866881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.965 [2024-12-16 12:51:36.866887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.965 [2024-12-16 12:51:36.866897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.965 [2024-12-16 12:51:36.866904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.965 [2024-12-16 12:51:36.866912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.965 [2024-12-16 12:51:36.866918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.965 [2024-12-16 12:51:36.866926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.965 [2024-12-16 12:51:36.866933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:10.965 [2024-12-16 12:51:36.866941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.965 [2024-12-16 12:51:36.866947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.965 [2024-12-16 12:51:36.866955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.965 [2024-12-16 12:51:36.866962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.965 [2024-12-16 12:51:36.866970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.965 [2024-12-16 12:51:36.866976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.965 [2024-12-16 12:51:36.866984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.965 [2024-12-16 12:51:36.866991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.965 [2024-12-16 12:51:36.866999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.965 [2024-12-16 12:51:36.867005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.965 [2024-12-16 12:51:36.867013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.965 [2024-12-16 12:51:36.867019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.965 [2024-12-16 12:51:36.867028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.965 [2024-12-16 12:51:36.867034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.965 [2024-12-16 12:51:36.867043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.965 [2024-12-16 12:51:36.867049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.965 [2024-12-16 12:51:36.867057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.965 [2024-12-16 12:51:36.867064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.965 [2024-12-16 12:51:36.867072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.965 [2024-12-16 12:51:36.867080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
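Every completion in the dumps above carries the same status pair, printed by spdk_nvme_print_completion as "ABORTED - SQ DELETION (00/08)": status code type 0x0 (generic) with status code 0x08, which the NVMe base specification defines as Command Aborted due to SQ Deletion. That is expected here, since the shutdown path deletes submission queue 1 while verify I/O is still queued. A small helper along these lines (hypothetical, not part of the test harness) decodes the "(SCT/SC)" pair:

    #!/usr/bin/env bash
    # decode_nvme_status SCT SC - map the "(SCT/SC)" pair from
    # spdk_nvme_print_completion output to a human-readable string.
    # Only a few generic codes are listed; others fall through.
    decode_nvme_status() {
        local sct=$1 sc=$2
        case "${sct}/${sc}" in
            00/00) echo "GENERIC: SUCCESSFUL COMPLETION" ;;
            00/07) echo "GENERIC: ABORTED - BY REQUEST" ;;
            00/08) echo "GENERIC: ABORTED - SQ DELETION" ;;  # the status filling this log
            01/*)  echo "COMMAND SPECIFIC STATUS (sc=${sc})" ;;
            02/*)  echo "MEDIA AND DATA INTEGRITY ERROR (sc=${sc})" ;;
            *)     echo "UNKNOWN (sct=${sct} sc=${sc})" ;;
        esac
    }
    decode_nvme_status 00 08   # -> GENERIC: ABORTED - SQ DELETION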
00:30:10.965 [2024-12-16 12:51:36.867088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.965 [2024-12-16 12:51:36.867094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.965 [2024-12-16 12:51:36.867103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.965 [2024-12-16 12:51:36.867109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.965 [2024-12-16 12:51:36.867127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.965 [2024-12-16 12:51:36.867134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.965 [2024-12-16 12:51:36.867142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.965 [2024-12-16 12:51:36.867148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.965 [2024-12-16 12:51:36.867156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.965 [2024-12-16 12:51:36.867163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.965 [2024-12-16 12:51:36.867171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.965 [2024-12-16 12:51:36.867177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.965 [2024-12-16 12:51:36.867185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.965 [2024-12-16 12:51:36.867191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.965 [2024-12-16 12:51:36.867199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.965 [2024-12-16 12:51:36.867206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.965 [2024-12-16 12:51:36.867214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.965 [2024-12-16 12:51:36.867220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.965 [2024-12-16 12:51:36.867228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.965 [2024-12-16 12:51:36.867234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.965 [2024-12-16 
12:51:36.867242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.965 [2024-12-16 12:51:36.867249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.965 [2024-12-16 12:51:36.867257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.965 [2024-12-16 12:51:36.867263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.965 [2024-12-16 12:51:36.867274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.965 [2024-12-16 12:51:36.867281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.965 [2024-12-16 12:51:36.867289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.965 [2024-12-16 12:51:36.867295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.965 [2024-12-16 12:51:36.867303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.965 [2024-12-16 12:51:36.867310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.965 [2024-12-16 12:51:36.867318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.965 [2024-12-16 12:51:36.867324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.965 [2024-12-16 12:51:36.867332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.965 [2024-12-16 12:51:36.867339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.965 [2024-12-16 12:51:36.867347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.965 [2024-12-16 12:51:36.867353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.965 [2024-12-16 12:51:36.867361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.965 [2024-12-16 12:51:36.867367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.965 [2024-12-16 12:51:36.867375] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a18930 is same with the state(6) to be set 00:30:10.965 [2024-12-16 12:51:36.868367] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:30:10.965 [2024-12-16 12:51:36.868383] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: 
*NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:30:10.965 [2024-12-16 12:51:36.868392] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:10.965 task offset: 24576 on job bdev=Nvme9n1 fails
00:30:10.965
00:30:10.965 Latency(us)
00:30:10.965 [2024-12-16T11:51:37.032Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:10.965 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:10.965 Job: Nvme1n1 ended in about 0.92 seconds with error
00:30:10.965 Verification LBA range: start 0x0 length 0x400
00:30:10.965 Nvme1n1 : 0.92 211.92 13.24 69.20 0.00 225538.18 15042.07 207717.91
00:30:10.965 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:10.965 Job: Nvme2n1 ended in about 0.91 seconds with error
00:30:10.965 Verification LBA range: start 0x0 length 0x400
00:30:10.965 Nvme2n1 : 0.91 211.39 13.21 70.46 0.00 220990.17 24341.94 211712.49
00:30:10.965 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:10.965 Job: Nvme3n1 ended in about 0.91 seconds with error
00:30:10.965 Verification LBA range: start 0x0 length 0x400
00:30:10.966 Nvme3n1 : 0.91 216.39 13.52 63.39 0.00 218668.37 21970.16 220700.28
00:30:10.966 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:10.966 Job: Nvme4n1 ended in about 0.93 seconds with error
00:30:10.966 Verification LBA range: start 0x0 length 0x400
00:30:10.966 Nvme4n1 : 0.93 212.55 13.28 69.05 0.00 213687.89 13606.52 218702.99
00:30:10.966 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:10.966 Job: Nvme5n1 ended in about 0.91 seconds with error
00:30:10.966 Verification LBA range: start 0x0 length 0x400
00:30:10.966 Nvme5n1 : 0.91 210.96 13.18 70.32 0.00 209760.79 16103.13 215707.06
00:30:10.966 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:10.966 Job: Nvme6n1 ended in about 0.92 seconds with error
00:30:10.966 Verification LBA range: start 0x0 length 0x400
00:30:10.966 Nvme6n1 : 0.92 213.59 13.35 69.74 0.00 204594.51 29959.31 196732.83
00:30:10.966 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:10.966 Job: Nvme7n1 ended in about 0.92 seconds with error
00:30:10.966 Verification LBA range: start 0x0 length 0x400
00:30:10.966 Nvme7n1 : 0.92 208.71 13.04 69.57 0.00 204527.79 14605.17 211712.49
00:30:10.966 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:10.966 Job: Nvme8n1 ended in about 0.92 seconds with error
00:30:10.966 Verification LBA range: start 0x0 length 0x400
00:30:10.966 Nvme8n1 : 0.92 208.18 13.01 69.39 0.00 201304.62 13044.78 217704.35
00:30:10.966 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:10.966 Job: Nvme9n1 ended in about 0.90 seconds with error
00:30:10.966 Verification LBA range: start 0x0 length 0x400
00:30:10.966 Nvme9n1 : 0.90 212.18 13.26 70.73 0.00 193070.81 31956.60 222697.57
00:30:10.966 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:10.966 Job: Nvme10n1 ended in about 0.91 seconds with error
00:30:10.966 Verification LBA range: start 0x0 length 0x400
00:30:10.966 Nvme10n1 : 0.91 211.81 13.24 70.60 0.00 189647.24 16976.94 240673.16
00:30:10.966 [2024-12-16T11:51:37.033Z] ===================================================================================================================
00:30:10.966 [2024-12-16T11:51:37.033Z] Total : 2117.69 132.36 692.46 0.00 208211.16 13044.78 240673.16
00:30:10.966 [2024-12-16 12:51:36.904308] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:30:10.966 [2024-12-16 12:51:36.904352] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:30:10.966 [2024-12-16 12:51:36.904673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.966 [2024-12-16 12:51:36.904691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c366b0 with addr=10.0.0.2, port=4420
00:30:10.966 [2024-12-16 12:51:36.904702] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c366b0 is same with the state(6) to be set
00:30:10.966 [2024-12-16 12:51:36.904830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.966 [2024-12-16 12:51:36.904841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1810670 with addr=10.0.0.2, port=4420
00:30:10.966 [2024-12-16 12:51:36.904848] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810670 is same with the state(6) to be set
00:30:10.966 [2024-12-16 12:51:36.905050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.966 [2024-12-16 12:51:36.905060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c336a0 with addr=10.0.0.2, port=4420
00:30:10.966 [2024-12-16 12:51:36.905067] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c336a0 is same with the state(6) to be set
00:30:10.966 [2024-12-16 12:51:36.905078] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:30:10.966 [2024-12-16 12:51:36.905085] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:30:10.966 [2024-12-16 12:51:36.905094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:30:10.966 [2024-12-16 12:51:36.906036] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
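The bdevperf summary above is internally consistent: with 65536-byte IOs, MiB/s should equal IOPS divided by 16, and every row agrees to within rounding (Nvme1n1: 211.92 / 16 = 13.245, printed as 13.24; Total: 2117.69 / 16 = 132.356, printed as 132.36). A throwaway check against a saved copy of this log (the file name is hypothetical) could be:

    # verify MiB/s == IOPS / 16 for each per-device row of the summary table;
    # in each row, $2 is the bdev name, $5 the IOPS column, $6 the MiB/s column
    awk '$2 ~ /^Nvme[0-9]+n1$/ && $3 == ":" {
        printf "%-9s IOPS=%s MiB/s=%s expected=%.3f\n", $2, $5, $6, $5 / 16
    }' bdevperf.log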
00:30:10.966 [2024-12-16 12:51:36.906293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.966 [2024-12-16 12:51:36.906308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x171e610 with addr=10.0.0.2, port=4420 00:30:10.966 [2024-12-16 12:51:36.906317] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171e610 is same with the state(6) to be set 00:30:10.966 [2024-12-16 12:51:36.906554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.966 [2024-12-16 12:51:36.906563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6c0d0 with addr=10.0.0.2, port=4420 00:30:10.966 [2024-12-16 12:51:36.906570] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6c0d0 is same with the state(6) to be set 00:30:10.966 [2024-12-16 12:51:36.906761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.966 [2024-12-16 12:51:36.906771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812b40 with addr=10.0.0.2, port=4420 00:30:10.966 [2024-12-16 12:51:36.906778] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1812b40 is same with the state(6) to be set 00:30:10.966 [2024-12-16 12:51:36.906993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.966 [2024-12-16 12:51:36.907003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c377f0 with addr=10.0.0.2, port=4420 00:30:10.966 [2024-12-16 12:51:36.907010] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c377f0 is same with the state(6) to be set 00:30:10.966 [2024-12-16 12:51:36.907023] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c366b0 (9): Bad file descriptor 00:30:10.966 [2024-12-16 12:51:36.907035] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1810670 (9): Bad file descriptor 00:30:10.966 [2024-12-16 12:51:36.907044] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c336a0 (9): Bad file descriptor 00:30:10.966 [2024-12-16 12:51:36.907052] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:30:10.966 [2024-12-16 12:51:36.907058] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:30:10.966 [2024-12-16 12:51:36.907065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:30:10.966 [2024-12-16 12:51:36.907076] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:30:10.966 [2024-12-16 12:51:36.907082] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:30:10.966 [2024-12-16 12:51:36.907088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:30:10.966 [2024-12-16 12:51:36.907123] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:10.966 [2024-12-16 12:51:36.907134] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
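errno 111 on Linux is ECONNREFUSED: by this point the tc3 shutdown has torn down the target's listener on 10.0.0.2:4420, so every reconnect attempt from the surviving controllers is refused and each reset ends in "Resetting controller failed." A one-off probe (hypothetical, not part of the test scripts) would confirm the listener is gone; bash's /dev/tcp performs the connect:

    # reports "refused" once the nvmf target no longer listens on 4420
    if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "10.0.0.2:4420 still accepting connections"
    else
        echo "connect refused/timed out (errno 111 == ECONNREFUSED)"
    fi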
00:30:10.966 [2024-12-16 12:51:36.907143] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:10.966 [2024-12-16 12:51:36.907154] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:10.966 [2024-12-16 12:51:36.907163] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:10.966 [2024-12-16 12:51:36.907622] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:10.966 [2024-12-16 12:51:36.907631] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:10.966 [2024-12-16 12:51:36.907642] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x171e610 (9): Bad file descriptor 00:30:10.966 [2024-12-16 12:51:36.907651] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c6c0d0 (9): Bad file descriptor 00:30:10.966 [2024-12-16 12:51:36.907663] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1812b40 (9): Bad file descriptor 00:30:10.966 [2024-12-16 12:51:36.907671] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c377f0 (9): Bad file descriptor 00:30:10.966 [2024-12-16 12:51:36.907678] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:30:10.966 [2024-12-16 12:51:36.907684] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:30:10.966 [2024-12-16 12:51:36.907690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:30:10.966 [2024-12-16 12:51:36.907700] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:30:10.966 [2024-12-16 12:51:36.907706] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:30:10.966 [2024-12-16 12:51:36.907712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:30:10.966 [2024-12-16 12:51:36.907721] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:30:10.966 [2024-12-16 12:51:36.907727] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:30:10.966 [2024-12-16 12:51:36.907733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:30:10.966 [2024-12-16 12:51:36.907790] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:30:10.966 [2024-12-16 12:51:36.907801] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:10.966 [2024-12-16 12:51:36.907807] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:10.966 [2024-12-16 12:51:36.907812] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:10.966 [2024-12-16 12:51:36.907824] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:30:10.966 [2024-12-16 12:51:36.907830] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:30:10.966 [2024-12-16 12:51:36.907836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:30:10.966 [2024-12-16 12:51:36.907845] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:30:10.966 [2024-12-16 12:51:36.907851] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:30:10.966 [2024-12-16 12:51:36.907857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:30:10.966 [2024-12-16 12:51:36.907865] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:10.966 [2024-12-16 12:51:36.907870] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:10.966 [2024-12-16 12:51:36.907876] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:10.966 [2024-12-16 12:51:36.907885] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:30:10.966 [2024-12-16 12:51:36.907890] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:30:10.966 [2024-12-16 12:51:36.907896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:30:10.966 [2024-12-16 12:51:36.907928] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:10.967 [2024-12-16 12:51:36.907935] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:10.967 [2024-12-16 12:51:36.907940] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:10.967 [2024-12-16 12:51:36.907946] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:10.967 [2024-12-16 12:51:36.908169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.967 [2024-12-16 12:51:36.908182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18126e0 with addr=10.0.0.2, port=4420 00:30:10.967 [2024-12-16 12:51:36.908190] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18126e0 is same with the state(6) to be set 00:30:10.967 [2024-12-16 12:51:36.908217] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18126e0 (9): Bad file descriptor 00:30:10.967 [2024-12-16 12:51:36.908241] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:30:10.967 [2024-12-16 12:51:36.908248] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:30:10.967 [2024-12-16 12:51:36.908254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:30:10.967 [2024-12-16 12:51:36.908277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
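In the trace that follows, target/shutdown.sh line 143 sends kill -9 to the recorded target pid (476124), the process is already gone, and the script carries on: the "No such process" error is tolerated, apparently via the usual "|| true" guard (the trace shows the failed kill immediately followed by true), so that set -e does not abort the cleanup. The idiom is simply:

    # tolerate an already-dead target process during cleanup;
    # $nvmfpid stands in for the pid the harness recorded earlier
    kill -9 "$nvmfpid" 2>/dev/null || true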
00:30:11.226 12:51:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # nvmfpid= 00:30:11.226 12:51:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # sleep 1 00:30:12.606 12:51:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@143 -- # kill -9 476124 00:30:12.606 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 143: kill: (476124) - No such process 00:30:12.606 12:51:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@143 -- # true 00:30:12.606 12:51:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@145 -- # stoptarget 00:30:12.606 12:51:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:30:12.606 12:51:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:12.606 12:51:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:12.606 12:51:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:30:12.606 12:51:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # nvmfcleanup 00:30:12.606 12:51:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:30:12.606 12:51:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:12.606 12:51:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:30:12.606 12:51:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:12.606 12:51:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:12.606 rmmod nvme_tcp 00:30:12.606 rmmod nvme_fabrics 00:30:12.606 rmmod nvme_keyring 00:30:12.606 12:51:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:12.606 12:51:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:30:12.606 12:51:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:30:12.606 12:51:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:30:12.606 12:51:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:30:12.606 12:51:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:30:12.606 12:51:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:30:12.606 12:51:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:30:12.606 12:51:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@787 -- # iptables-save 00:30:12.606 12:51:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:30:12.606 12:51:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@787 -- # iptables-restore 00:30:12.606 12:51:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:12.606 12:51:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:12.606 12:51:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:12.606 12:51:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:12.606 12:51:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:14.512 00:30:14.512 real 0m7.606s 00:30:14.512 user 0m18.578s 00:30:14.512 sys 0m1.354s 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:14.512 ************************************ 00:30:14.512 END TEST nvmf_shutdown_tc3 00:30:14.512 ************************************ 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@173 -- # [[ e810 == \e\8\1\0 ]] 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@173 -- # [[ tcp == \r\d\m\a ]] 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@174 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:14.512 ************************************ 00:30:14.512 START TEST nvmf_shutdown_tc4 00:30:14.512 ************************************ 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc4 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # starttarget 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@472 -- # prepare_net_devs 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:14.512 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:14.512 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:14.513 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:30:14.513 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:30:14.513 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:14.513 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:14.513 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:30:14.513 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:14.513 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:14.513 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:14.513 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:30:14.513 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:30:14.513 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:14.513 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:14.513 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:30:14.513 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 
00:30:14.513 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:30:14.513 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:30:14.513 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:14.513 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:14.513 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:14.513 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:14.513 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:14.513 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:14.513 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:14.513 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:14.513 Found net devices under 0000:af:00.0: cvl_0_0 00:30:14.513 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:14.513 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:14.513 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:14.513 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:14.513 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:14.513 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:14.513 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:14.513 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:14.513 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:14.513 Found net devices under 0000:af:00.1: cvl_0_1 00:30:14.513 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:14.513 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:30:14.513 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # is_hw=yes 00:30:14.513 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:30:14.513 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:30:14.513 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:30:14.513 12:51:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:14.513 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:14.513 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:14.513 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:14.513 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:14.513 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:14.513 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:14.513 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:14.513 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:14.513 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:14.513 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:14.513 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:14.513 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:14.513 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:14.513 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:14.772 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:14.772 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:14.772 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:14.772 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:14.772 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:14.772 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:14.772 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:14.772 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:14.772 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:14.772 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.373 ms 00:30:14.772 00:30:14.772 --- 10.0.0.2 ping statistics --- 00:30:14.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:14.772 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:30:14.772 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:14.772 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:14.772 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:30:14.772 00:30:14.772 --- 10.0.0.1 ping statistics --- 00:30:14.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:14.772 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:30:14.772 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:14.772 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # return 0 00:30:14.772 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:30:14.772 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:14.772 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:30:14.772 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:30:14.772 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:14.772 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:30:14.772 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:30:14.772 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:30:14.772 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:30:14.772 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:14.772 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:14.772 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@505 -- # nvmfpid=477164 00:30:14.772 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@506 -- # waitforlisten 477164 00:30:14.772 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:14.772 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@831 -- # '[' -z 477164 ']' 00:30:14.772 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:14.772 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:14.772 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:14.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:14.772 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:14.772 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:14.773 [2024-12-16 12:51:40.811502] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:30:14.773 [2024-12-16 12:51:40.811544] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:15.032 [2024-12-16 12:51:40.885248] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:15.032 [2024-12-16 12:51:40.926058] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:15.032 [2024-12-16 12:51:40.926097] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:15.032 [2024-12-16 12:51:40.926108] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:15.032 [2024-12-16 12:51:40.926120] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:15.032 [2024-12-16 12:51:40.926125] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:15.032 [2024-12-16 12:51:40.926236] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:30:15.032 [2024-12-16 12:51:40.926346] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:30:15.032 [2024-12-16 12:51:40.926454] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:30:15.032 [2024-12-16 12:51:40.926455] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:30:15.032 12:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:15.032 12:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # return 0 00:30:15.032 12:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:30:15.032 12:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:15.032 12:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:15.032 12:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:15.032 12:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:15.032 12:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.032 12:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:15.032 [2024-12-16 12:51:41.078477] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:15.032 12:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.032 12:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:30:15.032 12:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:30:15.032 12:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:15.032 12:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:15.032 12:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:15.032 12:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:15.032 12:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:15.292 12:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:15.292 12:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:15.292 12:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:15.292 12:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:15.292 12:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:15.292 12:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:15.292 12:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:15.292 12:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:15.292 12:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:15.292 12:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:15.292 12:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:15.292 12:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:15.292 12:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:15.292 12:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:15.292 12:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:15.292 12:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:15.292 12:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:15.292 12:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:15.292 12:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:30:15.292 12:51:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.292 12:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:15.292 Malloc1 00:30:15.292 [2024-12-16 12:51:41.178047] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:15.292 Malloc2 00:30:15.292 Malloc3 00:30:15.292 Malloc4 00:30:15.292 Malloc5 00:30:15.551 Malloc6 00:30:15.551 Malloc7 00:30:15.551 Malloc8 00:30:15.551 Malloc9 00:30:15.551 Malloc10 00:30:15.551 12:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.551 12:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:30:15.551 12:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:15.551 12:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:15.551 12:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@154 -- # perfpid=477407 00:30:15.551 12:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # sleep 5 00:30:15.551 12:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@153 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:30:15.810 [2024-12-16 12:51:41.678063] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
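target/shutdown.sh@153 is the initiator-side load that tc4 will shortly pull the target out from under. A hedged reconstruction of that invocation with the flags annotated — the -O and -P descriptions are paraphrased from spdk_nvme_perf --help, not from this log:

    # -q 128: 128 outstanding I/Os per queue
    # -o 45056: 44 KiB I/O size
    # -O 4096: I/O unit size (per spdk_nvme_perf --help)
    # -w randwrite: random-write workload
    # -t 20: run for 20 seconds
    # -r ...: connect over TCP to the target at 10.0.0.2:4420
    # -P 4: number of I/O queues (per spdk_nvme_perf --help)
    # Path is relative to an SPDK checkout; run as root.
    ./build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
        -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4

Because the target is killed roughly five seconds in (the sleep 5 at @155), the 20-second run guarantees a full queue of in-flight writes at the moment of shutdown — which is exactly the error storm logged next.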
00:30:21.089 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@157 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:30:21.089 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@160 -- # killprocess 477164
00:30:21.089 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 477164 ']'
00:30:21.089 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 477164
00:30:21.089 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # uname
00:30:21.089 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:30:21.089 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 477164
00:30:21.089 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:30:21.089 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:30:21.089 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 477164'
killing process with pid 477164
00:30:21.089 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@969 -- # kill 477164
00:30:21.089 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@974 -- # wait 477164
00:30:21.090 [2024-12-16 12:51:46.686451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb57ae0 is same with the state(6) to be set
[... the same tqpair=0xb57ae0 message repeated at 12:51:46.686510 through 12:51:46.686562 ...]
00:30:21.090 Write completed with error (sct=0, sc=8)
00:30:21.090 starting I/O failed: -6
[... repeated 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' records omitted ...]
00:30:21.090 [2024-12-16 12:51:46.689986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e5920 is same with the state(6) to be set
00:30:21.090 [2024-12-16 12:51:46.689991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[... the same tqpair=0x8e5920 message repeated at 12:51:46.690012 through 12:51:46.690053, its output interleaved with the qpair id 4 error above in the raw capture ...]
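Two error shapes repeat from here on. 'Write completed with error (sct=0, sc=8)' is the initiator reporting an NVMe completion with Status Code Type 0 (generic) and Status Code 0x08, which the NVMe spec defines as Command Aborted due to SQ Deletion — expected once the target's queue pairs are torn down. 'starting I/O failed: -6' is -ENXIO (No such device or address, as the nvme_qpair.c lines spell out) from submissions attempted after the qpair died. A quick way to tally both, assuming the console output was saved to a hypothetical build.log:

    # Count aborted completions vs. failed submissions in the captured log.
    grep -c 'Write completed with error (sct=0, sc=8)' build.log
    grep -c 'starting I/O failed: -6' build.log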
00:30:21.090 Write completed with error (sct=0, sc=8)
00:30:21.090 starting I/O failed: -6
[... repeated 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' records omitted ...]
00:30:21.090 [2024-12-16 12:51:46.690609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e5df0 is same with the state(6) to be set
[... the same tqpair=0x8e5df0 message repeated at 12:51:46.690635 through 12:51:46.690672, interleaved with further write records ...]
[... repeated 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' records omitted ...]
00:30:21.090 [2024-12-16 12:51:46.690903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:21.090 [2024-12-16 12:51:46.690950] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e62c0 is same with the state(6) to be set
[... the same tqpair=0x8e62c0 message repeated at 12:51:46.690973 through 12:51:46.690999 ...]
[... repeated 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' records omitted ...]
00:30:21.091 [2024-12-16 12:51:46.691276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e5450 is same with the state(6) to be set
[... the same tqpair=0x8e5450 message repeated at 12:51:46.691298 through 12:51:46.691330, interleaved with further write records ...]
[... repeated 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' records omitted ...]
00:30:21.091 [2024-12-16 12:51:46.691885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
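The interleaved tcp.c:1773 messages are the target side of the same teardown: nvmf_tcp_qpair_set_recv_state logs an *ERROR* when asked to move a qpair's PDU-receive state to the state it is already in, and each distinct tqpair=0x... pointer is one target-side TCP qpair being torn down. The exact meaning of state(6) depends on the receive-state enum in SPDK's tcp.c; given the surrounding shutdown, an error/terminal state is the plausible reading, but that mapping is an assumption, not something this log states. To see how many qpairs were involved, a hedged one-liner over the same hypothetical build.log:

    # List each qpair pointer and how many times it was logged.
    grep -o 'tqpair=0x[0-9a-f]*' build.log | sort | uniq -c | sort -rn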
00:30:21.091 Write completed with error (sct=0, sc=8)
00:30:21.091 starting I/O failed: -6
[... repeated 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' records omitted ...]
00:30:21.091 [2024-12-16 12:51:46.692916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3a50 is same with the state(6) to be set
[... the same tqpair=0xac3a50 message repeated at 12:51:46.692935 through 12:51:46.692960, interleaved with further write records ...]
[... repeated 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' records omitted ...]
00:30:21.091 [2024-12-16 12:51:46.693235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3f20 is same with the state(6) to be set
[... the same tqpair=0xac3f20 message repeated at 12:51:46.693248 through 12:51:46.693285 ...]
00:30:21.092 [2024-12-16 12:51:46.693325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:21.092 NVMe io qpair process completion error
00:30:21.092 [2024-12-16 12:51:46.693587] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac43f0 is same with the state(6) to be set
[... the same tqpair=0xac43f0 message repeated at 12:51:46.693599 through 12:51:46.693618 ...]
[... repeated 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' records omitted ...]
00:30:21.092 [2024-12-16 12:51:46.694002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3580 is same with the state(6) to be set
[... the same tqpair=0xac3580 message repeated at 12:51:46.694022 through 12:51:46.694075, interleaved with further write records ...]
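'NVMe io qpair process completion error' above is spdk_nvme_perf giving up on a qpair after spdk_nvme_qpair_process_completions returned the transport error; the bursts then repeat per qpair (ids 1 through 4 on each connection, consistent with -P 4). The shutdown test treats this storm as the expected symptom rather than a failure: tc4's point is that the target dies cleanly while I/O is still in flight. To reproduce just this scenario outside CI, a hedged invocation of the same script — --transport is the standard argument these nvmf test scripts parse, and the environment (hugepages, NIC setup) is assumed to follow the harness defaults:

    # From the SPDK repo root, on a machine with the test NICs configured:
    sudo ./test/nvmf/target/shutdown.sh --transport=tcp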
00:30:21.092 Write completed with error (sct=0, sc=8)
00:30:21.092 starting I/O failed: -6
[... repeated 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' records omitted ...]
00:30:21.092 [2024-12-16 12:51:46.694486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[... repeated 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' records omitted ...]
00:30:21.092 [2024-12-16 12:51:46.695270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... repeated 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' records omitted ...]
00:30:21.093 [2024-12-16 12:51:46.696278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... repeated 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' records omitted ...]
00:30:21.093 [2024-12-16 12:51:46.698203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:21.093 NVMe io qpair process completion error
[... repeated 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' records omitted ...]
00:30:21.093 [2024-12-16 12:51:46.699046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[... repeated 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' records omitted ...]
00:30:21.094 [2024-12-16 12:51:46.699919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:21.094 Write completed with error (sct=0, sc=8)
00:30:21.094 starting I/O failed: -6
00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, 
sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 [2024-12-16 12:51:46.700896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.094 starting I/O failed: -6 
00:30:21.094 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 [2024-12-16 12:51:46.702831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.095 NVMe io qpair process completion error 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 Write completed with error 
(sct=0, sc=8) 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 [2024-12-16 12:51:46.704058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.095 starting I/O failed: -6 00:30:21.095 starting I/O failed: -6 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: 
-6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 [2024-12-16 12:51:46.704990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 
00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 starting I/O failed: -6 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.095 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 [2024-12-16 12:51:46.706048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with 
error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 [2024-12-16 12:51:46.707937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.096 NVMe io qpair process completion error 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 Write completed with 
error (sct=0, sc=8) 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 [2024-12-16 12:51:46.708957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 Write 
completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.096 starting I/O failed: -6 00:30:21.096 Write completed with error (sct=0, sc=8) 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 [2024-12-16 12:51:46.709768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 Write completed 
with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 [2024-12-16 12:51:46.710751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed 
with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with 
error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.097 Write completed with error (sct=0, sc=8) 00:30:21.097 starting I/O failed: -6 00:30:21.098 [2024-12-16 12:51:46.718999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.098 NVMe io qpair process completion error 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, 
sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 [2024-12-16 12:51:46.719973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error 
(sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 [2024-12-16 12:51:46.721035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 00:30:21.098 starting I/O failed: -6 00:30:21.098 Write completed with error (sct=0, sc=8) 
00:30:21.098 starting I/O failed: -6
00:30:21.098 Write completed with error (sct=0, sc=8)
00:30:21.098 starting I/O failed: -6
[... identical "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries, one pair per queued write, condensed here and between each error line below ...]
00:30:21.098 [2024-12-16 12:51:46.722205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:21.099 [2024-12-16 12:51:46.724277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.099 NVMe io qpair process completion error
00:30:21.099 [2024-12-16 12:51:46.725406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:21.100 [2024-12-16 12:51:46.726461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:21.100 [2024-12-16 12:51:46.727653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:21.100 [2024-12-16 12:51:46.729382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.101 NVMe io qpair process completion error
00:30:21.101 [2024-12-16 12:51:46.730596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:21.101 [2024-12-16 12:51:46.731598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.101 [2024-12-16 12:51:46.732623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:21.102 [2024-12-16 12:51:46.742846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:21.102 NVMe io qpair process completion error
00:30:21.102 [2024-12-16 12:51:46.744204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:21.102 [2024-12-16 12:51:46.745238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.103 [2024-12-16 12:51:46.746383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:21.103 [2024-12-16 12:51:46.748464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:21.103 NVMe io qpair process completion error
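The two messages that dominate the dump above come straight from SPDK's NVMe driver once the TCP connection to the target is gone: spdk_nvme_qpair_process_completions() reports CQ transport error -6 (-ENXIO), and every write still queued on that qpair completes with status (sct=0, sc=8), the generic "command aborted due to SQ deletion" code. The C sketch below shows where a caller would observe both; it is illustrative only, assuming an already-connected qpair, and the write_done()/poll_qpair() names are hypothetical, not code from this test.

#include <errno.h>
#include <stdio.h>

#include "spdk/nvme.h"

/* Completion callback of the kind passed to spdk_nvme_ns_cmd_write(). */
void
write_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	if (spdk_nvme_cpl_is_error(cpl)) {
		/* (sct=0, sc=8) decodes to SPDK_NVME_SCT_GENERIC /
		 * SPDK_NVME_SC_ABORTED_SQ_DELETION: the command was aborted
		 * because its submission queue was torn down, which is the
		 * "Write completed with error (sct=0, sc=8)" entry above. */
		fprintf(stderr, "Write completed with error (sct=%d, sc=%d)\n",
			cpl->status.sct, cpl->status.sc);
	}
}

/* Poller body: drain completions and notice a dead transport connection. */
int32_t
poll_qpair(struct spdk_nvme_qpair *qpair)
{
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no cap */);

	if (rc == -ENXIO) {
		/* -6 is -ENXIO ("No such device or address"): the connection
		 * backing this qpair is gone, i.e. the "CQ transport error -6"
		 * lines that nvme_qpair.c logs above. */
		fprintf(stderr, "qpair failed, rc=%d\n", rc);
	}
	return rc;
}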
00:30:21.103 Write completed with error (sct=0, sc=8) 00:30:21.103 Write completed with error (sct=0, sc=8) 00:30:21.103 Write completed with error (sct=0, sc=8) 00:30:21.103 starting I/O failed: -6 00:30:21.103 Write completed with error (sct=0, sc=8) 00:30:21.103 Write completed with error (sct=0, sc=8) 00:30:21.103 Write completed with error (sct=0, sc=8) 00:30:21.103 Write completed with error (sct=0, sc=8) 00:30:21.103 starting I/O failed: -6 00:30:21.103 Write completed with error (sct=0, sc=8) 00:30:21.103 Write completed with error (sct=0, sc=8) 00:30:21.103 Write completed with error (sct=0, sc=8) 00:30:21.103 Write completed with error (sct=0, sc=8) 00:30:21.103 starting I/O failed: -6 00:30:21.103 Write completed with error (sct=0, sc=8) 00:30:21.103 Write completed with error (sct=0, sc=8) 00:30:21.103 Write completed with error (sct=0, sc=8) 00:30:21.103 Write completed with error (sct=0, sc=8) 00:30:21.103 starting I/O failed: -6 00:30:21.103 Write completed with error (sct=0, sc=8) 00:30:21.103 Write completed with error (sct=0, sc=8) 00:30:21.103 Write completed with error (sct=0, sc=8) 00:30:21.103 Write completed with error (sct=0, sc=8) 00:30:21.103 starting I/O failed: -6 00:30:21.103 Write completed with error (sct=0, sc=8) 00:30:21.103 Write completed with error (sct=0, sc=8) 00:30:21.103 Write completed with error (sct=0, sc=8) 00:30:21.103 Write completed with error (sct=0, sc=8) 00:30:21.103 starting I/O failed: -6 00:30:21.103 Write completed with error (sct=0, sc=8) 00:30:21.103 Write completed with error (sct=0, sc=8) 00:30:21.103 Write completed with error (sct=0, sc=8) 00:30:21.103 Write completed with error (sct=0, sc=8) 00:30:21.103 starting I/O failed: -6 00:30:21.103 Write completed with error (sct=0, sc=8) 00:30:21.103 Write completed with error (sct=0, sc=8) 00:30:21.104 Write completed with error (sct=0, sc=8) 00:30:21.104 Write completed with error (sct=0, sc=8) 00:30:21.104 starting I/O failed: -6 00:30:21.104 Write completed with error (sct=0, sc=8) 00:30:21.104 Write completed with error (sct=0, sc=8) 00:30:21.104 Write completed with error (sct=0, sc=8) 00:30:21.104 Write completed with error (sct=0, sc=8) 00:30:21.104 starting I/O failed: -6 00:30:21.104 Write completed with error (sct=0, sc=8) 00:30:21.104 Write completed with error (sct=0, sc=8) 00:30:21.104 Write completed with error (sct=0, sc=8) 00:30:21.104 Write completed with error (sct=0, sc=8) 00:30:21.104 starting I/O failed: -6 00:30:21.104 [2024-12-16 12:51:46.749625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.104 Write completed with error (sct=0, sc=8) 00:30:21.104 starting I/O failed: -6 00:30:21.104 Write completed with error (sct=0, sc=8) 00:30:21.104 Write completed with error (sct=0, sc=8) 00:30:21.104 starting I/O failed: -6 00:30:21.104 Write completed with error (sct=0, sc=8) 00:30:21.104 Write completed with error (sct=0, sc=8) 00:30:21.104 starting I/O failed: -6 00:30:21.104 Write completed with error (sct=0, sc=8) 00:30:21.104 Write completed with error (sct=0, sc=8) 00:30:21.104 starting I/O failed: -6 00:30:21.104 Write completed with error (sct=0, sc=8) 00:30:21.104 Write completed with error (sct=0, sc=8) 00:30:21.104 starting I/O failed: -6 00:30:21.104 Write completed with error (sct=0, sc=8) 00:30:21.104 Write completed with error (sct=0, sc=8) 00:30:21.104 starting I/O failed: -6 00:30:21.104 Write completed with error (sct=0, sc=8) 00:30:21.104 Write completed with 
error (sct=0, sc=8)
00:30:21.104 starting I/O failed: -6
00:30:21.104 ['Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' repeated for the remaining outstanding writes on this qpair]
00:30:21.104 [2024-12-16 12:51:46.750655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:21.104 ['Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' repeated for the remaining outstanding writes on this qpair]
00:30:21.104 [2024-12-16 12:51:46.751725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.105 ['Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' repeated for the remaining outstanding writes on this qpair]
00:30:21.105 [2024-12-16 12:51:46.764823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:21.105 NVMe io qpair process completion error
00:30:21.105 Initializing NVMe Controllers
00:30:21.105 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:30:21.105 Controller IO queue size 128, less than required.
00:30:21.105 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:21.105 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:30:21.105 Controller IO queue size 128, less than required.
00:30:21.105 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:21.105 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:30:21.105 Controller IO queue size 128, less than required.
00:30:21.105 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:21.105 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:30:21.105 Controller IO queue size 128, less than required.
00:30:21.105 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:21.105 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:21.105 Controller IO queue size 128, less than required.
00:30:21.105 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:21.105 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:30:21.105 Controller IO queue size 128, less than required.
00:30:21.105 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:21.105 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:30:21.105 Controller IO queue size 128, less than required.
00:30:21.105 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:21.105 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:30:21.105 Controller IO queue size 128, less than required.
00:30:21.105 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:21.105 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:30:21.105 Controller IO queue size 128, less than required.
00:30:21.105 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:21.105 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:30:21.105 Controller IO queue size 128, less than required.
00:30:21.105 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:21.105 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:30:21.105 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:30:21.105 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:30:21.105 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:30:21.105 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:30:21.105 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:30:21.105 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:30:21.105 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:30:21.105 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:30:21.105 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:30:21.105 Initialization complete. Launching workers.
00:30:21.105 ========================================================
00:30:21.105 Latency(us)
00:30:21.105 Device Information : IOPS MiB/s Average min max
00:30:21.105 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2164.20 92.99 59153.74 1106.90 114101.16
00:30:21.105 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2190.50 94.12 58461.61 1024.26 116877.87
00:30:21.105 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2191.55 94.17 58558.28 640.07 107231.27
00:30:21.105 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2170.72 93.27 59166.40 829.33 131229.09
00:30:21.105 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2205.65 94.77 57431.87 929.13 104185.69
00:30:21.105 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2208.17 94.88 57376.30 543.54 102819.63
00:30:21.105 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2228.58 95.76 56860.28 888.48 101013.00
00:30:21.105 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2201.65 94.60 57575.37 506.93 99799.12
00:30:21.105 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2177.88 93.58 58219.33 805.02 102463.28
00:30:21.105 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2169.25 93.21 58544.12 1052.07 111230.13
00:30:21.105 ========================================================
00:30:21.105 Total : 21908.17 941.37 58128.55 506.93 131229.09
00:30:21.105
00:30:21.105 [2024-12-16 12:51:46.768052] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac8b50 is same with the state(6) to be set
00:30:21.105 [2024-12-16 12:51:46.768108] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac81c0 is same with the state(6) to be set
00:30:21.105 [2024-12-16 12:51:46.768156] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aca350 is same with the state(6) to be set
00:30:21.105 [2024-12-16 12:51:46.768193] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac8820 is same with the state(6) to be set
00:30:21.105 [2024-12-16 12:51:46.768228] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac9c40 is same with the state(6) to be set
00:30:21.105 [2024-12-16 12:51:46.768265] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac7b20 is same with the state(6) to be set
00:30:21.105 [2024-12-16 12:51:46.768299] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aca020 is same with the state(6) to be set
00:30:21.105 [2024-12-16 12:51:46.768333] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aca680 is same with the state(6) to be set
00:30:21.105 [2024-12-16 12:51:46.768368] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac84f0 is same with the state(6) to be set
00:30:21.105 [2024-12-16 12:51:46.768410] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac7d00 is same with the state(6) to be set
00:30:21.105 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
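Every controller above logged "Controller IO queue size 128, less than required", meaning the perf tool asked for a deeper queue than the 128-entry fabrics I/O queue the target offers, so the overflow sat queued inside the NVMe driver. A minimal sketch of retrying the same workload with a shallower queue and small I/O size, using spdk_nvme_perf's standard flags (the -q/-o/-w/-t values here are illustrative, not taken from this job):

    # queue depth 32 and 4 KiB writes against one subsystem from this run,
    # so requests fit inside the controller's 128-entry fabrics queue
    ./build/bin/spdk_nvme_perf -q 32 -o 4096 -w randwrite -t 10 \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'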
00:30:21.105 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@161 -- # nvmfpid=
00:30:21.105 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@164 -- # sleep 1
00:30:22.043 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@165 -- # wait 477407
00:30:22.043 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@165 -- # true
00:30:22.043 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@166 -- # stoptarget
00:30:22.043 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:30:22.043 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:30:22.043 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:30:22.043 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:30:22.043 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # nvmfcleanup
00:30:22.043 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:30:22.043 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:30:22.044 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:30:22.044 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:30:22.044 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:30:22.303 rmmod nvme_tcp
00:30:22.303 rmmod nvme_fabrics
00:30:22.303 rmmod nvme_keyring
00:30:22.303 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:30:22.303 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:30:22.303 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:30:22.303 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@513 -- # '[' -n '' ']'
00:30:22.303 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:30:22.303 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
00:30:22.303 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # nvmf_tcp_fini
00:30:22.303 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:30:22.303 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@787 -- # iptables-save
00:30:22.303 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
00:30:22.303 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@787 -- # iptables-restore
00:30:22.303 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:30:22.303 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:30:22.303 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:22.303 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:30:22.303 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:24.209 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:30:24.209
00:30:24.209 real 0m9.787s
00:30:24.209 user 0m25.204s
00:30:24.209 sys 0m4.924s
00:30:24.209 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:30:24.209 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:30:24.209 ************************************
00:30:24.209 END TEST nvmf_shutdown_tc4 ************************************
00:30:24.468 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@177 -- # trap - SIGINT SIGTERM EXIT
00:30:24.468
00:30:24.468 real 0m40.376s
00:30:24.468 user 1m39.166s
00:30:24.468 sys 0m13.584s
00:30:24.468 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable
00:30:24.468 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:30:24.468 ************************************
00:30:24.468 END TEST nvmf_shutdown ************************************
00:30:24.468 12:51:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:30:24.468
00:30:24.468 real 18m23.582s
00:30:24.468 user 49m10.750s
00:30:24.468 sys 4m27.339s
00:30:24.468 12:51:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable
00:30:24.468 12:51:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:30:24.468 ************************************
00:30:24.468 END TEST nvmf_target_extra ************************************
00:30:24.468 12:51:50 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
00:30:24.468 12:51:50 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:30:24.468 12:51:50 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable
00:30:24.468 12:51:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:30:24.468 ************************************
00:30:24.468 START TEST nvmf_host ************************************
00:30:24.468 12:51:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
00:30:24.468 * Looking for test storage...
00:30:24.468 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:30:24.468 12:51:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:24.468 12:51:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lcov --version 00:30:24.469 12:51:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:24.469 12:51:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:24.469 12:51:50 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:24.469 12:51:50 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:24.469 12:51:50 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:24.469 12:51:50 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:30:24.469 12:51:50 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:30:24.469 12:51:50 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:30:24.469 12:51:50 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:30:24.469 12:51:50 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:30:24.469 12:51:50 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:30:24.469 12:51:50 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:30:24.469 12:51:50 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:24.469 12:51:50 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:30:24.469 12:51:50 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:30:24.469 12:51:50 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:24.469 12:51:50 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:24.469 12:51:50 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:30:24.469 12:51:50 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:30:24.469 12:51:50 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:24.469 12:51:50 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:30:24.469 12:51:50 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:30:24.469 12:51:50 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:30:24.469 12:51:50 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:30:24.469 12:51:50 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:24.469 12:51:50 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:24.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:24.730 --rc genhtml_branch_coverage=1 00:30:24.730 --rc genhtml_function_coverage=1 00:30:24.730 --rc genhtml_legend=1 00:30:24.730 --rc geninfo_all_blocks=1 00:30:24.730 --rc geninfo_unexecuted_blocks=1 00:30:24.730 00:30:24.730 ' 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:24.730 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:24.730 --rc genhtml_branch_coverage=1 00:30:24.730 --rc genhtml_function_coverage=1 00:30:24.730 --rc genhtml_legend=1 00:30:24.730 --rc geninfo_all_blocks=1 00:30:24.730 --rc geninfo_unexecuted_blocks=1 00:30:24.730 00:30:24.730 ' 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:24.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:24.730 --rc genhtml_branch_coverage=1 00:30:24.730 --rc genhtml_function_coverage=1 00:30:24.730 --rc genhtml_legend=1 00:30:24.730 --rc geninfo_all_blocks=1 00:30:24.730 --rc geninfo_unexecuted_blocks=1 00:30:24.730 00:30:24.730 ' 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:24.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:24.730 --rc genhtml_branch_coverage=1 00:30:24.730 --rc genhtml_function_coverage=1 00:30:24.730 --rc genhtml_legend=1 00:30:24.730 --rc geninfo_all_blocks=1 00:30:24.730 --rc geninfo_unexecuted_blocks=1 00:30:24.730 00:30:24.730 ' 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:24.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:24.730 ************************************ 00:30:24.730 START TEST nvmf_multicontroller 00:30:24.730 ************************************ 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:30:24.730 * Looking for test storage... 00:30:24.730 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lcov --version 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:24.730 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:24.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:24.731 --rc genhtml_branch_coverage=1 00:30:24.731 --rc genhtml_function_coverage=1 00:30:24.731 --rc genhtml_legend=1 00:30:24.731 --rc geninfo_all_blocks=1 00:30:24.731 --rc geninfo_unexecuted_blocks=1 00:30:24.731 00:30:24.731 ' 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:24.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:24.731 --rc genhtml_branch_coverage=1 00:30:24.731 --rc genhtml_function_coverage=1 00:30:24.731 --rc genhtml_legend=1 00:30:24.731 --rc geninfo_all_blocks=1 00:30:24.731 --rc geninfo_unexecuted_blocks=1 00:30:24.731 00:30:24.731 ' 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:24.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:24.731 --rc genhtml_branch_coverage=1 00:30:24.731 --rc genhtml_function_coverage=1 00:30:24.731 --rc genhtml_legend=1 00:30:24.731 --rc geninfo_all_blocks=1 00:30:24.731 --rc geninfo_unexecuted_blocks=1 00:30:24.731 00:30:24.731 ' 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:24.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:24.731 --rc genhtml_branch_coverage=1 00:30:24.731 --rc genhtml_function_coverage=1 00:30:24.731 --rc genhtml_legend=1 00:30:24.731 --rc geninfo_all_blocks=1 00:30:24.731 --rc geninfo_unexecuted_blocks=1 00:30:24.731 00:30:24.731 ' 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:30:24.731 12:51:50 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:24.731 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:24.731 12:51:50 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@472 -- # prepare_net_devs 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@434 -- # local -g is_hw=no 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@436 -- # remove_spdk_ns 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:30:24.731 12:51:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:31.305 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:31.305 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:30:31.305 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:31.305 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:31.305 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:31.305 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:31.305 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:31.305 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:30:31.305 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:31.305 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:30:31.305 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:30:31.305 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:30:31.305 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:30:31.305 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:30:31.305 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:30:31.305 
12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:31.305 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:31.305 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:31.305 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:31.306 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:31.306 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:31.306 Found net devices under 0000:af:00.0: cvl_0_0 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:31.306 Found net devices under 0000:af:00.1: cvl_0_1 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # is_hw=yes 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:31.306 12:51:56 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:31.306 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:31.306 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.327 ms 00:30:31.306 00:30:31.306 --- 10.0.0.2 ping statistics --- 00:30:31.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:31.306 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:31.306 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:31.306 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:30:31.306 00:30:31.306 --- 10.0.0.1 ping statistics --- 00:30:31.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:31.306 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # return 0 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@505 -- # nvmfpid=481853 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@506 -- # waitforlisten 481853 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 481853 ']' 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:31.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:31.306 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:31.306 [2024-12-16 12:51:56.760510] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
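The nvmf_tcp_init sequence traced above gives the test a two-port loop on a single host: the second e810 port (cvl_0_1) stays in the root namespace as the initiator interface at 10.0.0.1, while the first port (cvl_0_0) is moved into a private namespace, cvl_0_0_ns_spdk, and addressed as 10.0.0.2 for the target. A minimal sketch of the equivalent commands, using the device names and addresses this run discovered (they will differ per host):

# Sketch of the namespace plumbing performed by nvmf_tcp_init (nvmf/common.sh);
# every command below appears in the trace above.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                       # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port; the SPDK_NVMF comment lets cleanup strip the rule later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                 # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

The two pings are the harness's check that each side can reach the other before nvmf_tgt is started inside the namespace.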
00:30:31.307 [2024-12-16 12:51:56.760554] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:31.307 [2024-12-16 12:51:56.831998] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:31.307 [2024-12-16 12:51:56.871050] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:31.307 [2024-12-16 12:51:56.871092] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:31.307 [2024-12-16 12:51:56.871099] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:31.307 [2024-12-16 12:51:56.871105] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:31.307 [2024-12-16 12:51:56.871110] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:31.307 [2024-12-16 12:51:56.871227] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:30:31.307 [2024-12-16 12:51:56.871338] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:30:31.307 [2024-12-16 12:51:56.871339] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:30:31.307 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:31.307 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:30:31.307 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:30:31.307 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:31.307 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:31.307 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:31.307 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:31.307 12:51:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.307 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:31.307 [2024-12-16 12:51:57.004625] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:31.307 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.307 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:31.307 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.307 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:31.307 Malloc0 00:30:31.307 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.307 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:31.307 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.307 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:30:31.307 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.307 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:31.307 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.307 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:31.307 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.307 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:31.307 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.307 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:31.307 [2024-12-16 12:51:57.073249] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:31.307 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.307 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:31.307 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.307 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:31.307 [2024-12-16 12:51:57.081177] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:31.307 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.307 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:30:31.307 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.307 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:31.307 Malloc1 00:30:31.307 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.307 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:30:31.307 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.307 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:31.307 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.307 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:30:31.307 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.307 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:31.307 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.307 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:31.307 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.307 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:31.307 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.307 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:30:31.307 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.307 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:31.307 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.307 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=482073 00:30:31.307 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:31.307 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:30:31.307 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 482073 /var/tmp/bdevperf.sock 00:30:31.307 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 482073 ']' 00:30:31.307 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:31.307 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:31.307 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:31.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
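At this point the target is fully provisioned over JSON-RPC: one TCP transport, two subsystems (cnode1 and cnode2) each backed by a 64 MB malloc bdev, and listeners on 10.0.0.2 ports 4420 and 4421. A condensed sketch of that sequence, assuming rpc_cmd resolves to scripts/rpc.py against the default /var/tmp/spdk.sock:

# Condensed from host/multicontroller.sh@27-@41; every flag below appears in
# the trace above.
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MB bdev, 512 B blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
# ...and the same four calls again for Malloc1/cnode2, before launching the
# initiator-side I/O generator with its own RPC socket:
bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f

Giving cnode1 two listeners on the same address is what makes the multipath cases below possible: the second port, 4421, is the "other path" to the same subsystem.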
00:30:31.307 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:31.307 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:31.567 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:31.567 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:30:31.567 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:30:31.567 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.567 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:31.567 NVMe0n1 00:30:31.567 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.567 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:31.567 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:30:31.567 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.567 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:31.567 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.567 1 00:30:31.567 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:31.567 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:30:31.567 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:31.567 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:31.567 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:31.567 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:31.567 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:31.567 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:31.567 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.567 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:31.567 request: 00:30:31.567 { 00:30:31.567 "name": "NVMe0", 00:30:31.567 "trtype": "tcp", 00:30:31.567 "traddr": "10.0.0.2", 00:30:31.567 "adrfam": "ipv4", 00:30:31.567 "trsvcid": "4420", 00:30:31.567 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:30:31.567 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:30:31.567 "hostaddr": "10.0.0.1", 00:30:31.567 "prchk_reftag": false, 00:30:31.567 "prchk_guard": false, 00:30:31.567 "hdgst": false, 00:30:31.567 "ddgst": false, 00:30:31.567 "allow_unrecognized_csi": false, 00:30:31.567 "method": "bdev_nvme_attach_controller", 00:30:31.567 "req_id": 1 00:30:31.567 } 00:30:31.567 Got JSON-RPC error response 00:30:31.567 response: 00:30:31.567 { 00:30:31.567 "code": -114, 00:30:31.567 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:31.567 } 00:30:31.567 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:31.567 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:30:31.567 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:31.567 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:31.567 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:31.567 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:31.567 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:30:31.568 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:31.568 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:31.568 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:31.568 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:31.568 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:31.568 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:31.568 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.568 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:31.568 request: 00:30:31.568 { 00:30:31.568 "name": "NVMe0", 00:30:31.568 "trtype": "tcp", 00:30:31.568 "traddr": "10.0.0.2", 00:30:31.568 "adrfam": "ipv4", 00:30:31.568 "trsvcid": "4420", 00:30:31.568 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:31.568 "hostaddr": "10.0.0.1", 00:30:31.568 "prchk_reftag": false, 00:30:31.568 "prchk_guard": false, 00:30:31.568 "hdgst": false, 00:30:31.568 "ddgst": false, 00:30:31.568 "allow_unrecognized_csi": false, 00:30:31.568 "method": "bdev_nvme_attach_controller", 00:30:31.568 "req_id": 1 00:30:31.568 } 00:30:31.568 Got JSON-RPC error response 00:30:31.568 response: 00:30:31.568 { 00:30:31.568 "code": -114, 00:30:31.568 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:31.568 } 00:30:31.568 12:51:57 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:31.568 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:30:31.568 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:31.568 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:31.568 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:31.568 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:31.568 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:30:31.568 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:31.568 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:31.568 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:31.568 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:31.568 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:31.568 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:31.568 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.568 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:31.568 request: 00:30:31.568 { 00:30:31.568 "name": "NVMe0", 00:30:31.568 "trtype": "tcp", 00:30:31.568 "traddr": "10.0.0.2", 00:30:31.568 "adrfam": "ipv4", 00:30:31.568 "trsvcid": "4420", 00:30:31.568 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:31.568 "hostaddr": "10.0.0.1", 00:30:31.568 "prchk_reftag": false, 00:30:31.568 "prchk_guard": false, 00:30:31.568 "hdgst": false, 00:30:31.568 "ddgst": false, 00:30:31.568 "multipath": "disable", 00:30:31.568 "allow_unrecognized_csi": false, 00:30:31.568 "method": "bdev_nvme_attach_controller", 00:30:31.568 "req_id": 1 00:30:31.568 } 00:30:31.568 Got JSON-RPC error response 00:30:31.568 response: 00:30:31.568 { 00:30:31.568 "code": -114, 00:30:31.568 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:30:31.568 } 00:30:31.568 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:31.568 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:30:31.568 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:31.568 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:31.568 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:31.568 12:51:57 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:31.568 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:30:31.568 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:31.568 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:31.568 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:31.568 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:31.568 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:31.568 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:31.568 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.568 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:31.568 request: 00:30:31.568 { 00:30:31.568 "name": "NVMe0", 00:30:31.568 "trtype": "tcp", 00:30:31.568 "traddr": "10.0.0.2", 00:30:31.568 "adrfam": "ipv4", 00:30:31.568 "trsvcid": "4420", 00:30:31.568 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:31.568 "hostaddr": "10.0.0.1", 00:30:31.568 "prchk_reftag": false, 00:30:31.568 "prchk_guard": false, 00:30:31.568 "hdgst": false, 00:30:31.568 "ddgst": false, 00:30:31.568 "multipath": "failover", 00:30:31.568 "allow_unrecognized_csi": false, 00:30:31.568 "method": "bdev_nvme_attach_controller", 00:30:31.568 "req_id": 1 00:30:31.568 } 00:30:31.568 Got JSON-RPC error response 00:30:31.568 response: 00:30:31.568 { 00:30:31.568 "code": -114, 00:30:31.568 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:31.568 } 00:30:31.568 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:31.568 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:30:31.568 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:31.568 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:31.568 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:31.568 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:31.568 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.568 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:31.827 00:30:31.827 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
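The four NOT-wrapped attach attempts above all fail with JSON-RPC error -114: once a controller bdev named NVMe0 exists, bdev_nvme_attach_controller accepts only a request that adds a genuinely new network path to the same identity. Condensed, with the rejection reasons taken from the error strings in the trace:

# NVMe0 was first attached at host/multicontroller.sh@50 to cnode1 at
# 10.0.0.2:4420 with hostaddr 10.0.0.1. Re-attaching under the same name with:
#   -q nqn.2021-09-7.io.spdk:00001 -> "already exists with the specified network path"
#   -n nqn.2016-06.io.spdk:cnode2  -> "already exists with the specified network path"
#   -x disable                     -> "already exists and multipath is disabled"
#   -x failover (same 4420 path)   -> "already exists with the specified network path"
# Only the attach that points the same identity at the second listener passes,
# registering 10.0.0.2:4421 as an additional path:
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1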
00:30:31.827 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:31.827 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.827 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:31.827 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.827 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:30:31.827 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.827 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:32.086 00:30:32.086 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.086 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:32.086 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:30:32.086 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.086 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:32.086 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.086 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:30:32.086 12:51:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:33.024 { 00:30:33.024 "results": [ 00:30:33.024 { 00:30:33.024 "job": "NVMe0n1", 00:30:33.024 "core_mask": "0x1", 00:30:33.024 "workload": "write", 00:30:33.024 "status": "finished", 00:30:33.024 "queue_depth": 128, 00:30:33.024 "io_size": 4096, 00:30:33.024 "runtime": 1.006484, 00:30:33.024 "iops": 23724.172465732194, 00:30:33.024 "mibps": 92.67254869426638, 00:30:33.024 "io_failed": 0, 00:30:33.024 "io_timeout": 0, 00:30:33.024 "avg_latency_us": 5378.8555876499195, 00:30:33.024 "min_latency_us": 3151.9695238095237, 00:30:33.024 "max_latency_us": 8238.81142857143 00:30:33.024 } 00:30:33.024 ], 00:30:33.024 "core_count": 1 00:30:33.024 } 00:30:33.283 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:30:33.283 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.283 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:33.283 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.283 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:30:33.283 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 482073 00:30:33.283 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@950 -- # '[' -z 482073 ']' 00:30:33.283 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 482073 00:30:33.283 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:30:33.283 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:33.283 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 482073 00:30:33.283 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:33.283 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:33.283 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 482073' 00:30:33.283 killing process with pid 482073 00:30:33.283 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 482073 00:30:33.283 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 482073 00:30:33.283 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:33.283 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.283 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:33.543 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.543 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:33.543 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.543 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:33.543 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.543 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:30:33.543 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:33.543 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:30:33.543 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:30:33.543 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:30:33.543 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:30:33.543 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:30:33.543 [2024-12-16 12:51:57.181916] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:30:33.543 [2024-12-16 12:51:57.181972] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid482073 ] 00:30:33.543 [2024-12-16 12:51:57.248450] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:33.543 [2024-12-16 12:51:57.287261] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:33.543 [2024-12-16 12:51:57.950139] bdev.c:4696:bdev_name_add: *ERROR*: Bdev name 55e68dce-118b-410b-89d4-a0bf54b57955 already exists 00:30:33.543 [2024-12-16 12:51:57.950167] bdev.c:7837:bdev_register: *ERROR*: Unable to add uuid:55e68dce-118b-410b-89d4-a0bf54b57955 alias for bdev NVMe1n1 00:30:33.543 [2024-12-16 12:51:57.950175] bdev_nvme.c:4481:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:30:33.544 Running I/O for 1 seconds... 00:30:33.544 23718.00 IOPS, 92.65 MiB/s 00:30:33.544 Latency(us) 00:30:33.544 [2024-12-16T11:51:59.611Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:33.544 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:30:33.544 NVMe0n1 : 1.01 23724.17 92.67 0.00 0.00 5378.86 3151.97 8238.81 00:30:33.544 [2024-12-16T11:51:59.611Z] =================================================================================================================== 00:30:33.544 [2024-12-16T11:51:59.611Z] Total : 23724.17 92.67 0.00 0.00 5378.86 3151.97 8238.81 00:30:33.544 Received shutdown signal, test time was about 1.000000 seconds 00:30:33.544 00:30:33.544 Latency(us) 00:30:33.544 [2024-12-16T11:51:59.611Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:33.544 [2024-12-16T11:51:59.611Z] =================================================================================================================== 00:30:33.544 [2024-12-16T11:51:59.611Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:33.544 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:30:33.544 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:33.544 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:30:33.544 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:30:33.544 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # nvmfcleanup 00:30:33.544 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:30:33.544 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:33.544 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:30:33.544 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:33.544 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:33.544 rmmod nvme_tcp 00:30:33.544 rmmod nvme_fabrics 00:30:33.544 rmmod nvme_keyring 00:30:33.544 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:33.544 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:30:33.544 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:30:33.544 
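With try.txt dumped and removed, nvmftestfini unwinds the test environment: the module unload has already happened above (modprobe -v -r nvme-tcp pulls out nvme_tcp, nvme_fabrics and nvme_keyring), and the lines that follow kill the target and revert the firewall and namespace changes. Sketched from the surrounding trace:

# Teardown, in the order the trace performs it (nvmf/common.sh@512-@520):
modprobe -v -r nvme-tcp                                # rmmod nvme_tcp/nvme_fabrics/nvme_keyring
modprobe -v -r nvme-fabrics
kill 481853                                            # nvmfpid recorded when nvmf_tgt started
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK_NVMF-tagged rules
_remove_spdk_ns                                        # helper in nvmf/common.sh; tears down cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1                               # leave the initiator port unconfigured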
12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@513 -- # '[' -n 481853 ']' 00:30:33.544 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@514 -- # killprocess 481853 00:30:33.544 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 481853 ']' 00:30:33.544 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 481853 00:30:33.544 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:30:33.544 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:33.544 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 481853 00:30:33.544 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:33.544 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:33.544 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 481853' 00:30:33.544 killing process with pid 481853 00:30:33.544 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 481853 00:30:33.544 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 481853 00:30:33.804 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:30:33.804 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:30:33.804 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:30:33.804 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:30:33.804 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@787 -- # iptables-save 00:30:33.804 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:30:33.804 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@787 -- # iptables-restore 00:30:33.804 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:33.804 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:33.804 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:33.804 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:33.804 12:51:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:36.340 12:52:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:36.340 00:30:36.340 real 0m11.209s 00:30:36.340 user 0m12.376s 00:30:36.341 sys 0m5.219s 00:30:36.341 12:52:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:36.341 12:52:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:36.341 ************************************ 00:30:36.341 END TEST nvmf_multicontroller 00:30:36.341 ************************************ 00:30:36.341 12:52:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:30:36.341 12:52:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:36.341 12:52:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:36.341 12:52:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:36.341 ************************************ 00:30:36.341 START TEST nvmf_aer 00:30:36.341 ************************************ 00:30:36.341 12:52:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:30:36.341 * Looking for test storage... 00:30:36.341 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:36.341 12:52:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:36.341 12:52:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lcov --version 00:30:36.341 12:52:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:36.341 12:52:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:36.341 12:52:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:36.341 12:52:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:36.341 12:52:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:36.341 12:52:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:30:36.341 12:52:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:30:36.341 12:52:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:30:36.341 12:52:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:30:36.341 12:52:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:30:36.341 12:52:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:30:36.341 12:52:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:30:36.341 12:52:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:36.341 12:52:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:30:36.341 12:52:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:30:36.341 12:52:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:36.341 12:52:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:36.341 12:52:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:30:36.341 12:52:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:30:36.341 12:52:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:36.341 12:52:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:30:36.341 12:52:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:30:36.341 12:52:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:30:36.341 12:52:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:30:36.341 12:52:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:36.341 12:52:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:30:36.341 12:52:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:30:36.341 12:52:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:36.341 12:52:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:36.341 12:52:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:30:36.341 12:52:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:36.341 12:52:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:36.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.341 --rc genhtml_branch_coverage=1 00:30:36.341 --rc genhtml_function_coverage=1 00:30:36.341 --rc genhtml_legend=1 00:30:36.341 --rc geninfo_all_blocks=1 00:30:36.341 --rc geninfo_unexecuted_blocks=1 00:30:36.341 00:30:36.341 ' 00:30:36.341 12:52:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:36.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.341 --rc genhtml_branch_coverage=1 00:30:36.341 --rc genhtml_function_coverage=1 00:30:36.341 --rc genhtml_legend=1 00:30:36.341 --rc geninfo_all_blocks=1 00:30:36.341 --rc geninfo_unexecuted_blocks=1 00:30:36.341 00:30:36.341 ' 00:30:36.341 12:52:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:36.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.341 --rc genhtml_branch_coverage=1 00:30:36.341 --rc genhtml_function_coverage=1 00:30:36.341 --rc genhtml_legend=1 00:30:36.341 --rc geninfo_all_blocks=1 00:30:36.341 --rc geninfo_unexecuted_blocks=1 00:30:36.341 00:30:36.341 ' 00:30:36.341 12:52:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:36.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.341 --rc genhtml_branch_coverage=1 00:30:36.341 --rc genhtml_function_coverage=1 00:30:36.341 --rc genhtml_legend=1 00:30:36.341 --rc geninfo_all_blocks=1 00:30:36.341 --rc geninfo_unexecuted_blocks=1 00:30:36.341 00:30:36.341 ' 00:30:36.341 12:52:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:36.341 12:52:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:30:36.341 12:52:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:36.341 12:52:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:36.341 12:52:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:30:36.341 12:52:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:36.341 12:52:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:36.341 12:52:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:36.341 12:52:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:36.341 12:52:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:36.341 12:52:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:36.341 12:52:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:36.341 12:52:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:30:36.341 12:52:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:30:36.341 12:52:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:36.341 12:52:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:36.341 12:52:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:36.341 12:52:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:36.341 12:52:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:36.341 12:52:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:30:36.341 12:52:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:36.341 12:52:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:36.341 12:52:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:36.341 12:52:02 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.341 12:52:02 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.341 12:52:02 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.341 12:52:02 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:30:36.341 12:52:02 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.341 12:52:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:30:36.341 12:52:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:36.341 12:52:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:36.341 12:52:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:36.341 12:52:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:36.341 12:52:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:36.341 12:52:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:36.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:36.341 12:52:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:36.342 12:52:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:36.342 12:52:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:36.342 12:52:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:30:36.342 12:52:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:30:36.342 12:52:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:36.342 12:52:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@472 -- # prepare_net_devs 00:30:36.342 12:52:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@434 -- # local -g is_hw=no 00:30:36.342 12:52:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # remove_spdk_ns 00:30:36.342 12:52:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:36.342 12:52:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:36.342 12:52:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:36.342 12:52:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:30:36.342 12:52:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # 
gather_supported_nvmf_pci_devs 00:30:36.342 12:52:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:30:36.342 12:52:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:41.618 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:41.618 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:30:41.618 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:41.618 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:41.618 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:41.618 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:41.618 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:41.618 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:30:41.618 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:41.618 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:30:41.618 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:30:41.618 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:30:41.618 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:30:41.618 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:30:41.618 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:30:41.618 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:41.618 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:41.618 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:41.618 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:41.618 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:41.618 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:41.618 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:41.618 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:41.618 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:41.618 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:41.618 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:41.618 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:30:41.618 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:30:41.618 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:30:41.618 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:30:41.618 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:30:41.618 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:30:41.618 12:52:07 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:41.618 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:41.618 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:41.618 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:30:41.618 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:30:41.618 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:41.618 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:41.619 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:30:41.619 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:41.619 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:41.619 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:41.619 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:30:41.619 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:30:41.619 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:41.619 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:41.619 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:30:41.619 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:30:41.619 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:30:41.619 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:30:41.619 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:41.619 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:41.619 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:41.619 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:41.619 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:41.619 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:41.619 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:41.619 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:41.619 Found net devices under 0000:af:00.0: cvl_0_0 00:30:41.619 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:41.619 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:41.619 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:41.619 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:41.619 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:41.619 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:41.619 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:41.619 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@423 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:41.619 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:41.619 Found net devices under 0000:af:00.1: cvl_0_1 00:30:41.619 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:41.619 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:30:41.619 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # is_hw=yes 00:30:41.619 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:30:41.619 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:30:41.619 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:30:41.619 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:41.619 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:41.619 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:41.619 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:41.619 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:41.619 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:41.619 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:41.619 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:41.619 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:41.619 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:41.619 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:41.619 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:41.619 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:41.619 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:41.619 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:41.879 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:41.879 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:41.879 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:41.879 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:41.879 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:41.879 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:41.879 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:41.879 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:41.879 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:41.879 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.383 ms 00:30:41.879 00:30:41.879 --- 10.0.0.2 ping statistics --- 00:30:41.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:41.879 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:30:41.879 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:41.879 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:41.879 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:30:41.879 00:30:41.879 --- 10.0.0.1 ping statistics --- 00:30:41.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:41.879 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:30:41.879 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:41.879 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # return 0 00:30:41.879 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:30:41.879 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:41.879 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:30:41.879 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:30:41.879 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:41.879 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:30:41.879 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:30:41.879 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:30:41.879 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:30:41.879 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:41.879 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:41.879 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # nvmfpid=485778 00:30:41.879 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # waitforlisten 485778 00:30:41.879 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:41.879 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 485778 ']' 00:30:41.879 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:41.879 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:41.879 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:41.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:41.879 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:41.879 12:52:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:41.879 [2024-12-16 12:52:07.943593] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:30:41.879 [2024-12-16 12:52:07.943643] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:42.138 [2024-12-16 12:52:08.018498] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:42.138 [2024-12-16 12:52:08.059788] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:42.139 [2024-12-16 12:52:08.059827] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:42.139 [2024-12-16 12:52:08.059834] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:42.139 [2024-12-16 12:52:08.059841] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:42.139 [2024-12-16 12:52:08.059846] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:42.139 [2024-12-16 12:52:08.059891] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:30:42.139 [2024-12-16 12:52:08.059998] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:30:42.139 [2024-12-16 12:52:08.060107] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:42.139 [2024-12-16 12:52:08.060108] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:30:42.139 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:42.139 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:30:42.139 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:30:42.139 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:42.139 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:42.139 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:42.139 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:42.139 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.139 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:42.139 [2024-12-16 12:52:08.199837] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:42.398 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.398 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:30:42.398 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.398 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:42.398 Malloc0 00:30:42.398 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.398 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:30:42.398 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.398 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:42.398 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:30:42.398 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:42.398 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.398 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:42.398 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.398 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:42.398 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.398 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:42.398 [2024-12-16 12:52:08.251272] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:42.398 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.398 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:30:42.398 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.398 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:42.398 [ 00:30:42.398 { 00:30:42.398 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:42.398 "subtype": "Discovery", 00:30:42.398 "listen_addresses": [], 00:30:42.398 "allow_any_host": true, 00:30:42.398 "hosts": [] 00:30:42.398 }, 00:30:42.398 { 00:30:42.398 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:42.398 "subtype": "NVMe", 00:30:42.398 "listen_addresses": [ 00:30:42.398 { 00:30:42.398 "trtype": "TCP", 00:30:42.398 "adrfam": "IPv4", 00:30:42.398 "traddr": "10.0.0.2", 00:30:42.398 "trsvcid": "4420" 00:30:42.398 } 00:30:42.398 ], 00:30:42.398 "allow_any_host": true, 00:30:42.398 "hosts": [], 00:30:42.398 "serial_number": "SPDK00000000000001", 00:30:42.398 "model_number": "SPDK bdev Controller", 00:30:42.398 "max_namespaces": 2, 00:30:42.398 "min_cntlid": 1, 00:30:42.398 "max_cntlid": 65519, 00:30:42.398 "namespaces": [ 00:30:42.398 { 00:30:42.398 "nsid": 1, 00:30:42.398 "bdev_name": "Malloc0", 00:30:42.398 "name": "Malloc0", 00:30:42.398 "nguid": "0A21975A7D5D401EB1DC987AD1C0FEF8", 00:30:42.398 "uuid": "0a21975a-7d5d-401e-b1dc-987ad1c0fef8" 00:30:42.398 } 00:30:42.398 ] 00:30:42.398 } 00:30:42.398 ] 00:30:42.398 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.398 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:30:42.398 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:30:42.398 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=485860 00:30:42.398 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:30:42.398 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:30:42.399 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:30:42.399 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:30:42.399 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:30:42.399 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:30:42.399 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:30:42.399 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:42.399 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:30:42.399 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:30:42.399 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:30:42.658 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:42.658 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:42.658 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:30:42.658 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:30:42.658 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.658 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:42.658 Malloc1 00:30:42.658 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.658 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:30:42.658 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.658 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:42.658 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.658 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:30:42.658 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.658 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:42.658 Asynchronous Event Request test 00:30:42.658 Attaching to 10.0.0.2 00:30:42.658 Attached to 10.0.0.2 00:30:42.658 Registering asynchronous event callbacks... 00:30:42.658 Starting namespace attribute notice tests for all controllers... 00:30:42.658 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:30:42.658 aer_cb - Changed Namespace 00:30:42.658 Cleaning up... 
00:30:42.658 [ 00:30:42.658 { 00:30:42.658 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:42.658 "subtype": "Discovery", 00:30:42.658 "listen_addresses": [], 00:30:42.658 "allow_any_host": true, 00:30:42.658 "hosts": [] 00:30:42.658 }, 00:30:42.658 { 00:30:42.658 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:42.658 "subtype": "NVMe", 00:30:42.658 "listen_addresses": [ 00:30:42.658 { 00:30:42.658 "trtype": "TCP", 00:30:42.658 "adrfam": "IPv4", 00:30:42.658 "traddr": "10.0.0.2", 00:30:42.658 "trsvcid": "4420" 00:30:42.658 } 00:30:42.658 ], 00:30:42.658 "allow_any_host": true, 00:30:42.658 "hosts": [], 00:30:42.658 "serial_number": "SPDK00000000000001", 00:30:42.658 "model_number": "SPDK bdev Controller", 00:30:42.658 "max_namespaces": 2, 00:30:42.658 "min_cntlid": 1, 00:30:42.658 "max_cntlid": 65519, 00:30:42.658 "namespaces": [ 00:30:42.658 { 00:30:42.658 "nsid": 1, 00:30:42.658 "bdev_name": "Malloc0", 00:30:42.658 "name": "Malloc0", 00:30:42.658 "nguid": "0A21975A7D5D401EB1DC987AD1C0FEF8", 00:30:42.658 "uuid": "0a21975a-7d5d-401e-b1dc-987ad1c0fef8" 00:30:42.658 }, 00:30:42.658 { 00:30:42.658 "nsid": 2, 00:30:42.658 "bdev_name": "Malloc1", 00:30:42.658 "name": "Malloc1", 00:30:42.658 "nguid": "75DCDD5005BC4128851138F87498017F", 00:30:42.658 "uuid": "75dcdd50-05bc-4128-8511-38f87498017f" 00:30:42.658 } 00:30:42.658 ] 00:30:42.658 } 00:30:42.658 ] 00:30:42.658 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.658 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 485860 00:30:42.658 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:30:42.658 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.658 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:42.658 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.658 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:30:42.658 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.658 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:42.658 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.658 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:42.658 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.658 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:42.658 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.658 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:30:42.658 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:30:42.658 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # nvmfcleanup 00:30:42.658 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:30:42.658 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:42.658 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:30:42.658 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:42.658 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:42.658 rmmod 
nvme_tcp 00:30:42.658 rmmod nvme_fabrics 00:30:42.658 rmmod nvme_keyring 00:30:42.658 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:42.658 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:30:42.658 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:30:42.658 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@513 -- # '[' -n 485778 ']' 00:30:42.658 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # killprocess 485778 00:30:42.658 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 485778 ']' 00:30:42.658 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 485778 00:30:42.658 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:30:42.658 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:42.658 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 485778 00:30:42.918 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:42.918 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:42.918 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 485778' 00:30:42.918 killing process with pid 485778 00:30:42.918 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 485778 00:30:42.918 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 485778 00:30:42.918 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:30:42.918 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:30:42.918 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:30:42.918 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:30:42.918 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@787 -- # iptables-save 00:30:42.918 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@787 -- # iptables-restore 00:30:42.918 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:30:42.918 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:42.918 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:42.918 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:42.918 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:42.918 12:52:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:45.455 12:52:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:45.455 00:30:45.455 real 0m9.162s 00:30:45.455 user 0m5.066s 00:30:45.455 sys 0m4.827s 00:30:45.455 12:52:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:45.455 12:52:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:45.455 ************************************ 00:30:45.455 END TEST nvmf_aer 00:30:45.455 ************************************ 00:30:45.455 12:52:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:30:45.455 12:52:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:45.455 12:52:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:45.455 12:52:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:45.455 ************************************ 00:30:45.455 START TEST nvmf_async_init 00:30:45.455 ************************************ 00:30:45.455 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:30:45.455 * Looking for test storage... 00:30:45.455 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:45.455 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:45.455 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lcov --version 00:30:45.455 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:45.455 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:45.455 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:45.455 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:45.455 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:45.455 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:30:45.455 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:45.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.456 --rc genhtml_branch_coverage=1 00:30:45.456 --rc genhtml_function_coverage=1 00:30:45.456 --rc genhtml_legend=1 00:30:45.456 --rc geninfo_all_blocks=1 00:30:45.456 --rc geninfo_unexecuted_blocks=1 00:30:45.456 00:30:45.456 ' 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:45.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.456 --rc genhtml_branch_coverage=1 00:30:45.456 --rc genhtml_function_coverage=1 00:30:45.456 --rc genhtml_legend=1 00:30:45.456 --rc geninfo_all_blocks=1 00:30:45.456 --rc geninfo_unexecuted_blocks=1 00:30:45.456 00:30:45.456 ' 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:45.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.456 --rc genhtml_branch_coverage=1 00:30:45.456 --rc genhtml_function_coverage=1 00:30:45.456 --rc genhtml_legend=1 00:30:45.456 --rc geninfo_all_blocks=1 00:30:45.456 --rc geninfo_unexecuted_blocks=1 00:30:45.456 00:30:45.456 ' 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:45.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.456 --rc genhtml_branch_coverage=1 00:30:45.456 --rc genhtml_function_coverage=1 00:30:45.456 --rc genhtml_legend=1 00:30:45.456 --rc geninfo_all_blocks=1 00:30:45.456 --rc geninfo_unexecuted_blocks=1 00:30:45.456 00:30:45.456 ' 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:45.456 12:52:11 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:45.456 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:30:45.456 12:52:11 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=61d77d81041545e6816ff412599df6ee 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # prepare_net_devs 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@434 -- # local -g is_hw=no 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # remove_spdk_ns 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:30:45.456 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:30:45.457 12:52:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:50.973 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:50.973 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # 
(( 0 > 0 )) 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:50.973 Found net devices under 0000:af:00.0: cvl_0_0 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:50.973 Found net devices under 0000:af:00.1: cvl_0_1 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # is_hw=yes 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:50.973 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:50.974 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:50.974 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:50.974 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:50.974 12:52:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:50.974 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:50.974 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:50.974 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:50.974 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:50.974 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.331 ms 00:30:50.974 00:30:50.974 --- 10.0.0.2 ping statistics --- 00:30:50.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:50.974 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:30:50.974 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:50.974 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:50.974 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:30:50.974 00:30:50.974 --- 10.0.0.1 ping statistics --- 00:30:50.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:50.974 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:30:51.250 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:51.250 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # return 0 00:30:51.250 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:30:51.251 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:51.251 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:30:51.251 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:30:51.251 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:51.251 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:30:51.251 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:30:51.251 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:30:51.251 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:30:51.251 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:51.251 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:51.251 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # nvmfpid=489392 00:30:51.251 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:30:51.251 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # waitforlisten 489392 00:30:51.251 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 489392 ']' 00:30:51.251 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:51.251 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:51.251 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:51.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:51.251 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:51.251 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:51.251 [2024-12-16 12:52:17.129296] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:30:51.251 [2024-12-16 12:52:17.129349] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:51.251 [2024-12-16 12:52:17.202737] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:51.251 [2024-12-16 12:52:17.242422] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:51.251 [2024-12-16 12:52:17.242462] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:51.251 [2024-12-16 12:52:17.242469] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:51.251 [2024-12-16 12:52:17.242475] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:51.251 [2024-12-16 12:52:17.242480] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:51.251 [2024-12-16 12:52:17.242499] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:51.525 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:51.525 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:30:51.525 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:30:51.525 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:51.525 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:51.525 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:51.525 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:51.525 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.525 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:51.525 [2024-12-16 12:52:17.367146] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:51.525 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.525 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:30:51.525 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.525 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:51.525 null0 00:30:51.525 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.525 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:30:51.525 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.525 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:51.525 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.525 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:30:51.525 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:30:51.525 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:51.525 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.525 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 61d77d81041545e6816ff412599df6ee 00:30:51.525 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.525 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:51.525 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.525 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:51.525 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.525 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:51.525 [2024-12-16 12:52:17.415389] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:51.525 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.525 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:30:51.525 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.525 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:51.801 nvme0n1 00:30:51.801 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.801 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:51.801 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.801 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:51.801 [ 00:30:51.801 { 00:30:51.801 "name": "nvme0n1", 00:30:51.801 "aliases": [ 00:30:51.801 "61d77d81-0415-45e6-816f-f412599df6ee" 00:30:51.801 ], 00:30:51.801 "product_name": "NVMe disk", 00:30:51.801 "block_size": 512, 00:30:51.801 "num_blocks": 2097152, 00:30:51.801 "uuid": "61d77d81-0415-45e6-816f-f412599df6ee", 00:30:51.801 "numa_id": 1, 00:30:51.801 "assigned_rate_limits": { 00:30:51.801 "rw_ios_per_sec": 0, 00:30:51.801 "rw_mbytes_per_sec": 0, 00:30:51.801 "r_mbytes_per_sec": 0, 00:30:51.801 "w_mbytes_per_sec": 0 00:30:51.801 }, 00:30:51.801 "claimed": false, 00:30:51.801 "zoned": false, 00:30:51.801 "supported_io_types": { 00:30:51.801 "read": true, 00:30:51.801 "write": true, 00:30:51.802 "unmap": false, 00:30:51.802 "flush": true, 00:30:51.802 "reset": true, 00:30:51.802 "nvme_admin": true, 00:30:51.802 "nvme_io": true, 00:30:51.802 "nvme_io_md": false, 00:30:51.802 "write_zeroes": true, 00:30:51.802 "zcopy": false, 00:30:51.802 "get_zone_info": false, 00:30:51.802 "zone_management": false, 00:30:51.802 "zone_append": false, 00:30:51.802 "compare": true, 00:30:51.802 "compare_and_write": true, 00:30:51.802 "abort": true, 00:30:51.802 "seek_hole": false, 00:30:51.802 "seek_data": false, 00:30:51.802 "copy": true, 00:30:51.802 "nvme_iov_md": false 00:30:51.802 }, 00:30:51.802 
"memory_domains": [ 00:30:51.802 { 00:30:51.802 "dma_device_id": "system", 00:30:51.802 "dma_device_type": 1 00:30:51.802 } 00:30:51.802 ], 00:30:51.802 "driver_specific": { 00:30:51.802 "nvme": [ 00:30:51.802 { 00:30:51.802 "trid": { 00:30:51.802 "trtype": "TCP", 00:30:51.802 "adrfam": "IPv4", 00:30:51.802 "traddr": "10.0.0.2", 00:30:51.802 "trsvcid": "4420", 00:30:51.802 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:51.802 }, 00:30:51.802 "ctrlr_data": { 00:30:51.802 "cntlid": 1, 00:30:51.802 "vendor_id": "0x8086", 00:30:51.802 "model_number": "SPDK bdev Controller", 00:30:51.802 "serial_number": "00000000000000000000", 00:30:51.802 "firmware_revision": "24.09.1", 00:30:51.802 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:51.802 "oacs": { 00:30:51.802 "security": 0, 00:30:51.802 "format": 0, 00:30:51.802 "firmware": 0, 00:30:51.802 "ns_manage": 0 00:30:51.802 }, 00:30:51.802 "multi_ctrlr": true, 00:30:51.802 "ana_reporting": false 00:30:51.802 }, 00:30:51.802 "vs": { 00:30:51.802 "nvme_version": "1.3" 00:30:51.802 }, 00:30:51.802 "ns_data": { 00:30:51.802 "id": 1, 00:30:51.802 "can_share": true 00:30:51.802 } 00:30:51.802 } 00:30:51.802 ], 00:30:51.802 "mp_policy": "active_passive" 00:30:51.802 } 00:30:51.802 } 00:30:51.802 ] 00:30:51.802 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.802 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:30:51.802 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.802 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:51.802 [2024-12-16 12:52:17.675917] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:51.802 [2024-12-16 12:52:17.675971] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f1c660 (9): Bad file descriptor 00:30:51.802 [2024-12-16 12:52:17.808193] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:30:51.802 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.802 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:51.802 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.802 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:51.802 [ 00:30:51.802 { 00:30:51.802 "name": "nvme0n1", 00:30:51.802 "aliases": [ 00:30:51.802 "61d77d81-0415-45e6-816f-f412599df6ee" 00:30:51.802 ], 00:30:51.802 "product_name": "NVMe disk", 00:30:51.802 "block_size": 512, 00:30:51.802 "num_blocks": 2097152, 00:30:51.802 "uuid": "61d77d81-0415-45e6-816f-f412599df6ee", 00:30:51.802 "numa_id": 1, 00:30:51.802 "assigned_rate_limits": { 00:30:51.802 "rw_ios_per_sec": 0, 00:30:51.802 "rw_mbytes_per_sec": 0, 00:30:51.802 "r_mbytes_per_sec": 0, 00:30:51.802 "w_mbytes_per_sec": 0 00:30:51.802 }, 00:30:51.802 "claimed": false, 00:30:51.802 "zoned": false, 00:30:51.802 "supported_io_types": { 00:30:51.802 "read": true, 00:30:51.802 "write": true, 00:30:51.802 "unmap": false, 00:30:51.802 "flush": true, 00:30:51.802 "reset": true, 00:30:51.802 "nvme_admin": true, 00:30:51.802 "nvme_io": true, 00:30:51.802 "nvme_io_md": false, 00:30:51.802 "write_zeroes": true, 00:30:51.802 "zcopy": false, 00:30:51.802 "get_zone_info": false, 00:30:51.802 "zone_management": false, 00:30:51.802 "zone_append": false, 00:30:51.802 "compare": true, 00:30:51.802 "compare_and_write": true, 00:30:51.802 "abort": true, 00:30:51.802 "seek_hole": false, 00:30:51.802 "seek_data": false, 00:30:51.802 "copy": true, 00:30:51.802 "nvme_iov_md": false 00:30:51.802 }, 00:30:51.802 "memory_domains": [ 00:30:51.802 { 00:30:51.802 "dma_device_id": "system", 00:30:51.802 "dma_device_type": 1 00:30:51.802 } 00:30:51.802 ], 00:30:51.802 "driver_specific": { 00:30:51.802 "nvme": [ 00:30:51.802 { 00:30:51.802 "trid": { 00:30:51.802 "trtype": "TCP", 00:30:51.802 "adrfam": "IPv4", 00:30:51.802 "traddr": "10.0.0.2", 00:30:51.802 "trsvcid": "4420", 00:30:51.802 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:51.802 }, 00:30:51.802 "ctrlr_data": { 00:30:51.802 "cntlid": 2, 00:30:51.802 "vendor_id": "0x8086", 00:30:51.802 "model_number": "SPDK bdev Controller", 00:30:51.802 "serial_number": "00000000000000000000", 00:30:51.802 "firmware_revision": "24.09.1", 00:30:51.802 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:51.802 "oacs": { 00:30:51.802 "security": 0, 00:30:51.802 "format": 0, 00:30:51.802 "firmware": 0, 00:30:51.802 "ns_manage": 0 00:30:51.802 }, 00:30:51.802 "multi_ctrlr": true, 00:30:51.802 "ana_reporting": false 00:30:51.802 }, 00:30:51.802 "vs": { 00:30:51.802 "nvme_version": "1.3" 00:30:51.802 }, 00:30:51.802 "ns_data": { 00:30:51.802 "id": 1, 00:30:51.802 "can_share": true 00:30:51.802 } 00:30:51.802 } 00:30:51.802 ], 00:30:51.802 "mp_policy": "active_passive" 00:30:51.802 } 00:30:51.802 } 00:30:51.802 ] 00:30:51.802 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.802 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:51.802 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.802 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:51.802 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:30:51.802 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:30:51.802 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.G8lRJYvSjn 00:30:51.802 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:30:51.802 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.G8lRJYvSjn 00:30:51.802 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.G8lRJYvSjn 00:30:51.802 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.802 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:52.114 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.114 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:30:52.114 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.114 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:52.114 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.114 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:30:52.114 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.115 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:52.115 [2024-12-16 12:52:17.876537] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:52.115 [2024-12-16 12:52:17.876622] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:52.115 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.115 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:30:52.115 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.115 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:52.115 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.115 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:30:52.115 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.115 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:52.115 [2024-12-16 12:52:17.900612] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:52.115 nvme0n1 00:30:52.115 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.115 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:30:52.115 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.115 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:52.115 [ 00:30:52.115 { 00:30:52.115 "name": "nvme0n1", 00:30:52.115 "aliases": [ 00:30:52.115 "61d77d81-0415-45e6-816f-f412599df6ee" 00:30:52.115 ], 00:30:52.115 "product_name": "NVMe disk", 00:30:52.115 "block_size": 512, 00:30:52.115 "num_blocks": 2097152, 00:30:52.115 "uuid": "61d77d81-0415-45e6-816f-f412599df6ee", 00:30:52.115 "numa_id": 1, 00:30:52.115 "assigned_rate_limits": { 00:30:52.115 "rw_ios_per_sec": 0, 00:30:52.115 "rw_mbytes_per_sec": 0, 00:30:52.115 "r_mbytes_per_sec": 0, 00:30:52.115 "w_mbytes_per_sec": 0 00:30:52.115 }, 00:30:52.115 "claimed": false, 00:30:52.115 "zoned": false, 00:30:52.115 "supported_io_types": { 00:30:52.115 "read": true, 00:30:52.115 "write": true, 00:30:52.115 "unmap": false, 00:30:52.115 "flush": true, 00:30:52.115 "reset": true, 00:30:52.115 "nvme_admin": true, 00:30:52.115 "nvme_io": true, 00:30:52.115 "nvme_io_md": false, 00:30:52.115 "write_zeroes": true, 00:30:52.115 "zcopy": false, 00:30:52.115 "get_zone_info": false, 00:30:52.115 "zone_management": false, 00:30:52.115 "zone_append": false, 00:30:52.115 "compare": true, 00:30:52.115 "compare_and_write": true, 00:30:52.115 "abort": true, 00:30:52.115 "seek_hole": false, 00:30:52.115 "seek_data": false, 00:30:52.115 "copy": true, 00:30:52.115 "nvme_iov_md": false 00:30:52.115 }, 00:30:52.115 "memory_domains": [ 00:30:52.115 { 00:30:52.115 "dma_device_id": "system", 00:30:52.115 "dma_device_type": 1 00:30:52.115 } 00:30:52.115 ], 00:30:52.115 "driver_specific": { 00:30:52.115 "nvme": [ 00:30:52.115 { 00:30:52.115 "trid": { 00:30:52.115 "trtype": "TCP", 00:30:52.115 "adrfam": "IPv4", 00:30:52.115 "traddr": "10.0.0.2", 00:30:52.115 "trsvcid": "4421", 00:30:52.115 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:52.115 }, 00:30:52.115 "ctrlr_data": { 00:30:52.115 "cntlid": 3, 00:30:52.115 "vendor_id": "0x8086", 00:30:52.115 "model_number": "SPDK bdev Controller", 00:30:52.115 "serial_number": "00000000000000000000", 00:30:52.115 "firmware_revision": "24.09.1", 00:30:52.115 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:52.115 "oacs": { 00:30:52.115 "security": 0, 00:30:52.115 "format": 0, 00:30:52.115 "firmware": 0, 00:30:52.115 "ns_manage": 0 00:30:52.115 }, 00:30:52.115 "multi_ctrlr": true, 00:30:52.115 "ana_reporting": false 00:30:52.115 }, 00:30:52.115 "vs": { 00:30:52.115 "nvme_version": "1.3" 00:30:52.115 }, 00:30:52.115 "ns_data": { 00:30:52.115 "id": 1, 00:30:52.115 "can_share": true 00:30:52.115 } 00:30:52.115 } 00:30:52.115 ], 00:30:52.115 "mp_policy": "active_passive" 00:30:52.115 } 00:30:52.115 } 00:30:52.115 ] 00:30:52.115 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.115 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:52.115 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.115 12:52:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:52.115 12:52:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.115 12:52:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.G8lRJYvSjn 00:30:52.115 12:52:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
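The TLS leg just completed reduces to a short RPC sequence: register a PSK interchange key with the keyring, disable allow-any-host on the subsystem, open a --secure-channel listener on a second port, grant the host NQN access with the key, then attach from the initiator side with the same key. A sketch using only the RPC names and flags visible in this log; the key material is elided here, and the tcp.c / bdev_nvme_rpc.c NOTICEs above mark TLS as experimental in this build:

    #!/usr/bin/env bash
    # Sketch of the PSK/TLS path exercised above; real key material elided.
    rpc=scripts/rpc.py
    key_path=$(mktemp)
    echo -n 'NVMeTLSkey-1:01:...' > "$key_path"  # PSK interchange format, as logged
    chmod 0600 "$key_path"

    $rpc keyring_file_add_key key0 "$key_path"
    $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
        nqn.2016-06.io.spdk:host1 --psk key0
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 \
        -s 4421 -n nqn.2016-06.io.spdk:cnode0 \
        -q nqn.2016-06.io.spdk:host1 --psk key0

    # Cleanup mirrors the trap handler in the log: detach, then drop the key.
    $rpc bdev_nvme_detach_controller nvme0
    rm -f "$key_path"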
00:30:52.115 12:52:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:30:52.115 12:52:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # nvmfcleanup 00:30:52.115 12:52:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:30:52.115 12:52:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:52.115 12:52:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:30:52.115 12:52:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:52.115 12:52:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:52.115 rmmod nvme_tcp 00:30:52.115 rmmod nvme_fabrics 00:30:52.115 rmmod nvme_keyring 00:30:52.115 12:52:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:52.115 12:52:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:30:52.115 12:52:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:30:52.115 12:52:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@513 -- # '[' -n 489392 ']' 00:30:52.115 12:52:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # killprocess 489392 00:30:52.115 12:52:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 489392 ']' 00:30:52.115 12:52:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 489392 00:30:52.115 12:52:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:30:52.115 12:52:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:52.115 12:52:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 489392 00:30:52.115 12:52:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:52.115 12:52:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:52.115 12:52:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 489392' 00:30:52.115 killing process with pid 489392 00:30:52.115 12:52:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 489392 00:30:52.115 12:52:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 489392 00:30:52.420 12:52:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:30:52.420 12:52:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:30:52.420 12:52:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:30:52.420 12:52:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:30:52.420 12:52:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@787 -- # iptables-save 00:30:52.420 12:52:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:30:52.420 12:52:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@787 -- # iptables-restore 00:30:52.420 12:52:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:52.420 12:52:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:52.420 12:52:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:52.420 
12:52:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:52.420 12:52:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:54.328 12:52:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:54.328 00:30:54.328 real 0m9.355s 00:30:54.328 user 0m3.067s 00:30:54.328 sys 0m4.688s 00:30:54.328 12:52:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:54.328 12:52:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:54.328 ************************************ 00:30:54.328 END TEST nvmf_async_init 00:30:54.328 ************************************ 00:30:54.588 12:52:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:30:54.588 12:52:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:54.588 12:52:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:54.588 12:52:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.588 ************************************ 00:30:54.588 START TEST dma 00:30:54.588 ************************************ 00:30:54.588 12:52:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:30:54.588 * Looking for test storage... 00:30:54.588 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:54.588 12:52:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:54.588 12:52:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lcov --version 00:30:54.588 12:52:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:54.588 12:52:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:54.588 12:52:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:54.588 12:52:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:54.588 12:52:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:54.588 12:52:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:30:54.588 12:52:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:30:54.588 12:52:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:30:54.588 12:52:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:30:54.588 12:52:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:30:54.588 12:52:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:30:54.588 12:52:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:30:54.588 12:52:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:54.588 12:52:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:30:54.588 12:52:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:30:54.588 12:52:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:54.588 12:52:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:54.588 12:52:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:30:54.588 12:52:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:30:54.588 12:52:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:54.588 12:52:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:30:54.588 12:52:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:30:54.588 12:52:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:30:54.588 12:52:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:30:54.588 12:52:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:54.588 12:52:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:30:54.588 12:52:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:30:54.588 12:52:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:54.588 12:52:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:54.588 12:52:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:30:54.588 12:52:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:54.588 12:52:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:54.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.588 --rc genhtml_branch_coverage=1 00:30:54.588 --rc genhtml_function_coverage=1 00:30:54.588 --rc genhtml_legend=1 00:30:54.588 --rc geninfo_all_blocks=1 00:30:54.588 --rc geninfo_unexecuted_blocks=1 00:30:54.588 00:30:54.588 ' 00:30:54.588 12:52:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:54.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.588 --rc genhtml_branch_coverage=1 00:30:54.588 --rc genhtml_function_coverage=1 00:30:54.588 --rc genhtml_legend=1 00:30:54.588 --rc geninfo_all_blocks=1 00:30:54.588 --rc geninfo_unexecuted_blocks=1 00:30:54.588 00:30:54.588 ' 00:30:54.588 12:52:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:54.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.589 --rc genhtml_branch_coverage=1 00:30:54.589 --rc genhtml_function_coverage=1 00:30:54.589 --rc genhtml_legend=1 00:30:54.589 --rc geninfo_all_blocks=1 00:30:54.589 --rc geninfo_unexecuted_blocks=1 00:30:54.589 00:30:54.589 ' 00:30:54.589 12:52:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:54.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.589 --rc genhtml_branch_coverage=1 00:30:54.589 --rc genhtml_function_coverage=1 00:30:54.589 --rc genhtml_legend=1 00:30:54.589 --rc geninfo_all_blocks=1 00:30:54.589 --rc geninfo_unexecuted_blocks=1 00:30:54.589 00:30:54.589 ' 00:30:54.589 12:52:20 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:54.589 12:52:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:30:54.589 12:52:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:54.589 12:52:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:54.589 12:52:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:54.589 12:52:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:54.589 
12:52:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:54.589 12:52:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:54.589 12:52:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:54.589 12:52:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:54.589 12:52:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:54.589 12:52:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:54.589 12:52:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:30:54.589 12:52:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:30:54.589 12:52:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:54.589 12:52:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:54.589 12:52:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:54.589 12:52:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:54.589 12:52:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:54.589 12:52:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:30:54.589 12:52:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:54.589 12:52:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:54.589 12:52:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:54.589 12:52:20 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.589 12:52:20 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.589 12:52:20 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.589 12:52:20 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:30:54.589 12:52:20 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.589 12:52:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:30:54.589 12:52:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:54.589 12:52:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:54.589 12:52:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:54.589 12:52:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:54.589 12:52:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:54.589 12:52:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:54.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:54.589 12:52:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:54.589 12:52:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:54.589 12:52:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:54.589 12:52:20 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:30:54.589 12:52:20 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:30:54.589 00:30:54.589 real 0m0.208s 00:30:54.589 user 0m0.122s 00:30:54.589 sys 0m0.100s 00:30:54.589 12:52:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:54.589 12:52:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:54.589 ************************************ 00:30:54.589 END TEST dma 00:30:54.589 ************************************ 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.849 ************************************ 00:30:54.849 START TEST nvmf_identify 00:30:54.849 
************************************ 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:54.849 * Looking for test storage... 00:30:54.849 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lcov --version 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:54.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.849 --rc genhtml_branch_coverage=1 00:30:54.849 --rc genhtml_function_coverage=1 00:30:54.849 --rc genhtml_legend=1 00:30:54.849 --rc geninfo_all_blocks=1 00:30:54.849 --rc geninfo_unexecuted_blocks=1 00:30:54.849 00:30:54.849 ' 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:54.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.849 --rc genhtml_branch_coverage=1 00:30:54.849 --rc genhtml_function_coverage=1 00:30:54.849 --rc genhtml_legend=1 00:30:54.849 --rc geninfo_all_blocks=1 00:30:54.849 --rc geninfo_unexecuted_blocks=1 00:30:54.849 00:30:54.849 ' 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:54.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.849 --rc genhtml_branch_coverage=1 00:30:54.849 --rc genhtml_function_coverage=1 00:30:54.849 --rc genhtml_legend=1 00:30:54.849 --rc geninfo_all_blocks=1 00:30:54.849 --rc geninfo_unexecuted_blocks=1 00:30:54.849 00:30:54.849 ' 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:54.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.849 --rc genhtml_branch_coverage=1 00:30:54.849 --rc genhtml_function_coverage=1 00:30:54.849 --rc genhtml_legend=1 00:30:54.849 --rc geninfo_all_blocks=1 00:30:54.849 --rc geninfo_unexecuted_blocks=1 00:30:54.849 00:30:54.849 ' 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:54.849 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.850 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.850 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.850 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:30:54.850 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.850 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:30:54.850 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:54.850 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:54.850 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:54.850 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:54.850 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:54.850 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:54.850 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:54.850 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:54.850 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:54.850 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:54.850 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:54.850 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:54.850 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:30:54.850 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@465 -- # '[' -z tcp ']' 00:30:54.850 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:54.850 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # prepare_net_devs 00:30:54.850 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@434 -- # local -g is_hw=no 00:30:54.850 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # remove_spdk_ns 00:30:54.850 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:54.850 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:54.850 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:54.850 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:30:54.850 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:30:54.850 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:30:54.850 12:52:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:01.423 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:01.423 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:01.423 
12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ up == up ]] 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:01.423 Found net devices under 0000:af:00.0: cvl_0_0 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ up == up ]] 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:01.423 Found net devices under 0000:af:00.1: cvl_0_1 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # is_hw=yes 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:01.423 12:52:26 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:01.423 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:01.423 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.381 ms 00:31:01.423 00:31:01.423 --- 10.0.0.2 ping statistics --- 00:31:01.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:01.423 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:31:01.423 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:01.423 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:01.423 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:31:01.423 00:31:01.423 --- 10.0.0.1 ping statistics --- 00:31:01.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:01.424 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:31:01.424 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:01.424 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # return 0 00:31:01.424 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:31:01.424 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:01.424 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:31:01.424 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:31:01.424 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:01.424 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:31:01.424 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:31:01.424 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:31:01.424 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:01.424 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:01.424 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=493054 00:31:01.424 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:01.424 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:01.424 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 493054 00:31:01.424 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 493054 ']' 00:31:01.424 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:01.424 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:01.424 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:01.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:01.424 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:01.424 12:52:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:01.424 [2024-12-16 12:52:26.850124] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
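The nvmf_tcp_init steps traced above give the target a network path that is genuinely separate from the initiator's: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, TCP port 4420 is opened in iptables, and reachability is proven in both directions with a single ping before nvmf_tgt is launched inside the namespace via ip netns exec. On a machine without the dual-port NIC the same topology can be rebuilt with a veth pair; a sketch under that assumption, reusing the names from this run:

  ip netns add cvl_0_0_ns_spdk
  ip link add cvl_0_1 type veth peer name cvl_0_0    # stand-in for the two physical ports
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  ping -c 1 10.0.0.2                                 # initiator -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target namespace -> initiator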
00:31:01.424 [2024-12-16 12:52:26.850166] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:01.424 [2024-12-16 12:52:26.923182] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:01.424 [2024-12-16 12:52:26.964934] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:01.424 [2024-12-16 12:52:26.964973] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:01.424 [2024-12-16 12:52:26.964980] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:01.424 [2024-12-16 12:52:26.964987] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:01.424 [2024-12-16 12:52:26.964991] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:01.424 [2024-12-16 12:52:26.965069] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:31:01.424 [2024-12-16 12:52:26.965175] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:31:01.424 [2024-12-16 12:52:26.965282] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:01.424 [2024-12-16 12:52:26.965283] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:31:01.424 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:01.424 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:31:01.424 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:01.424 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.424 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:01.424 [2024-12-16 12:52:27.069602] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:01.424 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.424 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:31:01.424 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:01.424 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:01.424 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:01.424 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.424 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:01.424 Malloc0 00:31:01.424 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.424 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:01.424 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.424 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:01.424 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.424 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- 
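identify.sh then drives the freshly started target over JSON-RPC: create the TCP transport, back a namespace with a 64 MB malloc bdev, create subsystem cnode1, and (continuing in the trace just below) attach the namespace and add listeners for both cnode1 and the discovery subsystem. rpc_cmd is a thin wrapper around SPDK's scripts/rpc.py, so the equivalent direct invocation, assuming the default /var/tmp/spdk.sock RPC socket, would be:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0    # 64 MB bdev, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420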
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:31:01.424 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.424 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:01.424 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.424 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:01.424 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.424 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:01.424 [2024-12-16 12:52:27.153144] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:01.424 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.424 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:01.424 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.424 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:01.424 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.424 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:31:01.424 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.424 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:01.424 [ 00:31:01.424 { 00:31:01.424 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:01.424 "subtype": "Discovery", 00:31:01.424 "listen_addresses": [ 00:31:01.424 { 00:31:01.424 "trtype": "TCP", 00:31:01.424 "adrfam": "IPv4", 00:31:01.424 "traddr": "10.0.0.2", 00:31:01.424 "trsvcid": "4420" 00:31:01.424 } 00:31:01.424 ], 00:31:01.424 "allow_any_host": true, 00:31:01.424 "hosts": [] 00:31:01.424 }, 00:31:01.424 { 00:31:01.424 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:01.424 "subtype": "NVMe", 00:31:01.424 "listen_addresses": [ 00:31:01.424 { 00:31:01.424 "trtype": "TCP", 00:31:01.424 "adrfam": "IPv4", 00:31:01.424 "traddr": "10.0.0.2", 00:31:01.424 "trsvcid": "4420" 00:31:01.424 } 00:31:01.424 ], 00:31:01.424 "allow_any_host": true, 00:31:01.424 "hosts": [], 00:31:01.424 "serial_number": "SPDK00000000000001", 00:31:01.424 "model_number": "SPDK bdev Controller", 00:31:01.424 "max_namespaces": 32, 00:31:01.424 "min_cntlid": 1, 00:31:01.424 "max_cntlid": 65519, 00:31:01.424 "namespaces": [ 00:31:01.424 { 00:31:01.424 "nsid": 1, 00:31:01.424 "bdev_name": "Malloc0", 00:31:01.424 "name": "Malloc0", 00:31:01.424 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:31:01.424 "eui64": "ABCDEF0123456789", 00:31:01.424 "uuid": "63b2c0ad-e184-4be7-84dd-57d6e899e1f3" 00:31:01.424 } 00:31:01.424 ] 00:31:01.424 } 00:31:01.424 ] 00:31:01.424 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.424 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:31:01.424 [2024-12-16 12:52:27.200259] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:31:01.424 [2024-12-16 12:52:27.200293] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid493163 ] 00:31:01.424 [2024-12-16 12:52:27.226798] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:31:01.424 [2024-12-16 12:52:27.226846] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:31:01.424 [2024-12-16 12:52:27.226850] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:31:01.424 [2024-12-16 12:52:27.226861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:31:01.424 [2024-12-16 12:52:27.226870] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:31:01.424 [2024-12-16 12:52:27.230344] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:31:01.424 [2024-12-16 12:52:27.230381] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x103a0d0 0 00:31:01.424 [2024-12-16 12:52:27.238125] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:31:01.424 [2024-12-16 12:52:27.238140] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:31:01.424 [2024-12-16 12:52:27.238144] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:31:01.424 [2024-12-16 12:52:27.238147] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:31:01.424 [2024-12-16 12:52:27.238177] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.424 [2024-12-16 12:52:27.238182] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.424 [2024-12-16 12:52:27.238186] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x103a0d0) 00:31:01.424 [2024-12-16 12:52:27.238198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:31:01.424 [2024-12-16 12:52:27.238216] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a4540, cid 0, qid 0 00:31:01.424 [2024-12-16 12:52:27.246127] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.424 [2024-12-16 12:52:27.246135] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.425 [2024-12-16 12:52:27.246138] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.425 [2024-12-16 12:52:27.246143] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a4540) on tqpair=0x103a0d0 00:31:01.425 [2024-12-16 12:52:27.246155] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:31:01.425 [2024-12-16 12:52:27.246161] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:31:01.425 [2024-12-16 12:52:27.246166] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:31:01.425 [2024-12-16 12:52:27.246179] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.425 [2024-12-16 12:52:27.246183] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.425 [2024-12-16 12:52:27.246186] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x103a0d0) 00:31:01.425 [2024-12-16 12:52:27.246195] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.425 [2024-12-16 12:52:27.246208] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a4540, cid 0, qid 0 00:31:01.425 [2024-12-16 12:52:27.246366] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.425 [2024-12-16 12:52:27.246372] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.425 [2024-12-16 12:52:27.246375] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.425 [2024-12-16 12:52:27.246378] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a4540) on tqpair=0x103a0d0 00:31:01.425 [2024-12-16 12:52:27.246383] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:31:01.425 [2024-12-16 12:52:27.246389] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:31:01.425 [2024-12-16 12:52:27.246395] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.425 [2024-12-16 12:52:27.246398] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.425 [2024-12-16 12:52:27.246402] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x103a0d0) 00:31:01.425 [2024-12-16 12:52:27.246407] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.425 [2024-12-16 12:52:27.246417] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a4540, cid 0, qid 0 00:31:01.425 [2024-12-16 12:52:27.246513] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.425 [2024-12-16 12:52:27.246519] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.425 [2024-12-16 12:52:27.246522] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.425 [2024-12-16 12:52:27.246525] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a4540) on tqpair=0x103a0d0 00:31:01.425 [2024-12-16 12:52:27.246530] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:31:01.425 [2024-12-16 12:52:27.246537] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:31:01.425 [2024-12-16 12:52:27.246543] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.425 [2024-12-16 12:52:27.246546] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.425 [2024-12-16 12:52:27.246549] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x103a0d0) 00:31:01.425 [2024-12-16 12:52:27.246555] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.425 [2024-12-16 12:52:27.246564] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a4540, cid 0, qid 0 00:31:01.425 
[2024-12-16 12:52:27.246663] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.425 [2024-12-16 12:52:27.246669] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.425 [2024-12-16 12:52:27.246672] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.425 [2024-12-16 12:52:27.246675] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a4540) on tqpair=0x103a0d0 00:31:01.425 [2024-12-16 12:52:27.246680] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:31:01.425 [2024-12-16 12:52:27.246687] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.425 [2024-12-16 12:52:27.246691] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.425 [2024-12-16 12:52:27.246694] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x103a0d0) 00:31:01.425 [2024-12-16 12:52:27.246700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.425 [2024-12-16 12:52:27.246709] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a4540, cid 0, qid 0 00:31:01.425 [2024-12-16 12:52:27.246771] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.425 [2024-12-16 12:52:27.246777] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.425 [2024-12-16 12:52:27.246780] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.425 [2024-12-16 12:52:27.246783] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a4540) on tqpair=0x103a0d0 00:31:01.425 [2024-12-16 12:52:27.246787] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:31:01.425 [2024-12-16 12:52:27.246791] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:31:01.425 [2024-12-16 12:52:27.246798] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:31:01.425 [2024-12-16 12:52:27.246903] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:31:01.425 [2024-12-16 12:52:27.246907] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:31:01.425 [2024-12-16 12:52:27.246915] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.425 [2024-12-16 12:52:27.246918] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.425 [2024-12-16 12:52:27.246921] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x103a0d0) 00:31:01.425 [2024-12-16 12:52:27.246927] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.425 [2024-12-16 12:52:27.246936] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a4540, cid 0, qid 0 00:31:01.425 [2024-12-16 12:52:27.247051] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.425 [2024-12-16 12:52:27.247057] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =5 00:31:01.425 [2024-12-16 12:52:27.247059] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.425 [2024-12-16 12:52:27.247063] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a4540) on tqpair=0x103a0d0 00:31:01.425 [2024-12-16 12:52:27.247067] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:31:01.425 [2024-12-16 12:52:27.247075] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.425 [2024-12-16 12:52:27.247078] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.425 [2024-12-16 12:52:27.247082] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x103a0d0) 00:31:01.425 [2024-12-16 12:52:27.247087] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.425 [2024-12-16 12:52:27.247096] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a4540, cid 0, qid 0 00:31:01.425 [2024-12-16 12:52:27.247203] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.425 [2024-12-16 12:52:27.247209] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.425 [2024-12-16 12:52:27.247212] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.425 [2024-12-16 12:52:27.247215] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a4540) on tqpair=0x103a0d0 00:31:01.425 [2024-12-16 12:52:27.247220] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:31:01.425 [2024-12-16 12:52:27.247224] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:31:01.425 [2024-12-16 12:52:27.247230] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:31:01.425 [2024-12-16 12:52:27.247241] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:31:01.425 [2024-12-16 12:52:27.247251] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.425 [2024-12-16 12:52:27.247254] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x103a0d0) 00:31:01.425 [2024-12-16 12:52:27.247260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.425 [2024-12-16 12:52:27.247269] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a4540, cid 0, qid 0 00:31:01.425 [2024-12-16 12:52:27.247369] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:01.425 [2024-12-16 12:52:27.247375] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:01.425 [2024-12-16 12:52:27.247378] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:01.425 [2024-12-16 12:52:27.247382] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x103a0d0): datao=0, datal=4096, cccid=0 00:31:01.425 [2024-12-16 12:52:27.247386] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10a4540) on tqpair(0x103a0d0): expected_datao=0, 
payload_size=4096 00:31:01.425 [2024-12-16 12:52:27.247390] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.425 [2024-12-16 12:52:27.247403] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:01.425 [2024-12-16 12:52:27.247408] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:01.425 [2024-12-16 12:52:27.289207] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.425 [2024-12-16 12:52:27.289219] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.425 [2024-12-16 12:52:27.289223] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.425 [2024-12-16 12:52:27.289227] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a4540) on tqpair=0x103a0d0 00:31:01.425 [2024-12-16 12:52:27.289234] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:31:01.425 [2024-12-16 12:52:27.289239] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:31:01.425 [2024-12-16 12:52:27.289243] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:31:01.425 [2024-12-16 12:52:27.289248] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:31:01.425 [2024-12-16 12:52:27.289252] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:31:01.425 [2024-12-16 12:52:27.289256] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:31:01.425 [2024-12-16 12:52:27.289265] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:31:01.425 [2024-12-16 12:52:27.289272] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.425 [2024-12-16 12:52:27.289276] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.425 [2024-12-16 12:52:27.289280] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x103a0d0) 00:31:01.426 [2024-12-16 12:52:27.289287] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:01.426 [2024-12-16 12:52:27.289298] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a4540, cid 0, qid 0 00:31:01.426 [2024-12-16 12:52:27.289404] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.426 [2024-12-16 12:52:27.289410] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.426 [2024-12-16 12:52:27.289413] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.426 [2024-12-16 12:52:27.289417] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a4540) on tqpair=0x103a0d0 00:31:01.426 [2024-12-16 12:52:27.289423] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.426 [2024-12-16 12:52:27.289429] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.426 [2024-12-16 12:52:27.289432] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x103a0d0) 00:31:01.426 [2024-12-16 12:52:27.289438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:01.426 [2024-12-16 12:52:27.289443] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.426 [2024-12-16 12:52:27.289446] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.426 [2024-12-16 12:52:27.289449] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x103a0d0) 00:31:01.426 [2024-12-16 12:52:27.289454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:01.426 [2024-12-16 12:52:27.289459] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.426 [2024-12-16 12:52:27.289463] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.426 [2024-12-16 12:52:27.289466] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x103a0d0) 00:31:01.426 [2024-12-16 12:52:27.289470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:01.426 [2024-12-16 12:52:27.289476] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.426 [2024-12-16 12:52:27.289479] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.426 [2024-12-16 12:52:27.289482] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x103a0d0) 00:31:01.426 [2024-12-16 12:52:27.289487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:01.426 [2024-12-16 12:52:27.289491] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:31:01.426 [2024-12-16 12:52:27.289502] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:31:01.426 [2024-12-16 12:52:27.289508] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.426 [2024-12-16 12:52:27.289511] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x103a0d0) 00:31:01.426 [2024-12-16 12:52:27.289517] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.426 [2024-12-16 12:52:27.289528] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a4540, cid 0, qid 0 00:31:01.426 [2024-12-16 12:52:27.289533] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a46c0, cid 1, qid 0 00:31:01.426 [2024-12-16 12:52:27.289537] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a4840, cid 2, qid 0 00:31:01.426 [2024-12-16 12:52:27.289541] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a49c0, cid 3, qid 0 00:31:01.426 [2024-12-16 12:52:27.289545] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a4b40, cid 4, qid 0 00:31:01.426 [2024-12-16 12:52:27.289643] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.426 [2024-12-16 12:52:27.289648] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.426 [2024-12-16 12:52:27.289651] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.426 [2024-12-16 12:52:27.289655] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x10a4b40) on tqpair=0x103a0d0 00:31:01.426 [2024-12-16 12:52:27.289659] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:31:01.426 [2024-12-16 12:52:27.289664] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:31:01.426 [2024-12-16 12:52:27.289673] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.426 [2024-12-16 12:52:27.289677] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x103a0d0) 00:31:01.426 [2024-12-16 12:52:27.289684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.426 [2024-12-16 12:52:27.289694] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a4b40, cid 4, qid 0 00:31:01.426 [2024-12-16 12:52:27.289815] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:01.426 [2024-12-16 12:52:27.289821] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:01.426 [2024-12-16 12:52:27.289824] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:01.426 [2024-12-16 12:52:27.289827] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x103a0d0): datao=0, datal=4096, cccid=4 00:31:01.426 [2024-12-16 12:52:27.289831] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10a4b40) on tqpair(0x103a0d0): expected_datao=0, payload_size=4096 00:31:01.426 [2024-12-16 12:52:27.289835] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.426 [2024-12-16 12:52:27.289841] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:01.426 [2024-12-16 12:52:27.289844] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:01.426 [2024-12-16 12:52:27.289876] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.426 [2024-12-16 12:52:27.289881] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.426 [2024-12-16 12:52:27.289884] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.426 [2024-12-16 12:52:27.289887] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a4b40) on tqpair=0x103a0d0 00:31:01.426 [2024-12-16 12:52:27.289899] nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:31:01.426 [2024-12-16 12:52:27.289923] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.426 [2024-12-16 12:52:27.289927] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x103a0d0) 00:31:01.426 [2024-12-16 12:52:27.289933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.426 [2024-12-16 12:52:27.289938] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.426 [2024-12-16 12:52:27.289942] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.426 [2024-12-16 12:52:27.289945] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x103a0d0) 00:31:01.426 [2024-12-16 12:52:27.289950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:31:01.426 [2024-12-16 
12:52:27.289961] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a4b40, cid 4, qid 0 00:31:01.426 [2024-12-16 12:52:27.289966] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a4cc0, cid 5, qid 0 00:31:01.426 [2024-12-16 12:52:27.290081] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:01.426 [2024-12-16 12:52:27.290087] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:01.426 [2024-12-16 12:52:27.290090] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:01.426 [2024-12-16 12:52:27.290094] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x103a0d0): datao=0, datal=1024, cccid=4 00:31:01.426 [2024-12-16 12:52:27.290097] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10a4b40) on tqpair(0x103a0d0): expected_datao=0, payload_size=1024 00:31:01.426 [2024-12-16 12:52:27.290101] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.426 [2024-12-16 12:52:27.290107] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:01.426 [2024-12-16 12:52:27.290110] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:01.426 [2024-12-16 12:52:27.294121] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.426 [2024-12-16 12:52:27.294127] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.426 [2024-12-16 12:52:27.294130] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.426 [2024-12-16 12:52:27.294133] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a4cc0) on tqpair=0x103a0d0 00:31:01.426 [2024-12-16 12:52:27.334124] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.426 [2024-12-16 12:52:27.334132] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.426 [2024-12-16 12:52:27.334135] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.426 [2024-12-16 12:52:27.334138] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a4b40) on tqpair=0x103a0d0 00:31:01.426 [2024-12-16 12:52:27.334148] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.426 [2024-12-16 12:52:27.334151] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x103a0d0) 00:31:01.426 [2024-12-16 12:52:27.334158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.426 [2024-12-16 12:52:27.334174] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a4b40, cid 4, qid 0 00:31:01.426 [2024-12-16 12:52:27.334342] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:01.426 [2024-12-16 12:52:27.334348] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:01.426 [2024-12-16 12:52:27.334351] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:01.426 [2024-12-16 12:52:27.334354] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x103a0d0): datao=0, datal=3072, cccid=4 00:31:01.426 [2024-12-16 12:52:27.334358] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10a4b40) on tqpair(0x103a0d0): expected_datao=0, payload_size=3072 00:31:01.426 [2024-12-16 12:52:27.334362] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.426 [2024-12-16 12:52:27.334393] 
nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:01.426 [2024-12-16 12:52:27.334397] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:01.426 [2024-12-16 12:52:27.375318] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.426 [2024-12-16 12:52:27.375327] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.426 [2024-12-16 12:52:27.375330] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.426 [2024-12-16 12:52:27.375333] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a4b40) on tqpair=0x103a0d0 00:31:01.426 [2024-12-16 12:52:27.375341] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.426 [2024-12-16 12:52:27.375344] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x103a0d0) 00:31:01.426 [2024-12-16 12:52:27.375351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.426 [2024-12-16 12:52:27.375365] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a4b40, cid 4, qid 0 00:31:01.426 [2024-12-16 12:52:27.375437] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:01.426 [2024-12-16 12:52:27.375442] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:01.426 [2024-12-16 12:52:27.375445] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:01.426 [2024-12-16 12:52:27.375448] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x103a0d0): datao=0, datal=8, cccid=4 00:31:01.426 [2024-12-16 12:52:27.375452] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10a4b40) on tqpair(0x103a0d0): expected_datao=0, payload_size=8 00:31:01.427 [2024-12-16 12:52:27.375456] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.427 [2024-12-16 12:52:27.375462] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:01.427 [2024-12-16 12:52:27.375465] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:01.427 [2024-12-16 12:52:27.417251] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.427 [2024-12-16 12:52:27.417261] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.427 [2024-12-16 12:52:27.417264] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.427 [2024-12-16 12:52:27.417268] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a4b40) on tqpair=0x103a0d0 00:31:01.427 ===================================================== 00:31:01.427 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:31:01.427 ===================================================== 00:31:01.427 Controller Capabilities/Features 00:31:01.427 ================================ 00:31:01.427 Vendor ID: 0000 00:31:01.427 Subsystem Vendor ID: 0000 00:31:01.427 Serial Number: .................... 00:31:01.427 Model Number: ........................................ 
00:31:01.427 Firmware Version: 24.09.1 00:31:01.427 Recommended Arb Burst: 0 00:31:01.427 IEEE OUI Identifier: 00 00 00 00:31:01.427 Multi-path I/O 00:31:01.427 May have multiple subsystem ports: No 00:31:01.427 May have multiple controllers: No 00:31:01.427 Associated with SR-IOV VF: No 00:31:01.427 Max Data Transfer Size: 131072 00:31:01.427 Max Number of Namespaces: 0 00:31:01.427 Max Number of I/O Queues: 1024 00:31:01.427 NVMe Specification Version (VS): 1.3 00:31:01.427 NVMe Specification Version (Identify): 1.3 00:31:01.427 Maximum Queue Entries: 128 00:31:01.427 Contiguous Queues Required: Yes 00:31:01.427 Arbitration Mechanisms Supported 00:31:01.427 Weighted Round Robin: Not Supported 00:31:01.427 Vendor Specific: Not Supported 00:31:01.427 Reset Timeout: 15000 ms 00:31:01.427 Doorbell Stride: 4 bytes 00:31:01.427 NVM Subsystem Reset: Not Supported 00:31:01.427 Command Sets Supported 00:31:01.427 NVM Command Set: Supported 00:31:01.427 Boot Partition: Not Supported 00:31:01.427 Memory Page Size Minimum: 4096 bytes 00:31:01.427 Memory Page Size Maximum: 4096 bytes 00:31:01.427 Persistent Memory Region: Not Supported 00:31:01.427 Optional Asynchronous Events Supported 00:31:01.427 Namespace Attribute Notices: Not Supported 00:31:01.427 Firmware Activation Notices: Not Supported 00:31:01.427 ANA Change Notices: Not Supported 00:31:01.427 PLE Aggregate Log Change Notices: Not Supported 00:31:01.427 LBA Status Info Alert Notices: Not Supported 00:31:01.427 EGE Aggregate Log Change Notices: Not Supported 00:31:01.427 Normal NVM Subsystem Shutdown event: Not Supported 00:31:01.427 Zone Descriptor Change Notices: Not Supported 00:31:01.427 Discovery Log Change Notices: Supported 00:31:01.427 Controller Attributes 00:31:01.427 128-bit Host Identifier: Not Supported 00:31:01.427 Non-Operational Permissive Mode: Not Supported 00:31:01.427 NVM Sets: Not Supported 00:31:01.427 Read Recovery Levels: Not Supported 00:31:01.427 Endurance Groups: Not Supported 00:31:01.427 Predictable Latency Mode: Not Supported 00:31:01.427 Traffic Based Keep ALive: Not Supported 00:31:01.427 Namespace Granularity: Not Supported 00:31:01.427 SQ Associations: Not Supported 00:31:01.427 UUID List: Not Supported 00:31:01.427 Multi-Domain Subsystem: Not Supported 00:31:01.427 Fixed Capacity Management: Not Supported 00:31:01.427 Variable Capacity Management: Not Supported 00:31:01.427 Delete Endurance Group: Not Supported 00:31:01.427 Delete NVM Set: Not Supported 00:31:01.427 Extended LBA Formats Supported: Not Supported 00:31:01.427 Flexible Data Placement Supported: Not Supported 00:31:01.427 00:31:01.427 Controller Memory Buffer Support 00:31:01.427 ================================ 00:31:01.427 Supported: No 00:31:01.427 00:31:01.427 Persistent Memory Region Support 00:31:01.427 ================================ 00:31:01.427 Supported: No 00:31:01.427 00:31:01.427 Admin Command Set Attributes 00:31:01.427 ============================ 00:31:01.427 Security Send/Receive: Not Supported 00:31:01.427 Format NVM: Not Supported 00:31:01.427 Firmware Activate/Download: Not Supported 00:31:01.427 Namespace Management: Not Supported 00:31:01.427 Device Self-Test: Not Supported 00:31:01.427 Directives: Not Supported 00:31:01.427 NVMe-MI: Not Supported 00:31:01.427 Virtualization Management: Not Supported 00:31:01.427 Doorbell Buffer Config: Not Supported 00:31:01.427 Get LBA Status Capability: Not Supported 00:31:01.427 Command & Feature Lockdown Capability: Not Supported 00:31:01.427 Abort Command Limit: 1 00:31:01.427 
Async Event Request Limit: 4 00:31:01.427 Number of Firmware Slots: N/A 00:31:01.427 Firmware Slot 1 Read-Only: N/A 00:31:01.427 Firmware Activation Without Reset: N/A 00:31:01.427 Multiple Update Detection Support: N/A 00:31:01.427 Firmware Update Granularity: No Information Provided 00:31:01.427 Per-Namespace SMART Log: No 00:31:01.427 Asymmetric Namespace Access Log Page: Not Supported 00:31:01.427 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:31:01.427 Command Effects Log Page: Not Supported 00:31:01.427 Get Log Page Extended Data: Supported 00:31:01.427 Telemetry Log Pages: Not Supported 00:31:01.427 Persistent Event Log Pages: Not Supported 00:31:01.427 Supported Log Pages Log Page: May Support 00:31:01.427 Commands Supported & Effects Log Page: Not Supported 00:31:01.427 Feature Identifiers & Effects Log Page:May Support 00:31:01.427 NVMe-MI Commands & Effects Log Page: May Support 00:31:01.427 Data Area 4 for Telemetry Log: Not Supported 00:31:01.427 Error Log Page Entries Supported: 128 00:31:01.427 Keep Alive: Not Supported 00:31:01.427 00:31:01.427 NVM Command Set Attributes 00:31:01.427 ========================== 00:31:01.427 Submission Queue Entry Size 00:31:01.427 Max: 1 00:31:01.427 Min: 1 00:31:01.427 Completion Queue Entry Size 00:31:01.427 Max: 1 00:31:01.427 Min: 1 00:31:01.427 Number of Namespaces: 0 00:31:01.427 Compare Command: Not Supported 00:31:01.427 Write Uncorrectable Command: Not Supported 00:31:01.427 Dataset Management Command: Not Supported 00:31:01.427 Write Zeroes Command: Not Supported 00:31:01.427 Set Features Save Field: Not Supported 00:31:01.427 Reservations: Not Supported 00:31:01.427 Timestamp: Not Supported 00:31:01.427 Copy: Not Supported 00:31:01.427 Volatile Write Cache: Not Present 00:31:01.427 Atomic Write Unit (Normal): 1 00:31:01.427 Atomic Write Unit (PFail): 1 00:31:01.427 Atomic Compare & Write Unit: 1 00:31:01.427 Fused Compare & Write: Supported 00:31:01.427 Scatter-Gather List 00:31:01.427 SGL Command Set: Supported 00:31:01.427 SGL Keyed: Supported 00:31:01.427 SGL Bit Bucket Descriptor: Not Supported 00:31:01.427 SGL Metadata Pointer: Not Supported 00:31:01.427 Oversized SGL: Not Supported 00:31:01.427 SGL Metadata Address: Not Supported 00:31:01.427 SGL Offset: Supported 00:31:01.427 Transport SGL Data Block: Not Supported 00:31:01.427 Replay Protected Memory Block: Not Supported 00:31:01.427 00:31:01.427 Firmware Slot Information 00:31:01.427 ========================= 00:31:01.427 Active slot: 0 00:31:01.427 00:31:01.427 00:31:01.427 Error Log 00:31:01.427 ========= 00:31:01.427 00:31:01.427 Active Namespaces 00:31:01.427 ================= 00:31:01.427 Discovery Log Page 00:31:01.427 ================== 00:31:01.427 Generation Counter: 2 00:31:01.427 Number of Records: 2 00:31:01.427 Record Format: 0 00:31:01.427 00:31:01.427 Discovery Log Entry 0 00:31:01.427 ---------------------- 00:31:01.427 Transport Type: 3 (TCP) 00:31:01.427 Address Family: 1 (IPv4) 00:31:01.428 Subsystem Type: 3 (Current Discovery Subsystem) 00:31:01.428 Entry Flags: 00:31:01.428 Duplicate Returned Information: 1 00:31:01.428 Explicit Persistent Connection Support for Discovery: 1 00:31:01.428 Transport Requirements: 00:31:01.428 Secure Channel: Not Required 00:31:01.428 Port ID: 0 (0x0000) 00:31:01.428 Controller ID: 65535 (0xffff) 00:31:01.428 Admin Max SQ Size: 128 00:31:01.428 Transport Service Identifier: 4420 00:31:01.428 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:31:01.428 Transport Address: 10.0.0.2 00:31:01.428 
Discovery Log Entry 1 00:31:01.428 ---------------------- 00:31:01.428 Transport Type: 3 (TCP) 00:31:01.428 Address Family: 1 (IPv4) 00:31:01.428 Subsystem Type: 2 (NVM Subsystem) 00:31:01.428 Entry Flags: 00:31:01.428 Duplicate Returned Information: 0 00:31:01.428 Explicit Persistent Connection Support for Discovery: 0 00:31:01.428 Transport Requirements: 00:31:01.428 Secure Channel: Not Required 00:31:01.428 Port ID: 0 (0x0000) 00:31:01.428 Controller ID: 65535 (0xffff) 00:31:01.428 Admin Max SQ Size: 128 00:31:01.428 Transport Service Identifier: 4420 00:31:01.428 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:31:01.428 Transport Address: 10.0.0.2 [2024-12-16 12:52:27.417345] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:31:01.428 [2024-12-16 12:52:27.417356] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a4540) on tqpair=0x103a0d0 00:31:01.428 [2024-12-16 12:52:27.417362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.428 [2024-12-16 12:52:27.417367] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a46c0) on tqpair=0x103a0d0 00:31:01.428 [2024-12-16 12:52:27.417371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.428 [2024-12-16 12:52:27.417376] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a4840) on tqpair=0x103a0d0 00:31:01.428 [2024-12-16 12:52:27.417380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.428 [2024-12-16 12:52:27.417384] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a49c0) on tqpair=0x103a0d0 00:31:01.428 [2024-12-16 12:52:27.417388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.428 [2024-12-16 12:52:27.417396] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.428 [2024-12-16 12:52:27.417399] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.428 [2024-12-16 12:52:27.417402] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x103a0d0) 00:31:01.428 [2024-12-16 12:52:27.417408] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.428 [2024-12-16 12:52:27.417421] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a49c0, cid 3, qid 0 00:31:01.428 [2024-12-16 12:52:27.417484] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.428 [2024-12-16 12:52:27.417490] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.428 [2024-12-16 12:52:27.417493] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.428 [2024-12-16 12:52:27.417496] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a49c0) on tqpair=0x103a0d0 00:31:01.428 [2024-12-16 12:52:27.417502] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.428 [2024-12-16 12:52:27.417505] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.428 [2024-12-16 12:52:27.417508] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x103a0d0) 00:31:01.428 [2024-12-16 
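The report above is the expected shape for an SPDK discovery controller: no namespaces, minimal queue sizes, and a two-record discovery log in which entry 0 describes the discovery service itself (subsystem type 3) and entry 1 the NVM subsystem nqn.2016-06.io.spdk:cnode1 (type 2), both listening at 10.0.0.2:4420. The same page can be cross-checked from the initiator side with stock nvme-cli, assuming it is installed on the host:

  modprobe nvme-tcp   # kernel initiator; already loaded earlier in this run
  nvme discover -t tcp -a 10.0.0.2 -s 4420
  # expect two records: one of subtype discovery for the service itself and
  # one of subtype nvme naming nqn.2016-06.io.spdk:cnode1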
12:52:27.417513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.428 [2024-12-16 12:52:27.417525] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a49c0, cid 3, qid 0 00:31:01.428 [2024-12-16 12:52:27.417592] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.428 [2024-12-16 12:52:27.417598] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.428 [2024-12-16 12:52:27.417601] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.428 [2024-12-16 12:52:27.417604] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a49c0) on tqpair=0x103a0d0 00:31:01.428 [2024-12-16 12:52:27.417608] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:31:01.428 [2024-12-16 12:52:27.417615] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:31:01.428 [2024-12-16 12:52:27.417623] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.428 [2024-12-16 12:52:27.417626] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.428 [2024-12-16 12:52:27.417630] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x103a0d0) 00:31:01.428 [2024-12-16 12:52:27.417635] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.428 [2024-12-16 12:52:27.417644] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a49c0, cid 3, qid 0 00:31:01.428 [2024-12-16 12:52:27.417702] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.428 [2024-12-16 12:52:27.417708] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.428 [2024-12-16 12:52:27.417711] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.428 [2024-12-16 12:52:27.417714] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a49c0) on tqpair=0x103a0d0 00:31:01.428 [2024-12-16 12:52:27.417722] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.428 [2024-12-16 12:52:27.417726] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.428 [2024-12-16 12:52:27.417729] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x103a0d0) 00:31:01.428 [2024-12-16 12:52:27.417734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.428 [2024-12-16 12:52:27.417744] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a49c0, cid 3, qid 0 00:31:01.428 [2024-12-16 12:52:27.417802] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.428 [2024-12-16 12:52:27.417808] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.428 [2024-12-16 12:52:27.417811] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.428 [2024-12-16 12:52:27.417814] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a49c0) on tqpair=0x103a0d0 00:31:01.428 [2024-12-16 12:52:27.417822] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.428 [2024-12-16 12:52:27.417825] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.428 [2024-12-16 12:52:27.417829] 
nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x103a0d0) 00:31:01.428 [2024-12-16 12:52:27.417834] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.428 [2024-12-16 12:52:27.417843] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a49c0, cid 3, qid 0 00:31:01.428 [2024-12-16 12:52:27.417911] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.428 [2024-12-16 12:52:27.417917] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.428 [2024-12-16 12:52:27.417920] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.428 [2024-12-16 12:52:27.417923] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a49c0) on tqpair=0x103a0d0 00:31:01.428 [2024-12-16 12:52:27.417931] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.428 [2024-12-16 12:52:27.417934] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.428 [2024-12-16 12:52:27.417937] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x103a0d0) 00:31:01.428 [2024-12-16 12:52:27.417943] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.428 [2024-12-16 12:52:27.417952] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a49c0, cid 3, qid 0 00:31:01.428 [2024-12-16 12:52:27.418021] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.428 [2024-12-16 12:52:27.418027] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.428 [2024-12-16 12:52:27.418030] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.428 [2024-12-16 12:52:27.418033] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a49c0) on tqpair=0x103a0d0 00:31:01.428 [2024-12-16 12:52:27.418040] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.428 [2024-12-16 12:52:27.418044] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.428 [2024-12-16 12:52:27.418047] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x103a0d0) 00:31:01.428 [2024-12-16 12:52:27.418052] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.428 [2024-12-16 12:52:27.418061] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a49c0, cid 3, qid 0 00:31:01.428 [2024-12-16 12:52:27.422120] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.428 [2024-12-16 12:52:27.422138] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.428 [2024-12-16 12:52:27.422142] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.428 [2024-12-16 12:52:27.422145] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a49c0) on tqpair=0x103a0d0 00:31:01.428 [2024-12-16 12:52:27.422154] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.428 [2024-12-16 12:52:27.422158] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.428 [2024-12-16 12:52:27.422162] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x103a0d0) 00:31:01.428 [2024-12-16 12:52:27.422167] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.428 [2024-12-16 12:52:27.422179] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a49c0, cid 3, qid 0 00:31:01.428 [2024-12-16 12:52:27.422314] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.428 [2024-12-16 12:52:27.422320] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.428 [2024-12-16 12:52:27.422323] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.428 [2024-12-16 12:52:27.422327] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a49c0) on tqpair=0x103a0d0 00:31:01.428 [2024-12-16 12:52:27.422333] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:31:01.428 00:31:01.428 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:31:01.428 [2024-12-16 12:52:27.459365] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:31:01.429 [2024-12-16 12:52:27.459410] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid493290 ] 00:31:01.692 [2024-12-16 12:52:27.485098] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:31:01.692 [2024-12-16 12:52:27.489150] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:31:01.692 [2024-12-16 12:52:27.489156] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:31:01.692 [2024-12-16 12:52:27.489169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:31:01.692 [2024-12-16 12:52:27.489178] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:31:01.692 [2024-12-16 12:52:27.489567] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:31:01.692 [2024-12-16 12:52:27.489591] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x243b0d0 0 00:31:01.692 [2024-12-16 12:52:27.504123] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:31:01.692 [2024-12-16 12:52:27.504137] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:31:01.692 [2024-12-16 12:52:27.504141] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:31:01.692 [2024-12-16 12:52:27.504144] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:31:01.692 [2024-12-16 12:52:27.504166] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.692 [2024-12-16 12:52:27.504171] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.692 [2024-12-16 12:52:27.504174] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x243b0d0) 00:31:01.692 [2024-12-16 12:52:27.504184] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:31:01.692 [2024-12-16 12:52:27.504203] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x24a5540, cid 0, qid 0 00:31:01.692 [2024-12-16 12:52:27.512123] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.692 [2024-12-16 12:52:27.512131] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.692 [2024-12-16 12:52:27.512134] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.692 [2024-12-16 12:52:27.512137] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a5540) on tqpair=0x243b0d0 00:31:01.692 [2024-12-16 12:52:27.512145] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:31:01.692 [2024-12-16 12:52:27.512151] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:31:01.692 [2024-12-16 12:52:27.512156] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:31:01.692 [2024-12-16 12:52:27.512166] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.692 [2024-12-16 12:52:27.512170] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.692 [2024-12-16 12:52:27.512173] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x243b0d0) 00:31:01.692 [2024-12-16 12:52:27.512180] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.692 [2024-12-16 12:52:27.512193] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a5540, cid 0, qid 0 00:31:01.692 [2024-12-16 12:52:27.512369] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.692 [2024-12-16 12:52:27.512375] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.692 [2024-12-16 12:52:27.512378] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.692 [2024-12-16 12:52:27.512381] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a5540) on tqpair=0x243b0d0 00:31:01.692 [2024-12-16 12:52:27.512385] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:31:01.692 [2024-12-16 12:52:27.512392] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:31:01.692 [2024-12-16 12:52:27.512398] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.692 [2024-12-16 12:52:27.512401] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.692 [2024-12-16 12:52:27.512404] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x243b0d0) 00:31:01.692 [2024-12-16 12:52:27.512410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.692 [2024-12-16 12:52:27.512420] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a5540, cid 0, qid 0 00:31:01.692 [2024-12-16 12:52:27.512482] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.692 [2024-12-16 12:52:27.512488] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.692 [2024-12-16 12:52:27.512491] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.692 [2024-12-16 12:52:27.512494] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a5540) on tqpair=0x243b0d0 00:31:01.692 [2024-12-16 12:52:27.512498] 
nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:31:01.692 [2024-12-16 12:52:27.512505] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:31:01.692 [2024-12-16 12:52:27.512511] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.692 [2024-12-16 12:52:27.512514] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.692 [2024-12-16 12:52:27.512518] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x243b0d0) 00:31:01.692 [2024-12-16 12:52:27.512523] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.692 [2024-12-16 12:52:27.512535] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a5540, cid 0, qid 0 00:31:01.692 [2024-12-16 12:52:27.512598] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.692 [2024-12-16 12:52:27.512604] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.692 [2024-12-16 12:52:27.512607] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.692 [2024-12-16 12:52:27.512610] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a5540) on tqpair=0x243b0d0 00:31:01.692 [2024-12-16 12:52:27.512614] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:31:01.692 [2024-12-16 12:52:27.512622] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.692 [2024-12-16 12:52:27.512626] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.692 [2024-12-16 12:52:27.512629] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x243b0d0) 00:31:01.692 [2024-12-16 12:52:27.512635] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.692 [2024-12-16 12:52:27.512644] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a5540, cid 0, qid 0 00:31:01.692 [2024-12-16 12:52:27.512709] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.692 [2024-12-16 12:52:27.512715] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.692 [2024-12-16 12:52:27.512718] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.692 [2024-12-16 12:52:27.512721] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a5540) on tqpair=0x243b0d0 00:31:01.692 [2024-12-16 12:52:27.512725] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:31:01.692 [2024-12-16 12:52:27.512729] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:31:01.692 [2024-12-16 12:52:27.512735] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:31:01.692 [2024-12-16 12:52:27.512840] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:31:01.692 [2024-12-16 12:52:27.512843] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable 
controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:31:01.692 [2024-12-16 12:52:27.512850] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.692 [2024-12-16 12:52:27.512853] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.692 [2024-12-16 12:52:27.512856] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x243b0d0) 00:31:01.692 [2024-12-16 12:52:27.512861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.692 [2024-12-16 12:52:27.512871] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a5540, cid 0, qid 0 00:31:01.692 [2024-12-16 12:52:27.512934] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.692 [2024-12-16 12:52:27.512939] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.692 [2024-12-16 12:52:27.512943] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.692 [2024-12-16 12:52:27.512946] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a5540) on tqpair=0x243b0d0 00:31:01.692 [2024-12-16 12:52:27.512950] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:31:01.692 [2024-12-16 12:52:27.512957] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.692 [2024-12-16 12:52:27.512961] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.692 [2024-12-16 12:52:27.512964] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x243b0d0) 00:31:01.692 [2024-12-16 12:52:27.512969] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.692 [2024-12-16 12:52:27.512981] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a5540, cid 0, qid 0 00:31:01.692 [2024-12-16 12:52:27.513044] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.692 [2024-12-16 12:52:27.513050] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.692 [2024-12-16 12:52:27.513052] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.692 [2024-12-16 12:52:27.513056] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a5540) on tqpair=0x243b0d0 00:31:01.692 [2024-12-16 12:52:27.513060] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:31:01.692 [2024-12-16 12:52:27.513063] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:31:01.692 [2024-12-16 12:52:27.513070] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:31:01.692 [2024-12-16 12:52:27.513079] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:31:01.692 [2024-12-16 12:52:27.513086] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.692 [2024-12-16 12:52:27.513090] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x243b0d0) 00:31:01.692 [2024-12-16 12:52:27.513095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) 
qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.692 [2024-12-16 12:52:27.513104] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a5540, cid 0, qid 0 00:31:01.692 [2024-12-16 12:52:27.513191] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:01.692 [2024-12-16 12:52:27.513197] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:01.692 [2024-12-16 12:52:27.513200] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:01.693 [2024-12-16 12:52:27.513203] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x243b0d0): datao=0, datal=4096, cccid=0 00:31:01.693 [2024-12-16 12:52:27.513207] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24a5540) on tqpair(0x243b0d0): expected_datao=0, payload_size=4096 00:31:01.693 [2024-12-16 12:52:27.513211] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.693 [2024-12-16 12:52:27.513222] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:01.693 [2024-12-16 12:52:27.513225] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:01.693 [2024-12-16 12:52:27.513257] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.693 [2024-12-16 12:52:27.513262] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.693 [2024-12-16 12:52:27.513265] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.693 [2024-12-16 12:52:27.513269] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a5540) on tqpair=0x243b0d0 00:31:01.693 [2024-12-16 12:52:27.513274] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:31:01.693 [2024-12-16 12:52:27.513278] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:31:01.693 [2024-12-16 12:52:27.513282] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:31:01.693 [2024-12-16 12:52:27.513286] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:31:01.693 [2024-12-16 12:52:27.513289] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:31:01.693 [2024-12-16 12:52:27.513293] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:31:01.693 [2024-12-16 12:52:27.513301] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:31:01.693 [2024-12-16 12:52:27.513307] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.693 [2024-12-16 12:52:27.513312] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.693 [2024-12-16 12:52:27.513315] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x243b0d0) 00:31:01.693 [2024-12-16 12:52:27.513321] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:01.693 [2024-12-16 12:52:27.513331] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a5540, cid 0, qid 0 00:31:01.693 [2024-12-16 12:52:27.513395] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.693 [2024-12-16 
12:52:27.513400] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.693 [2024-12-16 12:52:27.513403] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.693 [2024-12-16 12:52:27.513406] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a5540) on tqpair=0x243b0d0 00:31:01.693 [2024-12-16 12:52:27.513412] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.693 [2024-12-16 12:52:27.513415] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.693 [2024-12-16 12:52:27.513418] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x243b0d0) 00:31:01.693 [2024-12-16 12:52:27.513423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:01.693 [2024-12-16 12:52:27.513428] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.693 [2024-12-16 12:52:27.513431] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.693 [2024-12-16 12:52:27.513434] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x243b0d0) 00:31:01.693 [2024-12-16 12:52:27.513439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:01.693 [2024-12-16 12:52:27.513444] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.693 [2024-12-16 12:52:27.513447] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.693 [2024-12-16 12:52:27.513450] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x243b0d0) 00:31:01.693 [2024-12-16 12:52:27.513455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:01.693 [2024-12-16 12:52:27.513460] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.693 [2024-12-16 12:52:27.513463] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.693 [2024-12-16 12:52:27.513466] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x243b0d0) 00:31:01.693 [2024-12-16 12:52:27.513471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:01.693 [2024-12-16 12:52:27.513474] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:31:01.693 [2024-12-16 12:52:27.513484] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:31:01.693 [2024-12-16 12:52:27.513490] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.693 [2024-12-16 12:52:27.513493] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x243b0d0) 00:31:01.693 [2024-12-16 12:52:27.513499] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.693 [2024-12-16 12:52:27.513509] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a5540, cid 0, qid 0 00:31:01.693 [2024-12-16 12:52:27.513513] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a56c0, cid 1, qid 0 00:31:01.693 [2024-12-16 12:52:27.513517] 
nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a5840, cid 2, qid 0 00:31:01.693 [2024-12-16 12:52:27.513521] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a59c0, cid 3, qid 0 00:31:01.693 [2024-12-16 12:52:27.513527] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a5b40, cid 4, qid 0 00:31:01.693 [2024-12-16 12:52:27.513618] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.693 [2024-12-16 12:52:27.513623] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.693 [2024-12-16 12:52:27.513626] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.693 [2024-12-16 12:52:27.513629] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a5b40) on tqpair=0x243b0d0 00:31:01.693 [2024-12-16 12:52:27.513633] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:31:01.693 [2024-12-16 12:52:27.513637] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:31:01.693 [2024-12-16 12:52:27.513644] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:31:01.693 [2024-12-16 12:52:27.513654] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:31:01.693 [2024-12-16 12:52:27.513659] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.693 [2024-12-16 12:52:27.513663] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.693 [2024-12-16 12:52:27.513666] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x243b0d0) 00:31:01.693 [2024-12-16 12:52:27.513671] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:01.693 [2024-12-16 12:52:27.513681] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a5b40, cid 4, qid 0 00:31:01.693 [2024-12-16 12:52:27.513748] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.693 [2024-12-16 12:52:27.513754] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.693 [2024-12-16 12:52:27.513757] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.693 [2024-12-16 12:52:27.513760] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a5b40) on tqpair=0x243b0d0 00:31:01.693 [2024-12-16 12:52:27.513810] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:31:01.693 [2024-12-16 12:52:27.513818] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:31:01.693 [2024-12-16 12:52:27.513825] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.693 [2024-12-16 12:52:27.513828] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x243b0d0) 00:31:01.693 [2024-12-16 12:52:27.513833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.693 [2024-12-16 12:52:27.513843] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a5b40, cid 4, qid 0 00:31:01.693 [2024-12-16 12:52:27.513917] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:01.693 [2024-12-16 12:52:27.513922] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:01.693 [2024-12-16 12:52:27.513925] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:01.693 [2024-12-16 12:52:27.513928] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x243b0d0): datao=0, datal=4096, cccid=4 00:31:01.693 [2024-12-16 12:52:27.513932] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24a5b40) on tqpair(0x243b0d0): expected_datao=0, payload_size=4096 00:31:01.693 [2024-12-16 12:52:27.513936] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.693 [2024-12-16 12:52:27.513948] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:01.693 [2024-12-16 12:52:27.513951] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:01.693 [2024-12-16 12:52:27.559121] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.693 [2024-12-16 12:52:27.559131] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.693 [2024-12-16 12:52:27.559137] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.693 [2024-12-16 12:52:27.559141] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a5b40) on tqpair=0x243b0d0 00:31:01.693 [2024-12-16 12:52:27.559150] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:31:01.693 [2024-12-16 12:52:27.559158] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:31:01.693 [2024-12-16 12:52:27.559167] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:31:01.693 [2024-12-16 12:52:27.559174] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.693 [2024-12-16 12:52:27.559177] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x243b0d0) 00:31:01.693 [2024-12-16 12:52:27.559184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.693 [2024-12-16 12:52:27.559196] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a5b40, cid 4, qid 0 00:31:01.693 [2024-12-16 12:52:27.559371] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:01.693 [2024-12-16 12:52:27.559377] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:01.693 [2024-12-16 12:52:27.559380] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:01.693 [2024-12-16 12:52:27.559383] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x243b0d0): datao=0, datal=4096, cccid=4 00:31:01.693 [2024-12-16 12:52:27.559387] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24a5b40) on tqpair(0x243b0d0): expected_datao=0, payload_size=4096 00:31:01.693 [2024-12-16 12:52:27.559391] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.693 [2024-12-16 12:52:27.559397] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:01.693 [2024-12-16 12:52:27.559400] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 
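The entries above trace spdk_nvme_connect() driving the full NVMe-oF initialization state machine on behalf of spdk_nvme_identify: ICReq/ICResp, FABRIC CONNECT, the CC.EN/CSTS.RDY enable handshake, IDENTIFY with CNS 01h (controller data), AER configuration, keep-alive and queue-count negotiation, then CNS 02h (active namespace list) and CNS 00h (per-namespace data, "Namespace 1 was added"); the CNS 03h namespace-ID-descriptor pass follows just below. A minimal host program reaching the same ready state against this target could look like the sketch here. It is an illustration, not the test harness's code: it uses only SPDK's public API, reuses the connection string passed with -r above, and abbreviates error handling.

    /* Sketch: connect to the NVMe-oF/TCP subsystem from this log and walk
     * its active namespaces.  Assumes SPDK headers and libraries are built. */
    #include "spdk/stdinc.h"
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {};
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;
        uint32_t nsid;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch";
        if (spdk_env_init(&env_opts) != 0) {
            return 1;
        }

        /* Same -r string the harness passed to spdk_nvme_identify. */
        spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1");

        /* Runs the whole init sequence traced in the surrounding entries:
         * ICReq/ICResp, FABRIC CONNECT, CC.EN = 1, CSTS.RDY poll, IDENTIFY,
         * AER setup, keep-alive and number-of-queues negotiation. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return 1;
        }

        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        /* 131072 here: min(transport 4294967295, MDTS 131072) per the log. */
        printf("Model: %.40s  max xfer: %u bytes\n",
               cdata->mn, spdk_nvme_ctrlr_get_max_xfer_size(ctrlr));

        for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
             nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
            struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);
            printf("ns %u: %" PRIu64 " bytes\n", nsid, spdk_nvme_ns_get_size(ns));
        }

        /* Detach triggers the CC.SHN shutdown handshake traced further down. */
        spdk_nvme_detach(ctrlr);
        return 0;
    }

Built against an SPDK tree, the loop would report the single namespace this target exposes; everything else in the program corresponds one-to-one with the debug entries around this point in the log.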
00:31:01.693 [2024-12-16 12:52:27.601245] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.694 [2024-12-16 12:52:27.601255] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.694 [2024-12-16 12:52:27.601258] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.694 [2024-12-16 12:52:27.601261] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a5b40) on tqpair=0x243b0d0 00:31:01.694 [2024-12-16 12:52:27.601275] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:31:01.694 [2024-12-16 12:52:27.601285] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:31:01.694 [2024-12-16 12:52:27.601291] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.694 [2024-12-16 12:52:27.601295] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x243b0d0) 00:31:01.694 [2024-12-16 12:52:27.601302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.694 [2024-12-16 12:52:27.601313] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a5b40, cid 4, qid 0 00:31:01.694 [2024-12-16 12:52:27.601386] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:01.694 [2024-12-16 12:52:27.601392] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:01.694 [2024-12-16 12:52:27.601395] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:01.694 [2024-12-16 12:52:27.601398] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x243b0d0): datao=0, datal=4096, cccid=4 00:31:01.694 [2024-12-16 12:52:27.601402] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24a5b40) on tqpair(0x243b0d0): expected_datao=0, payload_size=4096 00:31:01.694 [2024-12-16 12:52:27.601406] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.694 [2024-12-16 12:52:27.601414] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:01.694 [2024-12-16 12:52:27.601417] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:01.694 [2024-12-16 12:52:27.646119] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.694 [2024-12-16 12:52:27.646128] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.694 [2024-12-16 12:52:27.646131] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.694 [2024-12-16 12:52:27.646134] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a5b40) on tqpair=0x243b0d0 00:31:01.694 [2024-12-16 12:52:27.646141] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:31:01.694 [2024-12-16 12:52:27.646150] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:31:01.694 [2024-12-16 12:52:27.646158] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:31:01.694 [2024-12-16 12:52:27.646164] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host 
behavior support feature (timeout 30000 ms) 00:31:01.694 [2024-12-16 12:52:27.646168] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:31:01.694 [2024-12-16 12:52:27.646172] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:31:01.694 [2024-12-16 12:52:27.646177] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:31:01.694 [2024-12-16 12:52:27.646181] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:31:01.694 [2024-12-16 12:52:27.646186] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:31:01.694 [2024-12-16 12:52:27.646198] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.694 [2024-12-16 12:52:27.646202] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x243b0d0) 00:31:01.694 [2024-12-16 12:52:27.646209] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.694 [2024-12-16 12:52:27.646214] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.694 [2024-12-16 12:52:27.646217] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.694 [2024-12-16 12:52:27.646220] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x243b0d0) 00:31:01.694 [2024-12-16 12:52:27.646226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:31:01.694 [2024-12-16 12:52:27.646238] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a5b40, cid 4, qid 0 00:31:01.694 [2024-12-16 12:52:27.646243] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a5cc0, cid 5, qid 0 00:31:01.694 [2024-12-16 12:52:27.646321] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.694 [2024-12-16 12:52:27.646326] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.694 [2024-12-16 12:52:27.646329] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.694 [2024-12-16 12:52:27.646333] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a5b40) on tqpair=0x243b0d0 00:31:01.694 [2024-12-16 12:52:27.646338] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.694 [2024-12-16 12:52:27.646343] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.694 [2024-12-16 12:52:27.646346] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.694 [2024-12-16 12:52:27.646349] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a5cc0) on tqpair=0x243b0d0 00:31:01.694 [2024-12-16 12:52:27.646357] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.694 [2024-12-16 12:52:27.646360] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x243b0d0) 00:31:01.694 [2024-12-16 12:52:27.646368] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.694 [2024-12-16 12:52:27.646377] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a5cc0, cid 5, qid 0 00:31:01.694 [2024-12-16 12:52:27.646443] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.694 [2024-12-16 12:52:27.646449] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.694 [2024-12-16 12:52:27.646452] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.694 [2024-12-16 12:52:27.646455] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a5cc0) on tqpair=0x243b0d0 00:31:01.694 [2024-12-16 12:52:27.646462] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.694 [2024-12-16 12:52:27.646466] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x243b0d0) 00:31:01.694 [2024-12-16 12:52:27.646471] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.694 [2024-12-16 12:52:27.646480] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a5cc0, cid 5, qid 0 00:31:01.694 [2024-12-16 12:52:27.646560] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.694 [2024-12-16 12:52:27.646565] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.694 [2024-12-16 12:52:27.646568] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.694 [2024-12-16 12:52:27.646571] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a5cc0) on tqpair=0x243b0d0 00:31:01.694 [2024-12-16 12:52:27.646579] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.694 [2024-12-16 12:52:27.646583] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x243b0d0) 00:31:01.694 [2024-12-16 12:52:27.646588] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.694 [2024-12-16 12:52:27.646597] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a5cc0, cid 5, qid 0 00:31:01.694 [2024-12-16 12:52:27.646659] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.694 [2024-12-16 12:52:27.646665] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.694 [2024-12-16 12:52:27.646667] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.694 [2024-12-16 12:52:27.646671] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a5cc0) on tqpair=0x243b0d0 00:31:01.694 [2024-12-16 12:52:27.646684] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.694 [2024-12-16 12:52:27.646688] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x243b0d0) 00:31:01.694 [2024-12-16 12:52:27.646693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.694 [2024-12-16 12:52:27.646699] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.694 [2024-12-16 12:52:27.646702] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x243b0d0) 00:31:01.694 [2024-12-16 12:52:27.646707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.694 [2024-12-16 
12:52:27.646713] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.694 [2024-12-16 12:52:27.646716] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x243b0d0) 00:31:01.694 [2024-12-16 12:52:27.646721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.694 [2024-12-16 12:52:27.646728] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.694 [2024-12-16 12:52:27.646732] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x243b0d0) 00:31:01.694 [2024-12-16 12:52:27.646738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.694 [2024-12-16 12:52:27.646749] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a5cc0, cid 5, qid 0 00:31:01.694 [2024-12-16 12:52:27.646753] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a5b40, cid 4, qid 0 00:31:01.694 [2024-12-16 12:52:27.646757] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a5e40, cid 6, qid 0 00:31:01.694 [2024-12-16 12:52:27.646761] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a5fc0, cid 7, qid 0 00:31:01.694 [2024-12-16 12:52:27.646918] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:01.694 [2024-12-16 12:52:27.646923] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:01.694 [2024-12-16 12:52:27.646927] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:01.694 [2024-12-16 12:52:27.646930] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x243b0d0): datao=0, datal=8192, cccid=5 00:31:01.694 [2024-12-16 12:52:27.646933] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24a5cc0) on tqpair(0x243b0d0): expected_datao=0, payload_size=8192 00:31:01.694 [2024-12-16 12:52:27.646937] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.694 [2024-12-16 12:52:27.646951] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:01.694 [2024-12-16 12:52:27.646954] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:01.694 [2024-12-16 12:52:27.646959] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:01.694 [2024-12-16 12:52:27.646963] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:01.694 [2024-12-16 12:52:27.646966] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:01.694 [2024-12-16 12:52:27.646969] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x243b0d0): datao=0, datal=512, cccid=4 00:31:01.694 [2024-12-16 12:52:27.646973] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24a5b40) on tqpair(0x243b0d0): expected_datao=0, payload_size=512 00:31:01.694 [2024-12-16 12:52:27.646977] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.694 [2024-12-16 12:52:27.646982] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:01.695 [2024-12-16 12:52:27.646985] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:01.695 [2024-12-16 12:52:27.646989] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:01.695 [2024-12-16 12:52:27.646994] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =7 00:31:01.695 [2024-12-16 12:52:27.646997] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:01.695 [2024-12-16 12:52:27.647000] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x243b0d0): datao=0, datal=512, cccid=6 00:31:01.695 [2024-12-16 12:52:27.647004] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24a5e40) on tqpair(0x243b0d0): expected_datao=0, payload_size=512 00:31:01.695 [2024-12-16 12:52:27.647007] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.695 [2024-12-16 12:52:27.647012] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:01.695 [2024-12-16 12:52:27.647015] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:01.695 [2024-12-16 12:52:27.647020] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:01.695 [2024-12-16 12:52:27.647025] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:01.695 [2024-12-16 12:52:27.647028] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:01.695 [2024-12-16 12:52:27.647030] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x243b0d0): datao=0, datal=4096, cccid=7 00:31:01.695 [2024-12-16 12:52:27.647034] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24a5fc0) on tqpair(0x243b0d0): expected_datao=0, payload_size=4096 00:31:01.695 [2024-12-16 12:52:27.647038] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.695 [2024-12-16 12:52:27.647043] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:01.695 [2024-12-16 12:52:27.647048] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:01.695 [2024-12-16 12:52:27.647055] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.695 [2024-12-16 12:52:27.647059] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.695 [2024-12-16 12:52:27.647062] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.695 [2024-12-16 12:52:27.647065] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a5cc0) on tqpair=0x243b0d0 00:31:01.695 [2024-12-16 12:52:27.647074] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.695 [2024-12-16 12:52:27.647080] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.695 [2024-12-16 12:52:27.647082] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.695 [2024-12-16 12:52:27.647086] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a5b40) on tqpair=0x243b0d0 00:31:01.695 [2024-12-16 12:52:27.647095] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.695 [2024-12-16 12:52:27.647100] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.695 [2024-12-16 12:52:27.647103] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.695 [2024-12-16 12:52:27.647106] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a5e40) on tqpair=0x243b0d0 00:31:01.695 [2024-12-16 12:52:27.647112] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.695 [2024-12-16 12:52:27.647121] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.695 [2024-12-16 12:52:27.647124] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.695 [2024-12-16 12:52:27.647127] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x24a5fc0) on tqpair=0x243b0d0 00:31:01.695 ===================================================== 00:31:01.695 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:01.695 ===================================================== 00:31:01.695 Controller Capabilities/Features 00:31:01.695 ================================ 00:31:01.695 Vendor ID: 8086 00:31:01.695 Subsystem Vendor ID: 8086 00:31:01.695 Serial Number: SPDK00000000000001 00:31:01.695 Model Number: SPDK bdev Controller 00:31:01.695 Firmware Version: 24.09.1 00:31:01.695 Recommended Arb Burst: 6 00:31:01.695 IEEE OUI Identifier: e4 d2 5c 00:31:01.695 Multi-path I/O 00:31:01.695 May have multiple subsystem ports: Yes 00:31:01.695 May have multiple controllers: Yes 00:31:01.695 Associated with SR-IOV VF: No 00:31:01.695 Max Data Transfer Size: 131072 00:31:01.695 Max Number of Namespaces: 32 00:31:01.695 Max Number of I/O Queues: 127 00:31:01.695 NVMe Specification Version (VS): 1.3 00:31:01.695 NVMe Specification Version (Identify): 1.3 00:31:01.695 Maximum Queue Entries: 128 00:31:01.695 Contiguous Queues Required: Yes 00:31:01.695 Arbitration Mechanisms Supported 00:31:01.695 Weighted Round Robin: Not Supported 00:31:01.695 Vendor Specific: Not Supported 00:31:01.695 Reset Timeout: 15000 ms 00:31:01.695 Doorbell Stride: 4 bytes 00:31:01.695 NVM Subsystem Reset: Not Supported 00:31:01.695 Command Sets Supported 00:31:01.695 NVM Command Set: Supported 00:31:01.695 Boot Partition: Not Supported 00:31:01.695 Memory Page Size Minimum: 4096 bytes 00:31:01.695 Memory Page Size Maximum: 4096 bytes 00:31:01.695 Persistent Memory Region: Not Supported 00:31:01.695 Optional Asynchronous Events Supported 00:31:01.695 Namespace Attribute Notices: Supported 00:31:01.695 Firmware Activation Notices: Not Supported 00:31:01.695 ANA Change Notices: Not Supported 00:31:01.695 PLE Aggregate Log Change Notices: Not Supported 00:31:01.695 LBA Status Info Alert Notices: Not Supported 00:31:01.695 EGE Aggregate Log Change Notices: Not Supported 00:31:01.695 Normal NVM Subsystem Shutdown event: Not Supported 00:31:01.695 Zone Descriptor Change Notices: Not Supported 00:31:01.695 Discovery Log Change Notices: Not Supported 00:31:01.695 Controller Attributes 00:31:01.695 128-bit Host Identifier: Supported 00:31:01.695 Non-Operational Permissive Mode: Not Supported 00:31:01.695 NVM Sets: Not Supported 00:31:01.695 Read Recovery Levels: Not Supported 00:31:01.695 Endurance Groups: Not Supported 00:31:01.695 Predictable Latency Mode: Not Supported 00:31:01.695 Traffic Based Keep ALive: Not Supported 00:31:01.695 Namespace Granularity: Not Supported 00:31:01.695 SQ Associations: Not Supported 00:31:01.695 UUID List: Not Supported 00:31:01.695 Multi-Domain Subsystem: Not Supported 00:31:01.695 Fixed Capacity Management: Not Supported 00:31:01.695 Variable Capacity Management: Not Supported 00:31:01.695 Delete Endurance Group: Not Supported 00:31:01.695 Delete NVM Set: Not Supported 00:31:01.695 Extended LBA Formats Supported: Not Supported 00:31:01.695 Flexible Data Placement Supported: Not Supported 00:31:01.695 00:31:01.695 Controller Memory Buffer Support 00:31:01.695 ================================ 00:31:01.695 Supported: No 00:31:01.695 00:31:01.695 Persistent Memory Region Support 00:31:01.695 ================================ 00:31:01.695 Supported: No 00:31:01.695 00:31:01.695 Admin Command Set Attributes 00:31:01.695 ============================ 00:31:01.695 Security Send/Receive: Not Supported 00:31:01.695 Format NVM: 
Not Supported 00:31:01.695 Firmware Activate/Download: Not Supported 00:31:01.695 Namespace Management: Not Supported 00:31:01.695 Device Self-Test: Not Supported 00:31:01.695 Directives: Not Supported 00:31:01.695 NVMe-MI: Not Supported 00:31:01.695 Virtualization Management: Not Supported 00:31:01.695 Doorbell Buffer Config: Not Supported 00:31:01.695 Get LBA Status Capability: Not Supported 00:31:01.695 Command & Feature Lockdown Capability: Not Supported 00:31:01.695 Abort Command Limit: 4 00:31:01.695 Async Event Request Limit: 4 00:31:01.695 Number of Firmware Slots: N/A 00:31:01.695 Firmware Slot 1 Read-Only: N/A 00:31:01.695 Firmware Activation Without Reset: N/A 00:31:01.695 Multiple Update Detection Support: N/A 00:31:01.695 Firmware Update Granularity: No Information Provided 00:31:01.695 Per-Namespace SMART Log: No 00:31:01.695 Asymmetric Namespace Access Log Page: Not Supported 00:31:01.695 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:31:01.695 Command Effects Log Page: Supported 00:31:01.695 Get Log Page Extended Data: Supported 00:31:01.695 Telemetry Log Pages: Not Supported 00:31:01.695 Persistent Event Log Pages: Not Supported 00:31:01.695 Supported Log Pages Log Page: May Support 00:31:01.695 Commands Supported & Effects Log Page: Not Supported 00:31:01.695 Feature Identifiers & Effects Log Page:May Support 00:31:01.695 NVMe-MI Commands & Effects Log Page: May Support 00:31:01.695 Data Area 4 for Telemetry Log: Not Supported 00:31:01.695 Error Log Page Entries Supported: 128 00:31:01.695 Keep Alive: Supported 00:31:01.695 Keep Alive Granularity: 10000 ms 00:31:01.695 00:31:01.695 NVM Command Set Attributes 00:31:01.695 ========================== 00:31:01.695 Submission Queue Entry Size 00:31:01.695 Max: 64 00:31:01.695 Min: 64 00:31:01.695 Completion Queue Entry Size 00:31:01.695 Max: 16 00:31:01.695 Min: 16 00:31:01.695 Number of Namespaces: 32 00:31:01.695 Compare Command: Supported 00:31:01.695 Write Uncorrectable Command: Not Supported 00:31:01.695 Dataset Management Command: Supported 00:31:01.695 Write Zeroes Command: Supported 00:31:01.695 Set Features Save Field: Not Supported 00:31:01.695 Reservations: Supported 00:31:01.695 Timestamp: Not Supported 00:31:01.695 Copy: Supported 00:31:01.695 Volatile Write Cache: Present 00:31:01.695 Atomic Write Unit (Normal): 1 00:31:01.695 Atomic Write Unit (PFail): 1 00:31:01.695 Atomic Compare & Write Unit: 1 00:31:01.695 Fused Compare & Write: Supported 00:31:01.695 Scatter-Gather List 00:31:01.695 SGL Command Set: Supported 00:31:01.695 SGL Keyed: Supported 00:31:01.695 SGL Bit Bucket Descriptor: Not Supported 00:31:01.695 SGL Metadata Pointer: Not Supported 00:31:01.695 Oversized SGL: Not Supported 00:31:01.695 SGL Metadata Address: Not Supported 00:31:01.695 SGL Offset: Supported 00:31:01.695 Transport SGL Data Block: Not Supported 00:31:01.695 Replay Protected Memory Block: Not Supported 00:31:01.695 00:31:01.695 Firmware Slot Information 00:31:01.695 ========================= 00:31:01.695 Active slot: 1 00:31:01.695 Slot 1 Firmware Revision: 24.09.1 00:31:01.695 00:31:01.695 00:31:01.696 Commands Supported and Effects 00:31:01.696 ============================== 00:31:01.696 Admin Commands 00:31:01.696 -------------- 00:31:01.696 Get Log Page (02h): Supported 00:31:01.696 Identify (06h): Supported 00:31:01.696 Abort (08h): Supported 00:31:01.696 Set Features (09h): Supported 00:31:01.696 Get Features (0Ah): Supported 00:31:01.696 Asynchronous Event Request (0Ch): Supported 00:31:01.696 Keep Alive (18h): Supported 
00:31:01.696 I/O Commands
00:31:01.696 ------------
00:31:01.696 Flush (00h): Supported LBA-Change
00:31:01.696 Write (01h): Supported LBA-Change
00:31:01.696 Read (02h): Supported
00:31:01.696 Compare (05h): Supported
00:31:01.696 Write Zeroes (08h): Supported LBA-Change
00:31:01.696 Dataset Management (09h): Supported LBA-Change
00:31:01.696 Copy (19h): Supported LBA-Change
00:31:01.696
00:31:01.696 Error Log
00:31:01.696 =========
00:31:01.696
00:31:01.696 Arbitration
00:31:01.696 ===========
00:31:01.696 Arbitration Burst: 1
00:31:01.696
00:31:01.696 Power Management
00:31:01.696 ================
00:31:01.696 Number of Power States: 1
00:31:01.696 Current Power State: Power State #0
00:31:01.696 Power State #0:
00:31:01.696 Max Power: 0.00 W
00:31:01.696 Non-Operational State: Operational
00:31:01.696 Entry Latency: Not Reported
00:31:01.696 Exit Latency: Not Reported
00:31:01.696 Relative Read Throughput: 0
00:31:01.696 Relative Read Latency: 0
00:31:01.696 Relative Write Throughput: 0
00:31:01.696 Relative Write Latency: 0
00:31:01.696 Idle Power: Not Reported
00:31:01.696 Active Power: Not Reported
00:31:01.696 Non-Operational Permissive Mode: Not Supported
00:31:01.696
00:31:01.696 Health Information
00:31:01.696 ==================
00:31:01.696 Critical Warnings:
00:31:01.696 Available Spare Space: OK
00:31:01.696 Temperature: OK
00:31:01.696 Device Reliability: OK
00:31:01.696 Read Only: No
00:31:01.696 Volatile Memory Backup: OK
00:31:01.696 Current Temperature: 0 Kelvin (-273 Celsius)
00:31:01.696 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:31:01.696 Available Spare: 0%
00:31:01.696 Available Spare Threshold: 0%
00:31:01.696 Life Percentage U[2024-12-16 12:52:27.647206] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:31:01.696 [2024-12-16 12:52:27.647211] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x243b0d0)
00:31:01.696 [2024-12-16 12:52:27.647216] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:01.696 [2024-12-16 12:52:27.647227] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a5fc0, cid 7, qid 0
00:31:01.696 [2024-12-16 12:52:27.647298] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:31:01.696 [2024-12-16 12:52:27.647304] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:31:01.696 [2024-12-16 12:52:27.647307] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:31:01.696 [2024-12-16 12:52:27.647310] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a5fc0) on tqpair=0x243b0d0
00:31:01.696 [2024-12-16 12:52:27.647336] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD
00:31:01.696 [2024-12-16 12:52:27.647345] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a5540) on tqpair=0x243b0d0
00:31:01.696 [2024-12-16 12:52:27.647350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:01.696 [2024-12-16 12:52:27.647354] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a56c0) on tqpair=0x243b0d0
00:31:01.696 [2024-12-16 12:52:27.647358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:01.696 [2024-12-16 12:52:27.647363] nvme_tcp.c:1079:nvme_tcp_req_complete:
*DEBUG*: complete tcp_req(0x24a5840) on tqpair=0x243b0d0 00:31:01.696 [2024-12-16 12:52:27.647366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.696 [2024-12-16 12:52:27.647371] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a59c0) on tqpair=0x243b0d0 00:31:01.696 [2024-12-16 12:52:27.647374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.696 [2024-12-16 12:52:27.647381] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.696 [2024-12-16 12:52:27.647384] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.696 [2024-12-16 12:52:27.647389] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x243b0d0) 00:31:01.696 [2024-12-16 12:52:27.647395] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.696 [2024-12-16 12:52:27.647406] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a59c0, cid 3, qid 0 00:31:01.696 [2024-12-16 12:52:27.647473] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.696 [2024-12-16 12:52:27.647478] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.696 [2024-12-16 12:52:27.647481] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.696 [2024-12-16 12:52:27.647485] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a59c0) on tqpair=0x243b0d0 00:31:01.696 [2024-12-16 12:52:27.647490] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.696 [2024-12-16 12:52:27.647493] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.696 [2024-12-16 12:52:27.647496] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x243b0d0) 00:31:01.696 [2024-12-16 12:52:27.647502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.696 [2024-12-16 12:52:27.647514] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a59c0, cid 3, qid 0 00:31:01.696 [2024-12-16 12:52:27.647595] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.696 [2024-12-16 12:52:27.647601] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.696 [2024-12-16 12:52:27.647604] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.696 [2024-12-16 12:52:27.647607] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a59c0) on tqpair=0x243b0d0 00:31:01.696 [2024-12-16 12:52:27.647611] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:31:01.696 [2024-12-16 12:52:27.647614] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:31:01.696 [2024-12-16 12:52:27.647622] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.696 [2024-12-16 12:52:27.647626] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.696 [2024-12-16 12:52:27.647629] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x243b0d0) 00:31:01.696 [2024-12-16 12:52:27.647634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.696 [2024-12-16 12:52:27.647644] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a59c0, cid 3, qid 0 00:31:01.696 [2024-12-16 12:52:27.647711] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.696 [2024-12-16 12:52:27.647717] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.696 [2024-12-16 12:52:27.647719] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.696 [2024-12-16 12:52:27.647723] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a59c0) on tqpair=0x243b0d0 00:31:01.696 [2024-12-16 12:52:27.647730] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.696 [2024-12-16 12:52:27.647734] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.696 [2024-12-16 12:52:27.647737] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x243b0d0) 00:31:01.696 [2024-12-16 12:52:27.647742] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.696 [2024-12-16 12:52:27.647751] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a59c0, cid 3, qid 0 00:31:01.696 [2024-12-16 12:52:27.647815] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.696 [2024-12-16 12:52:27.647821] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.696 [2024-12-16 12:52:27.647824] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.696 [2024-12-16 12:52:27.647827] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a59c0) on tqpair=0x243b0d0 00:31:01.696 [2024-12-16 12:52:27.647837] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.696 [2024-12-16 12:52:27.647840] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.696 [2024-12-16 12:52:27.647843] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x243b0d0) 00:31:01.696 [2024-12-16 12:52:27.647849] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.696 [2024-12-16 12:52:27.647858] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a59c0, cid 3, qid 0 00:31:01.696 [2024-12-16 12:52:27.647923] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.696 [2024-12-16 12:52:27.647929] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.697 [2024-12-16 12:52:27.647931] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.697 [2024-12-16 12:52:27.647935] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a59c0) on tqpair=0x243b0d0 00:31:01.697 [2024-12-16 12:52:27.647943] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.697 [2024-12-16 12:52:27.647947] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.697 [2024-12-16 12:52:27.647950] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x243b0d0) 00:31:01.697 [2024-12-16 12:52:27.647955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.697 [2024-12-16 12:52:27.647965] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a59c0, cid 3, qid 0 00:31:01.697 [2024-12-16 
12:52:27.648028] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.697 [2024-12-16 12:52:27.648033] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.697 [2024-12-16 12:52:27.648036] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.697 [2024-12-16 12:52:27.648039] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a59c0) on tqpair=0x243b0d0 00:31:01.697 [2024-12-16 12:52:27.648048] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.697 [2024-12-16 12:52:27.648051] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.697 [2024-12-16 12:52:27.648054] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x243b0d0) 00:31:01.697 [2024-12-16 12:52:27.648059] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.697 [2024-12-16 12:52:27.648068] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a59c0, cid 3, qid 0 00:31:01.697 [2024-12-16 12:52:27.648134] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.697 [2024-12-16 12:52:27.648140] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.697 [2024-12-16 12:52:27.648143] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.697 [2024-12-16 12:52:27.648146] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a59c0) on tqpair=0x243b0d0 00:31:01.697 [2024-12-16 12:52:27.648154] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.697 [2024-12-16 12:52:27.648158] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.697 [2024-12-16 12:52:27.648161] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x243b0d0) 00:31:01.697 [2024-12-16 12:52:27.648166] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.697 [2024-12-16 12:52:27.648176] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a59c0, cid 3, qid 0 00:31:01.697 [2024-12-16 12:52:27.648242] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.697 [2024-12-16 12:52:27.648248] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.697 [2024-12-16 12:52:27.648251] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.697 [2024-12-16 12:52:27.648254] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a59c0) on tqpair=0x243b0d0 00:31:01.697 [2024-12-16 12:52:27.648262] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.697 [2024-12-16 12:52:27.648266] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.697 [2024-12-16 12:52:27.648270] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x243b0d0) 00:31:01.697 [2024-12-16 12:52:27.648276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.697 [2024-12-16 12:52:27.648285] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a59c0, cid 3, qid 0 00:31:01.697 [2024-12-16 12:52:27.648350] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.697 [2024-12-16 12:52:27.648356] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.697 
[2024-12-16 12:52:27.648359] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.697 [2024-12-16 12:52:27.648362] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a59c0) on tqpair=0x243b0d0 00:31:01.697 [2024-12-16 12:52:27.648370] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.697 [2024-12-16 12:52:27.648374] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.697 [2024-12-16 12:52:27.648377] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x243b0d0) 00:31:01.697 [2024-12-16 12:52:27.648383] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.697 [2024-12-16 12:52:27.648392] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a59c0, cid 3, qid 0 00:31:01.697 [2024-12-16 12:52:27.648457] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.697 [2024-12-16 12:52:27.648462] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.697 [2024-12-16 12:52:27.648465] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.697 [2024-12-16 12:52:27.648468] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a59c0) on tqpair=0x243b0d0 00:31:01.697 [2024-12-16 12:52:27.648476] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.697 [2024-12-16 12:52:27.648480] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.697 [2024-12-16 12:52:27.648483] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x243b0d0) 00:31:01.697 [2024-12-16 12:52:27.648488] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.697 [2024-12-16 12:52:27.648497] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a59c0, cid 3, qid 0 00:31:01.697 [2024-12-16 12:52:27.648558] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.697 [2024-12-16 12:52:27.648564] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.697 [2024-12-16 12:52:27.648567] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.697 [2024-12-16 12:52:27.648570] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a59c0) on tqpair=0x243b0d0 00:31:01.697 [2024-12-16 12:52:27.648578] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.697 [2024-12-16 12:52:27.648581] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.697 [2024-12-16 12:52:27.648584] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x243b0d0) 00:31:01.697 [2024-12-16 12:52:27.648590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.697 [2024-12-16 12:52:27.648599] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a59c0, cid 3, qid 0 00:31:01.697 [2024-12-16 12:52:27.648667] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.697 [2024-12-16 12:52:27.648672] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.697 [2024-12-16 12:52:27.648675] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.697 [2024-12-16 12:52:27.648678] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x24a59c0) on tqpair=0x243b0d0 00:31:01.697 [2024-12-16 12:52:27.648686] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.697 [2024-12-16 12:52:27.648690] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.697 [2024-12-16 12:52:27.648693] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x243b0d0) 00:31:01.697 [2024-12-16 12:52:27.648700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.697 [2024-12-16 12:52:27.648709] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a59c0, cid 3, qid 0 00:31:01.697 [2024-12-16 12:52:27.648771] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.697 [2024-12-16 12:52:27.648776] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.697 [2024-12-16 12:52:27.648779] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.697 [2024-12-16 12:52:27.648783] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a59c0) on tqpair=0x243b0d0 00:31:01.697 [2024-12-16 12:52:27.648790] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.697 [2024-12-16 12:52:27.648794] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.697 [2024-12-16 12:52:27.648797] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x243b0d0) 00:31:01.697 [2024-12-16 12:52:27.648802] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.697 [2024-12-16 12:52:27.648811] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a59c0, cid 3, qid 0 00:31:01.697 [2024-12-16 12:52:27.652118] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.697 [2024-12-16 12:52:27.652126] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.697 [2024-12-16 12:52:27.652129] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.697 [2024-12-16 12:52:27.652132] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a59c0) on tqpair=0x243b0d0 00:31:01.697 [2024-12-16 12:52:27.652141] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.697 [2024-12-16 12:52:27.652145] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.697 [2024-12-16 12:52:27.652148] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x243b0d0) 00:31:01.697 [2024-12-16 12:52:27.652154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.697 [2024-12-16 12:52:27.652164] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a59c0, cid 3, qid 0 00:31:01.697 [2024-12-16 12:52:27.652296] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.697 [2024-12-16 12:52:27.652301] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.697 [2024-12-16 12:52:27.652304] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.697 [2024-12-16 12:52:27.652307] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a59c0) on tqpair=0x243b0d0 00:31:01.697 [2024-12-16 12:52:27.652314] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 
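The destruct trace above ends with nvme_ctrlr_shutdown_set_cc_done reporting RTD3E = 0 us and a shutdown timeout of 10000 ms, after which nvme_ctrlr_shutdown_poll_async sees the controller finish in 4 ms. A minimal bash sketch of the timeout rule those two lines suggest (the microsecond-to-millisecond round-up and the 10000 ms floor are assumptions read off this log, not quoted from the SPDK sources):

# Sketch only: derive the shutdown timeout the way the trace suggests.
# Assumption: RTD3E (microseconds, from Identify Controller) is rounded
# up to milliseconds, with 10000 ms used as the floor when RTD3E is 0.
rtd3e_us=0                                   # value reported in the trace
timeout_ms=$(( (rtd3e_us + 999) / 1000 ))    # round up us -> ms
(( timeout_ms < 10000 )) && timeout_ms=10000 # apply the assumed floor
echo "shutdown timeout = ${timeout_ms} ms"   # prints 10000, matching the log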
00:31:01.697 Life Percentage Used: 0% 00:31:01.697 Data Units Read: 0 00:31:01.697 Data Units Written: 0 00:31:01.697 Host Read Commands: 0 00:31:01.697 Host Write Commands: 0 00:31:01.697 Controller Busy Time: 0 minutes 00:31:01.697 Power Cycles: 0 00:31:01.697 Power On Hours: 0 hours 00:31:01.697 Unsafe Shutdowns: 0 00:31:01.697 Unrecoverable Media Errors: 0 00:31:01.697 Lifetime Error Log Entries: 0 00:31:01.697 Warning Temperature Time: 0 minutes 00:31:01.697 Critical Temperature Time: 0 minutes 00:31:01.697 00:31:01.697 Number of Queues 00:31:01.697 ================ 00:31:01.697 Number of I/O Submission Queues: 127 00:31:01.697 Number of I/O Completion Queues: 127 00:31:01.697 00:31:01.697 Active Namespaces 00:31:01.697 ================= 00:31:01.697 Namespace ID:1 00:31:01.697 Error Recovery Timeout: Unlimited 00:31:01.697 Command Set Identifier: NVM (00h) 00:31:01.697 Deallocate: Supported 00:31:01.697 Deallocated/Unwritten Error: Not Supported 00:31:01.697 Deallocated Read Value: Unknown 00:31:01.697 Deallocate in Write Zeroes: Not Supported 00:31:01.697 Deallocated Guard Field: 0xFFFF 00:31:01.697 Flush: Supported 00:31:01.697 Reservation: Supported 00:31:01.697 Namespace Sharing Capabilities: Multiple Controllers 00:31:01.697 Size (in LBAs): 131072 (0GiB) 00:31:01.697 Capacity (in LBAs): 131072 (0GiB) 00:31:01.698 Utilization (in LBAs): 131072 (0GiB) 00:31:01.698 NGUID: ABCDEF0123456789ABCDEF0123456789 00:31:01.698 EUI64: ABCDEF0123456789 00:31:01.698 UUID: 63b2c0ad-e184-4be7-84dd-57d6e899e1f3 00:31:01.698 Thin Provisioning: Not Supported 00:31:01.698 Per-NS Atomic Units: Yes 00:31:01.698 Atomic Boundary Size (Normal): 0 00:31:01.698 Atomic Boundary Size (PFail): 0 00:31:01.698 Atomic Boundary Offset: 0 00:31:01.698 Maximum Single Source Range Length: 65535 00:31:01.698 Maximum Copy Length: 65535 00:31:01.698 Maximum Source Range Count: 1 00:31:01.698 NGUID/EUI64 Never Reused: No 00:31:01.698 Namespace Write Protected: No 00:31:01.698 Number of LBA Formats: 1 00:31:01.698 Current LBA Format: LBA Format #00 00:31:01.698 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:01.698 00:31:01.698 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:31:01.698 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:01.698 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.698 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:01.698 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.698 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:31:01.698 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:31:01.698 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@512 -- # nvmfcleanup 00:31:01.698 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:31:01.698 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:01.698 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:31:01.698 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:01.698 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:01.698 rmmod nvme_tcp rmmod nvme_fabrics rmmod nvme_keyring 12:52:27 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:01.698 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:31:01.698 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:31:01.698 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@513 -- # '[' -n 493054 ']' 00:31:01.698 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # killprocess 493054 00:31:01.698 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 493054 ']' 00:31:01.698 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 493054 00:31:01.698 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:31:01.957 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:01.957 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 493054 00:31:01.957 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:01.957 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:01.957 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 493054' 00:31:01.957 killing process with pid 493054 00:31:01.957 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 493054 00:31:01.957 12:52:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 493054 00:31:01.957 12:52:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:31:01.957 12:52:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:31:01.957 12:52:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:31:01.957 12:52:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:31:01.957 12:52:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-save 00:31:01.957 12:52:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:31:01.957 12:52:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-restore 00:31:01.957 12:52:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:01.957 12:52:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:01.957 12:52:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:01.957 12:52:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:01.957 12:52:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:04.494 12:52:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:04.494 00:31:04.494 real 0m9.405s 00:31:04.494 user 0m5.763s 00:31:04.494 sys 0m4.892s 00:31:04.494 12:52:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:04.494 12:52:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:04.494 ************************************ 00:31:04.494 END TEST nvmf_identify 00:31:04.494 ************************************ 00:31:04.494 12:52:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test 
nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:31:04.494 12:52:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:04.494 12:52:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:04.494 12:52:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.494 ************************************ 00:31:04.494 START TEST nvmf_perf 00:31:04.494 ************************************ 00:31:04.494 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:31:04.494 * Looking for test storage... 00:31:04.494 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:04.494 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:04.494 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lcov --version 00:31:04.494 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:04.494 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:04.494 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:04.494 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:04.494 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:04.494 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:31:04.494 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:31:04.494 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:31:04.494 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:31:04.494 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:31:04.494 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:31:04.494 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:31:04.494 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:04.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:04.495 --rc genhtml_branch_coverage=1 00:31:04.495 --rc genhtml_function_coverage=1 00:31:04.495 --rc genhtml_legend=1 00:31:04.495 --rc geninfo_all_blocks=1 00:31:04.495 --rc geninfo_unexecuted_blocks=1 00:31:04.495 00:31:04.495 ' 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:04.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:04.495 --rc genhtml_branch_coverage=1 00:31:04.495 --rc genhtml_function_coverage=1 00:31:04.495 --rc genhtml_legend=1 00:31:04.495 --rc geninfo_all_blocks=1 00:31:04.495 --rc geninfo_unexecuted_blocks=1 00:31:04.495 00:31:04.495 ' 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:04.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:04.495 --rc genhtml_branch_coverage=1 00:31:04.495 --rc genhtml_function_coverage=1 00:31:04.495 --rc genhtml_legend=1 00:31:04.495 --rc geninfo_all_blocks=1 00:31:04.495 --rc geninfo_unexecuted_blocks=1 00:31:04.495 00:31:04.495 ' 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:04.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:04.495 --rc genhtml_branch_coverage=1 00:31:04.495 --rc genhtml_function_coverage=1 00:31:04.495 --rc genhtml_legend=1 00:31:04.495 --rc geninfo_all_blocks=1 00:31:04.495 --rc geninfo_unexecuted_blocks=1 00:31:04.495 00:31:04.495 ' 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:04.495 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # prepare_net_devs 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:04.495 12:52:30 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:31:04.495 12:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:11.066 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:11.066 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:31:11.066 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:11.066 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:11.066 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:11.066 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:11.066 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:11.066 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:31:11.066 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:11.066 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:31:11.066 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:31:11.066 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:31:11.066 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:31:11.066 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:31:11.066 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:31:11.066 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:11.066 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:11.066 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:11.066 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:11.066 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:11.066 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:11.066 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:11.066 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:11.066 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:11.066 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:11.066 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:11.066 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:31:11.066 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 
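At this point nvmf/common.sh has finished building its allow-lists of supported NIC PCI IDs (Intel E810 0x1592/0x159b and X722 0x37d2, plus a set of Mellanox device IDs) and has seeded pci_devs from the e810 list; the loop that follows walks those devices and reports the two matching 0x8086 - 0x159b ports. A rough standalone approximation of that scan using plain lspci (pci_bus_cache and the exact Found-message format are internal to the script, so this sketch only mirrors the idea):

# Rough approximation of the NIC scan below, using lspci instead of the
# script's internal pci_bus_cache arrays. Keeps only Intel E810 functions.
while read -r addr _; do
  ids=$(lspci -n -s "$addr" | awk '{print $3}')   # prints "vendor:device"
  case "$ids" in
    8086:1592|8086:159b) echo "Found $addr ($ids)" ;;
  esac
done < <(lspci -D | grep -i ethernet)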
00:31:11.066 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:31:11.066 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:31:11.066 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:31:11.066 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:31:11.066 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:31:11.066 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:11.066 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:11.066 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:31:11.066 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:31:11.066 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:11.066 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:11.066 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:31:11.066 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:31:11.066 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:11.066 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:11.066 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:31:11.066 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:31:11.066 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:11.067 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:11.067 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:31:11.067 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:31:11.067 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:31:11.067 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:31:11.067 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:31:11.067 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:11.067 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:31:11.067 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:11.067 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:31:11.067 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:31:11.067 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:11.067 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:11.067 Found net devices under 0000:af:00.0: cvl_0_0 00:31:11.067 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:31:11.067 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:31:11.067 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:11.067 12:52:35 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:31:11.067 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:11.067 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:31:11.067 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:31:11.067 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:11.067 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:11.067 Found net devices under 0000:af:00.1: cvl_0_1 00:31:11.067 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:31:11.067 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:31:11.067 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # is_hw=yes 00:31:11.067 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:31:11.067 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:31:11.067 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:31:11.067 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:11.067 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:11.067 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:11.067 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:11.067 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:11.067 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:11.067 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:11.067 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:11.067 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:11.067 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:11.067 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:11.067 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:11.067 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:11.067 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:11.067 12:52:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:11.067 12:52:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:11.067 12:52:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:11.067 12:52:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:11.067 12:52:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:11.067 12:52:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:11.067 12:52:36 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:11.067 12:52:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:11.067 12:52:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:11.067 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:11.067 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.391 ms 00:31:11.067 00:31:11.067 --- 10.0.0.2 ping statistics --- 00:31:11.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:11.067 rtt min/avg/max/mdev = 0.391/0.391/0.391/0.000 ms 00:31:11.067 12:52:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:11.067 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:11.067 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:31:11.067 00:31:11.067 --- 10.0.0.1 ping statistics --- 00:31:11.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:11.067 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:31:11.067 12:52:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:11.067 12:52:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # return 0 00:31:11.067 12:52:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:31:11.067 12:52:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:11.067 12:52:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:31:11.067 12:52:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:31:11.067 12:52:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:11.067 12:52:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:31:11.067 12:52:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:31:11.067 12:52:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:31:11.067 12:52:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:31:11.067 12:52:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:11.067 12:52:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:11.067 12:52:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # nvmfpid=496752 00:31:11.067 12:52:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # waitforlisten 496752 00:31:11.067 12:52:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:11.067 12:52:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 496752 ']' 00:31:11.067 12:52:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:11.067 12:52:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:11.067 12:52:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:11.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
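By the time nvmf_tgt is launched here, nvmf_tcp_init has already isolated one E810 port in a private network namespace so that initiator (10.0.0.1 on cvl_0_1) and target (10.0.0.2 on cvl_0_0) traffic crosses the physical link even on a single host, opened TCP port 4420 through the firewall, and ping-verified both directions. Condensed from the trace above into a standalone sketch (interface, namespace, and address names exactly as in this run):

# Move the target port into its own namespace, address both ends,
# open the NVMe/TCP port, and verify reachability in both directions.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator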
00:31:11.067 12:52:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:11.067 12:52:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:11.067 [2024-12-16 12:52:36.234313] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:31:11.067 [2024-12-16 12:52:36.234363] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:11.067 [2024-12-16 12:52:36.308530] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:11.067 [2024-12-16 12:52:36.349806] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:11.067 [2024-12-16 12:52:36.349842] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:11.067 [2024-12-16 12:52:36.349850] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:11.067 [2024-12-16 12:52:36.349856] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:11.067 [2024-12-16 12:52:36.349861] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:11.067 [2024-12-16 12:52:36.349920] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:31:11.067 [2024-12-16 12:52:36.350007] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:31:11.067 [2024-12-16 12:52:36.350130] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:31:11.067 [2024-12-16 12:52:36.350119] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:11.067 12:52:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:11.067 12:52:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:31:11.067 12:52:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:31:11.067 12:52:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:11.067 12:52:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:11.067 12:52:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:11.067 12:52:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:11.067 12:52:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:31:13.603 12:52:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:31:13.603 12:52:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:31:13.862 12:52:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:31:13.862 12:52:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:14.121 12:52:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:31:14.121 12:52:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:31:14.121 12:52:39 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:31:14.121 12:52:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:31:14.121 12:52:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:31:14.121 [2024-12-16 12:52:40.114985] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:14.121 12:52:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:14.380 12:52:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:31:14.380 12:52:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:14.637 12:52:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:31:14.637 12:52:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:14.895 12:52:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:14.895 [2024-12-16 12:52:40.935339] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:15.153 12:52:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:15.153 12:52:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:31:15.153 12:52:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:31:15.153 12:52:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:31:15.153 12:52:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:31:16.531 Initializing NVMe Controllers 00:31:16.531 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:31:16.531 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:31:16.531 Initialization complete. Launching workers. 
00:31:16.531 ======================================================== 00:31:16.531 Latency(us) 00:31:16.531 Device Information : IOPS MiB/s Average min max 00:31:16.531 PCIE (0000:5e:00.0) NSID 1 from core 0: 98451.04 384.57 324.42 26.35 4556.24 00:31:16.531 ======================================================== 00:31:16.531 Total : 98451.04 384.57 324.42 26.35 4556.24 00:31:16.531 00:31:16.531 12:52:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:17.909 Initializing NVMe Controllers 00:31:17.909 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:17.909 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:17.909 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:17.909 Initialization complete. Launching workers. 00:31:17.909 ======================================================== 00:31:17.909 Latency(us) 00:31:17.909 Device Information : IOPS MiB/s Average min max 00:31:17.909 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 53.00 0.21 19131.61 112.77 45674.88 00:31:17.909 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 71.00 0.28 14151.53 7965.54 47884.39 00:31:17.909 ======================================================== 00:31:17.909 Total : 124.00 0.48 16280.11 112.77 47884.39 00:31:17.909 00:31:17.909 12:52:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:18.847 Initializing NVMe Controllers 00:31:18.847 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:18.847 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:18.847 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:18.847 Initialization complete. Launching workers. 00:31:18.847 ======================================================== 00:31:18.847 Latency(us) 00:31:18.847 Device Information : IOPS MiB/s Average min max 00:31:18.847 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11167.98 43.62 2877.29 421.96 6174.03 00:31:18.847 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3819.99 14.92 8414.53 5276.37 16584.65 00:31:18.847 ======================================================== 00:31:18.847 Total : 14987.97 58.55 4288.57 421.96 16584.65 00:31:18.847 00:31:18.847 12:52:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:31:18.847 12:52:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:31:18.847 12:52:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:21.380 Initializing NVMe Controllers 00:31:21.380 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:21.380 Controller IO queue size 128, less than required. 00:31:21.380 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:31:21.380 Controller IO queue size 128, less than required. 00:31:21.380 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:21.380 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:21.380 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:21.380 Initialization complete. Launching workers. 00:31:21.380 ======================================================== 00:31:21.380 Latency(us) 00:31:21.380 Device Information : IOPS MiB/s Average min max 00:31:21.380 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1817.33 454.33 71298.07 47844.15 130996.29 00:31:21.380 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 576.45 144.11 230034.21 77204.01 351245.55 00:31:21.380 ======================================================== 00:31:21.380 Total : 2393.78 598.45 109523.37 47844.15 351245.55 00:31:21.380 00:31:21.380 12:52:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:31:21.639 No valid NVMe controllers or AIO or URING devices found 00:31:21.639 Initializing NVMe Controllers 00:31:21.639 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:21.639 Controller IO queue size 128, less than required. 00:31:21.639 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:21.639 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:31:21.639 Controller IO queue size 128, less than required. 00:31:21.639 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:21.639 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:31:21.639 WARNING: Some requested NVMe devices were skipped 00:31:21.639 12:52:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:31:24.928 Initializing NVMe Controllers 00:31:24.928 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:24.928 Controller IO queue size 128, less than required. 00:31:24.928 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:24.928 Controller IO queue size 128, less than required. 00:31:24.928 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:24.928 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:24.928 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:24.928 Initialization complete. Launching workers. 
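[annotation] The -o 36964 run above drops both namespaces rather than failing outright: spdk_nvme_perf requires the IO size to be a multiple of each namespace's sector size, and 36964 is not divisible by 512. The check it performs, by hand (a sketch):

    echo $(( 36964 % 512 ))   # -> 100; non-zero, so NSID 1 and NSID 2 are removed from the test
    echo $(( 36964 / 512 ))   # -> 72 whole sectors, with 100 bytes left over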
00:31:24.928 00:31:24.928 ==================== 00:31:24.928 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:31:24.928 TCP transport: 00:31:24.928 polls: 11876 00:31:24.928 idle_polls: 8356 00:31:24.928 sock_completions: 3520 00:31:24.928 nvme_completions: 6373 00:31:24.928 submitted_requests: 9530 00:31:24.928 queued_requests: 1 00:31:24.928 00:31:24.928 ==================== 00:31:24.928 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:31:24.928 TCP transport: 00:31:24.928 polls: 11715 00:31:24.928 idle_polls: 7597 00:31:24.928 sock_completions: 4118 00:31:24.928 nvme_completions: 6921 00:31:24.928 submitted_requests: 10294 00:31:24.928 queued_requests: 1 00:31:24.929 ======================================================== 00:31:24.929 Latency(us) 00:31:24.929 Device Information : IOPS MiB/s Average min max 00:31:24.929 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1589.55 397.39 83146.09 49147.25 143522.50 00:31:24.929 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1726.26 431.56 74309.68 39979.68 104506.06 00:31:24.929 ======================================================== 00:31:24.929 Total : 3315.81 828.95 78545.73 39979.68 143522.50 00:31:24.929 00:31:24.929 12:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:31:24.929 12:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:24.929 12:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:31:24.929 12:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:5e:00.0 ']' 00:31:24.929 12:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:31:28.217 12:52:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=01ee5c8b-70b0-4517-8b48-366600c8b025 00:31:28.217 12:52:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 01ee5c8b-70b0-4517-8b48-366600c8b025 00:31:28.217 12:52:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=01ee5c8b-70b0-4517-8b48-366600c8b025 00:31:28.217 12:52:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:31:28.217 12:52:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:31:28.217 12:52:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:31:28.217 12:52:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:28.217 12:52:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:31:28.217 { 00:31:28.217 "uuid": "01ee5c8b-70b0-4517-8b48-366600c8b025", 00:31:28.217 "name": "lvs_0", 00:31:28.217 "base_bdev": "Nvme0n1", 00:31:28.217 "total_data_clusters": 238234, 00:31:28.217 "free_clusters": 238234, 00:31:28.217 "block_size": 512, 00:31:28.217 "cluster_size": 4194304 00:31:28.217 } 00:31:28.217 ]' 00:31:28.217 12:52:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="01ee5c8b-70b0-4517-8b48-366600c8b025") .free_clusters' 00:31:28.217 12:52:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=238234 00:31:28.217 12:52:53 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="01ee5c8b-70b0-4517-8b48-366600c8b025") .cluster_size' 00:31:28.217 12:52:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:31:28.217 12:52:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=952936 00:31:28.217 12:52:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 952936 00:31:28.217 952936 00:31:28.217 12:52:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:31:28.217 12:52:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:31:28.217 12:52:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 01ee5c8b-70b0-4517-8b48-366600c8b025 lbd_0 20480 00:31:28.476 12:52:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=861d9f8b-3639-40c7-a9c2-78e0781aa831 00:31:28.476 12:52:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 861d9f8b-3639-40c7-a9c2-78e0781aa831 lvs_n_0 00:31:29.413 12:52:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=13a5da3f-310b-46c9-a7ed-011e6ba723f0 00:31:29.413 12:52:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 13a5da3f-310b-46c9-a7ed-011e6ba723f0 00:31:29.413 12:52:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=13a5da3f-310b-46c9-a7ed-011e6ba723f0 00:31:29.413 12:52:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:31:29.413 12:52:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:31:29.413 12:52:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:31:29.413 12:52:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:29.413 12:52:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:31:29.413 { 00:31:29.413 "uuid": "01ee5c8b-70b0-4517-8b48-366600c8b025", 00:31:29.413 "name": "lvs_0", 00:31:29.413 "base_bdev": "Nvme0n1", 00:31:29.413 "total_data_clusters": 238234, 00:31:29.413 "free_clusters": 233114, 00:31:29.413 "block_size": 512, 00:31:29.413 "cluster_size": 4194304 00:31:29.413 }, 00:31:29.413 { 00:31:29.413 "uuid": "13a5da3f-310b-46c9-a7ed-011e6ba723f0", 00:31:29.413 "name": "lvs_n_0", 00:31:29.413 "base_bdev": "861d9f8b-3639-40c7-a9c2-78e0781aa831", 00:31:29.413 "total_data_clusters": 5114, 00:31:29.413 "free_clusters": 5114, 00:31:29.413 "block_size": 512, 00:31:29.413 "cluster_size": 4194304 00:31:29.413 } 00:31:29.413 ]' 00:31:29.413 12:52:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="13a5da3f-310b-46c9-a7ed-011e6ba723f0") .free_clusters' 00:31:29.413 12:52:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:31:29.413 12:52:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="13a5da3f-310b-46c9-a7ed-011e6ba723f0") .cluster_size' 00:31:29.413 12:52:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:31:29.413 12:52:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:31:29.413 12:52:55 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@1374 -- # echo 20456 00:31:29.413 20456 00:31:29.413 12:52:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:31:29.413 12:52:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 13a5da3f-310b-46c9-a7ed-011e6ba723f0 lbd_nest_0 20456 00:31:29.672 12:52:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=e500cc08-46bc-4895-8ea4-82db47f07312 00:31:29.672 12:52:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:29.931 12:52:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:31:29.931 12:52:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 e500cc08-46bc-4895-8ea4-82db47f07312 00:31:30.190 12:52:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:30.190 12:52:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:31:30.190 12:52:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:31:30.190 12:52:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:30.449 12:52:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:30.449 12:52:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:42.691 Initializing NVMe Controllers 00:31:42.691 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:42.691 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:42.691 Initialization complete. Launching workers. 00:31:42.691 ======================================================== 00:31:42.691 Latency(us) 00:31:42.691 Device Information : IOPS MiB/s Average min max 00:31:42.691 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 50.50 0.02 19864.57 127.22 45708.47 00:31:42.691 ======================================================== 00:31:42.691 Total : 50.50 0.02 19864.57 127.22 45708.47 00:31:42.691 00:31:42.691 12:53:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:42.691 12:53:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:52.672 Initializing NVMe Controllers 00:31:52.672 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:52.672 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:52.672 Initialization complete. Launching workers. 
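[annotation] The 952936 and 20456 figures echoed above come straight out of get_lvs_free_mb: free_clusters times cluster_size, expressed in MiB. Reproduced by hand from the bdev_lvol_get_lvstores output (a sketch):

    echo $(( 238234 * 4194304 / 1048576 ))   # lvs_0:   952936 MiB free, capped to 20480 for lbd_0
    echo $(( 5114   * 4194304 / 1048576 ))   # lvs_n_0:  20456 MiB free; below the 20480 cap,
                                             # so lbd_nest_0 is created at the full 20456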
00:31:52.672 ======================================================== 00:31:52.672 Latency(us) 00:31:52.672 Device Information : IOPS MiB/s Average min max 00:31:52.672 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 71.39 8.92 14006.94 4033.80 51872.16 00:31:52.672 ======================================================== 00:31:52.672 Total : 71.39 8.92 14006.94 4033.80 51872.16 00:31:52.672 00:31:52.672 12:53:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:52.672 12:53:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:52.672 12:53:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:02.650 Initializing NVMe Controllers 00:32:02.650 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:02.650 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:02.650 Initialization complete. Launching workers. 00:32:02.650 ======================================================== 00:32:02.650 Latency(us) 00:32:02.650 Device Information : IOPS MiB/s Average min max 00:32:02.650 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8550.48 4.18 3742.01 237.82 10230.23 00:32:02.650 ======================================================== 00:32:02.650 Total : 8550.48 4.18 3742.01 237.82 10230.23 00:32:02.650 00:32:02.650 12:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:02.650 12:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:12.625 Initializing NVMe Controllers 00:32:12.625 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:12.625 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:12.625 Initialization complete. Launching workers. 00:32:12.625 ======================================================== 00:32:12.625 Latency(us) 00:32:12.625 Device Information : IOPS MiB/s Average min max 00:32:12.625 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4467.08 558.38 7163.17 498.48 18220.18 00:32:12.625 ======================================================== 00:32:12.625 Total : 4467.08 558.38 7163.17 498.48 18220.18 00:32:12.625 00:32:12.625 12:53:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:32:12.625 12:53:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:12.625 12:53:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:22.613 Initializing NVMe Controllers 00:32:22.613 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:22.613 Controller IO queue size 128, less than required. 00:32:22.613 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
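[annotation] The perf.sh@95-99 loops traced above sweep every combination of qd_depth (1, 32, 128) and io_size (512 B, 128 KiB) against the exported nested lvol; the blocks that follow are simply the remaining iterations. The shape of the sweep, condensed (paths shortened; a sketch of the traced loop, not the verbatim script):

    PERF=build/bin/spdk_nvme_perf
    for qd in 1 32 128; do
      for o in 512 131072; do
        $PERF -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
              -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
      done
    done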
00:32:22.613 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:22.613 Initialization complete. Launching workers. 00:32:22.613 ======================================================== 00:32:22.613 Latency(us) 00:32:22.613 Device Information : IOPS MiB/s Average min max 00:32:22.613 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15906.55 7.77 8046.70 1288.34 17604.28 00:32:22.613 ======================================================== 00:32:22.613 Total : 15906.55 7.77 8046.70 1288.34 17604.28 00:32:22.613 00:32:22.613 12:53:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:22.613 12:53:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:32.595 Initializing NVMe Controllers 00:32:32.595 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:32.595 Controller IO queue size 128, less than required. 00:32:32.595 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:32.595 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:32.595 Initialization complete. Launching workers. 00:32:32.595 ======================================================== 00:32:32.595 Latency(us) 00:32:32.595 Device Information : IOPS MiB/s Average min max 00:32:32.595 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1198.10 149.76 107200.64 9133.99 229146.21 00:32:32.595 ======================================================== 00:32:32.595 Total : 1198.10 149.76 107200.64 9133.99 229146.21 00:32:32.595 00:32:32.595 12:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:32.595 12:53:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e500cc08-46bc-4895-8ea4-82db47f07312 00:32:33.163 12:53:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:32:33.421 12:53:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 861d9f8b-3639-40c7-a9c2-78e0781aa831 00:32:33.680 12:53:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:33.938 12:53:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:32:33.938 12:53:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:32:33.938 12:53:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # nvmfcleanup 00:32:33.938 12:53:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:32:33.938 12:53:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:33.938 12:53:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:32:33.938 12:53:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:33.938 12:53:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:33.938 rmmod nvme_tcp 
00:32:33.938 rmmod nvme_fabrics 00:32:33.938 rmmod nvme_keyring 00:32:33.938 12:53:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:33.938 12:53:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:32:33.938 12:53:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:32:33.938 12:53:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@513 -- # '[' -n 496752 ']' 00:32:33.938 12:53:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # killprocess 496752 00:32:33.939 12:53:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 496752 ']' 00:32:33.939 12:53:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 496752 00:32:33.939 12:53:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:32:33.939 12:53:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:33.939 12:53:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 496752 00:32:33.939 12:53:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:33.939 12:53:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:33.939 12:53:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 496752' 00:32:33.939 killing process with pid 496752 00:32:33.939 12:53:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 496752 00:32:33.939 12:53:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 496752 00:32:35.843 12:54:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:32:35.843 12:54:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:32:35.843 12:54:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:32:35.843 12:54:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:32:35.843 12:54:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-save 00:32:35.843 12:54:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:32:35.843 12:54:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-restore 00:32:35.843 12:54:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:35.843 12:54:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:35.843 12:54:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:35.843 12:54:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:35.843 12:54:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:37.749 12:54:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:37.749 00:32:37.749 real 1m33.394s 00:32:37.749 user 5m33.050s 00:32:37.749 sys 0m17.058s 00:32:37.749 12:54:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:37.749 12:54:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:37.749 ************************************ 00:32:37.749 END TEST nvmf_perf 00:32:37.749 ************************************ 00:32:37.749 12:54:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:32:37.749 12:54:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.750 ************************************ 00:32:37.750 START TEST nvmf_fio_host 00:32:37.750 ************************************ 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:32:37.750 * Looking for test storage... 00:32:37.750 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lcov --version 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:37.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:37.750 --rc genhtml_branch_coverage=1 00:32:37.750 --rc genhtml_function_coverage=1 00:32:37.750 --rc genhtml_legend=1 00:32:37.750 --rc geninfo_all_blocks=1 00:32:37.750 --rc geninfo_unexecuted_blocks=1 00:32:37.750 00:32:37.750 ' 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:37.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:37.750 --rc genhtml_branch_coverage=1 00:32:37.750 --rc genhtml_function_coverage=1 00:32:37.750 --rc genhtml_legend=1 00:32:37.750 --rc geninfo_all_blocks=1 00:32:37.750 --rc geninfo_unexecuted_blocks=1 00:32:37.750 00:32:37.750 ' 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:37.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:37.750 --rc genhtml_branch_coverage=1 00:32:37.750 --rc genhtml_function_coverage=1 00:32:37.750 --rc genhtml_legend=1 00:32:37.750 --rc geninfo_all_blocks=1 00:32:37.750 --rc geninfo_unexecuted_blocks=1 00:32:37.750 00:32:37.750 ' 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:37.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:37.750 --rc genhtml_branch_coverage=1 00:32:37.750 --rc genhtml_function_coverage=1 00:32:37.750 --rc genhtml_legend=1 00:32:37.750 --rc geninfo_all_blocks=1 00:32:37.750 --rc geninfo_unexecuted_blocks=1 00:32:37.750 00:32:37.750 ' 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:37.750 12:54:03 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:37.750 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:37.751 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:37.751 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:37.751 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:32:37.751 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:37.751 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:32:37.751 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:37.751 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:37.751 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:37.751 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:37.751 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:37.751 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:37.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:37.751 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:37.751 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:37.751 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:37.751 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:37.751 
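[annotation] One real wart surfaces above: nvmf/common.sh line 33 prints "[: : integer expression expected" because '[' '' -eq 1 ']' tests an empty expansion numerically. The test falls through harmlessly here, but the idiomatic guard is to default the variable before comparing (a generic sketch; the actual variable name is not visible in this trace):

    # default an unset/empty numeric toggle to 0 before the -eq comparison
    if [ "${SOME_TOGGLE:-0}" -eq 1 ]; then
      : # the optional behaviour would be enabled here
    fi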
12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:32:37.751 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:32:37.751 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:37.751 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:32:37.751 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:32:37.751 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:32:37.751 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:37.751 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:37.751 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:37.751 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:32:37.751 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:32:37.751 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:32:37.751 12:54:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.322 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:44.322 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:32:44.322 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:44.322 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:44.322 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:44.322 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:44.322 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:44.322 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:32:44.322 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:44.322 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:44.323 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:44.323 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 
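[annotation] The discovery walk above keys entirely off PCI vendor:device IDs (0x8086:0x159b is an Intel E810 port) and then reads the bound netdev name out of sysfs, which is how it lands on cvl_0_0 and cvl_0_1. The same lookup done by hand on such a box (a sketch):

    lspci -d 8086:159b                           # list E810 ports, e.g. 0000:af:00.0 and 0000:af:00.1
    ls /sys/bus/pci/devices/0000:af:00.0/net     # -> cvl_0_0, the netdev the script records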
00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ up == up ]] 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:44.323 Found net devices under 0000:af:00.0: cvl_0_0 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ up == up ]] 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:44.323 Found net devices under 0000:af:00.1: cvl_0_1 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # is_hw=yes 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:44.323 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:44.323 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms 00:32:44.323 00:32:44.323 --- 10.0.0.2 ping statistics --- 00:32:44.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:44.323 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:44.323 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:44.323 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:32:44.323 00:32:44.323 --- 10.0.0.1 ping statistics --- 00:32:44.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:44.323 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # return 0 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=513866 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 513866 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 513866 ']' 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:44.323 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:44.324 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:44.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:44.324 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:44.324 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.324 [2024-12-16 12:54:09.765121] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
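[annotation] The two successful pings confirm the topology nvmftestinit just built: the target NIC (cvl_0_0, 10.0.0.2) is moved into its own network namespace while the initiator NIC (cvl_0_1, 10.0.0.1) stays in the root namespace, so traffic really crosses the E810 pair, and nvmf_tgt is launched inside that namespace. Condensed from the trace (a sketch):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF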
00:32:44.324 [2024-12-16 12:54:09.765163] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:44.324 [2024-12-16 12:54:09.838174] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:44.324 [2024-12-16 12:54:09.878362] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:44.324 [2024-12-16 12:54:09.878404] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:44.324 [2024-12-16 12:54:09.878410] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:44.324 [2024-12-16 12:54:09.878416] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:44.324 [2024-12-16 12:54:09.878422] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:44.324 [2024-12-16 12:54:09.878480] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:32:44.324 [2024-12-16 12:54:09.878570] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:32:44.324 [2024-12-16 12:54:09.878677] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:44.324 [2024-12-16 12:54:09.878678] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:32:44.324 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:44.324 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:32:44.324 12:54:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:44.324 [2024-12-16 12:54:10.164620] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:44.324 12:54:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:32:44.324 12:54:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:44.324 12:54:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.324 12:54:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:32:44.583 Malloc1 00:32:44.583 12:54:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:44.842 12:54:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:44.842 12:54:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:45.101 [2024-12-16 12:54:11.042893] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:45.101 12:54:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:45.359 12:54:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:32:45.359 12:54:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:45.359 12:54:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:45.359 12:54:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:45.359 12:54:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:45.359 12:54:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:45.359 12:54:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:45.359 12:54:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:32:45.359 12:54:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:45.359 12:54:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:45.359 12:54:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:45.359 12:54:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:32:45.359 12:54:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:45.359 12:54:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:45.359 12:54:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:45.359 12:54:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:45.359 12:54:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:45.359 12:54:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:45.359 12:54:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:45.359 12:54:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:45.359 12:54:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:45.359 12:54:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:45.359 12:54:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:45.618 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:45.618 fio-3.35 00:32:45.618 Starting 1 thread 00:32:48.178 00:32:48.178 test: (groupid=0, jobs=1): 
err= 0: pid=514304: Mon Dec 16 12:54:13 2024 00:32:48.178 read: IOPS=11.9k, BW=46.5MiB/s (48.8MB/s)(93.3MiB/2005msec) 00:32:48.178 slat (nsec): min=1516, max=246164, avg=1690.03, stdev=2233.61 00:32:48.178 clat (usec): min=3063, max=10343, avg=5941.89, stdev=453.89 00:32:48.178 lat (usec): min=3098, max=10344, avg=5943.58, stdev=453.83 00:32:48.178 clat percentiles (usec): 00:32:48.178 | 1.00th=[ 4883], 5.00th=[ 5211], 10.00th=[ 5407], 20.00th=[ 5604], 00:32:48.178 | 30.00th=[ 5735], 40.00th=[ 5866], 50.00th=[ 5932], 60.00th=[ 6063], 00:32:48.178 | 70.00th=[ 6194], 80.00th=[ 6325], 90.00th=[ 6456], 95.00th=[ 6652], 00:32:48.178 | 99.00th=[ 6915], 99.50th=[ 7046], 99.90th=[ 8979], 99.95th=[ 9241], 00:32:48.178 | 99.99th=[10159] 00:32:48.178 bw ( KiB/s): min=46808, max=48088, per=99.96%, avg=47606.00, stdev=607.94, samples=4 00:32:48.178 iops : min=11702, max=12022, avg=11901.50, stdev=151.99, samples=4 00:32:48.178 write: IOPS=11.9k, BW=46.3MiB/s (48.6MB/s)(92.8MiB/2005msec); 0 zone resets 00:32:48.178 slat (nsec): min=1555, max=224421, avg=1757.03, stdev=1631.24 00:32:48.178 clat (usec): min=2417, max=9082, avg=4788.37, stdev=378.26 00:32:48.178 lat (usec): min=2432, max=9084, avg=4790.13, stdev=378.31 00:32:48.178 clat percentiles (usec): 00:32:48.178 | 1.00th=[ 3916], 5.00th=[ 4228], 10.00th=[ 4359], 20.00th=[ 4490], 00:32:48.178 | 30.00th=[ 4621], 40.00th=[ 4686], 50.00th=[ 4817], 60.00th=[ 4883], 00:32:48.178 | 70.00th=[ 4948], 80.00th=[ 5080], 90.00th=[ 5211], 95.00th=[ 5342], 00:32:48.178 | 99.00th=[ 5604], 99.50th=[ 5800], 99.90th=[ 7111], 99.95th=[ 7963], 00:32:48.178 | 99.99th=[ 8979] 00:32:48.178 bw ( KiB/s): min=47040, max=47744, per=100.00%, avg=47424.00, stdev=295.68, samples=4 00:32:48.178 iops : min=11760, max=11936, avg=11856.00, stdev=73.92, samples=4 00:32:48.178 lat (msec) : 4=0.83%, 10=99.16%, 20=0.01% 00:32:48.178 cpu : usr=71.86%, sys=27.05%, ctx=107, majf=0, minf=4 00:32:48.178 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:32:48.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:48.178 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:48.178 issued rwts: total=23873,23769,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:48.178 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:48.178 00:32:48.178 Run status group 0 (all jobs): 00:32:48.178 READ: bw=46.5MiB/s (48.8MB/s), 46.5MiB/s-46.5MiB/s (48.8MB/s-48.8MB/s), io=93.3MiB (97.8MB), run=2005-2005msec 00:32:48.178 WRITE: bw=46.3MiB/s (48.6MB/s), 46.3MiB/s-46.3MiB/s (48.6MB/s-48.6MB/s), io=92.8MiB (97.4MB), run=2005-2005msec 00:32:48.178 12:54:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:48.178 12:54:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:48.178 12:54:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:48.178 12:54:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:48.178 12:54:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # 
local sanitizers 00:32:48.178 12:54:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:48.178 12:54:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:32:48.178 12:54:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:48.178 12:54:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:48.178 12:54:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:48.178 12:54:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:32:48.178 12:54:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:48.178 12:54:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:48.178 12:54:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:48.178 12:54:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:48.178 12:54:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:48.178 12:54:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:48.178 12:54:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:48.178 12:54:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:48.178 12:54:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:48.178 12:54:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:48.178 12:54:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:48.439 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:32:48.439 fio-3.35 00:32:48.439 Starting 1 thread 00:32:50.974 00:32:50.974 test: (groupid=0, jobs=1): err= 0: pid=514804: Mon Dec 16 12:54:16 2024 00:32:50.974 read: IOPS=10.8k, BW=169MiB/s (177MB/s)(339MiB/2007msec) 00:32:50.974 slat (nsec): min=2337, max=92244, avg=2888.80, stdev=1474.25 00:32:50.974 clat (usec): min=1232, max=51640, avg=6890.43, stdev=3465.64 00:32:50.974 lat (usec): min=1235, max=51642, avg=6893.32, stdev=3465.67 00:32:50.974 clat percentiles (usec): 00:32:50.974 | 1.00th=[ 3654], 5.00th=[ 4293], 10.00th=[ 4686], 20.00th=[ 5276], 00:32:50.974 | 30.00th=[ 5735], 40.00th=[ 6128], 50.00th=[ 6652], 60.00th=[ 7111], 00:32:50.974 | 70.00th=[ 7504], 80.00th=[ 7963], 90.00th=[ 8586], 95.00th=[ 9241], 00:32:50.974 | 99.00th=[11207], 99.50th=[44827], 99.90th=[49546], 99.95th=[50070], 00:32:50.974 | 99.99th=[50594] 00:32:50.974 bw ( KiB/s): min=78400, max=97760, per=50.54%, avg=87336.00, stdev=8376.77, samples=4 00:32:50.974 iops : min= 4900, max= 6110, avg=5458.50, stdev=523.55, samples=4 00:32:50.974 write: IOPS=6535, BW=102MiB/s (107MB/s)(178MiB/1747msec); 0 zone resets 00:32:50.974 
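Annotation: the xtrace above shows how host/fio.sh drives stock fio through SPDK's NVMe plugin rather than a kernel block device: the plugin binary goes into LD_PRELOAD and the target is named through a trtype/traddr filename string. A minimal standalone sketch of the same invocation, using the paths and the 10.0.0.2:4420 address exactly as they appear in this log (adjust for your own tree):

    # Run fio with the SPDK NVMe ioengine against the NVMe/TCP subsystem.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    LD_PRELOAD=$SPDK/build/fio/spdk_nvme /usr/src/fio/fio \
        $SPDK/app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
        --bs=4096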
slat (usec): min=28, max=260, avg=32.35, stdev= 6.27 00:32:50.974 clat (usec): min=3712, max=13508, avg=8541.88, stdev=1380.41 00:32:50.974 lat (usec): min=3742, max=13538, avg=8574.24, stdev=1381.09 00:32:50.974 clat percentiles (usec): 00:32:50.974 | 1.00th=[ 5800], 5.00th=[ 6587], 10.00th=[ 6980], 20.00th=[ 7439], 00:32:50.974 | 30.00th=[ 7767], 40.00th=[ 8094], 50.00th=[ 8356], 60.00th=[ 8717], 00:32:50.974 | 70.00th=[ 9110], 80.00th=[ 9634], 90.00th=[10552], 95.00th=[11076], 00:32:50.974 | 99.00th=[11994], 99.50th=[12256], 99.90th=[13173], 99.95th=[13435], 00:32:50.974 | 99.99th=[13435] 00:32:50.974 bw ( KiB/s): min=80640, max=101920, per=86.85%, avg=90808.00, stdev=8922.45, samples=4 00:32:50.975 iops : min= 5040, max= 6370, avg=5675.50, stdev=557.65, samples=4 00:32:50.975 lat (msec) : 2=0.06%, 4=1.52%, 10=91.22%, 20=6.82%, 50=0.34% 00:32:50.975 lat (msec) : 100=0.04% 00:32:50.975 cpu : usr=82.80%, sys=14.21%, ctx=211, majf=0, minf=4 00:32:50.975 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:32:50.975 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:50.975 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:50.975 issued rwts: total=21676,11417,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:50.975 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:50.975 00:32:50.975 Run status group 0 (all jobs): 00:32:50.975 READ: bw=169MiB/s (177MB/s), 169MiB/s-169MiB/s (177MB/s-177MB/s), io=339MiB (355MB), run=2007-2007msec 00:32:50.975 WRITE: bw=102MiB/s (107MB/s), 102MiB/s-102MiB/s (107MB/s-107MB/s), io=178MiB (187MB), run=1747-1747msec 00:32:50.975 12:54:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:50.975 12:54:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:32:50.975 12:54:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:32:50.975 12:54:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:32:50.975 12:54:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # bdfs=() 00:32:50.975 12:54:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # local bdfs 00:32:50.975 12:54:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:50.975 12:54:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:50.975 12:54:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:32:50.975 12:54:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:32:50.975 12:54:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:32:50.975 12:54:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 -i 10.0.0.2 00:32:54.262 Nvme0n1 00:32:54.262 12:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:32:56.796 12:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # 
ls_guid=cade5bb2-aa4a-4a4e-8122-070252df1a4f 00:32:56.796 12:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb cade5bb2-aa4a-4a4e-8122-070252df1a4f 00:32:56.796 12:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=cade5bb2-aa4a-4a4e-8122-070252df1a4f 00:32:56.796 12:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:32:56.796 12:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:32:56.796 12:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:32:56.796 12:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:57.055 12:54:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:32:57.055 { 00:32:57.055 "uuid": "cade5bb2-aa4a-4a4e-8122-070252df1a4f", 00:32:57.055 "name": "lvs_0", 00:32:57.055 "base_bdev": "Nvme0n1", 00:32:57.055 "total_data_clusters": 930, 00:32:57.055 "free_clusters": 930, 00:32:57.055 "block_size": 512, 00:32:57.055 "cluster_size": 1073741824 00:32:57.055 } 00:32:57.055 ]' 00:32:57.055 12:54:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="cade5bb2-aa4a-4a4e-8122-070252df1a4f") .free_clusters' 00:32:57.055 12:54:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=930 00:32:57.055 12:54:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="cade5bb2-aa4a-4a4e-8122-070252df1a4f") .cluster_size' 00:32:57.055 12:54:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:32:57.055 12:54:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=952320 00:32:57.055 12:54:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 952320 00:32:57.055 952320 00:32:57.055 12:54:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:32:57.622 b64650e8-4e27-4047-b490-d1003e21638b 00:32:57.622 12:54:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:32:57.622 12:54:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:32:57.881 12:54:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:58.140 12:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:58.140 12:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:58.140 12:54:24 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:58.140 12:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:58.140 12:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:58.140 12:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:58.140 12:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:32:58.140 12:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:58.140 12:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:58.140 12:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:58.140 12:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:32:58.140 12:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:58.140 12:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:58.140 12:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:58.140 12:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:58.141 12:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:58.141 12:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:58.141 12:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:58.141 12:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:58.141 12:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:58.141 12:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:58.141 12:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:58.399 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:58.400 fio-3.35 00:32:58.400 Starting 1 thread 00:33:00.934 00:33:00.934 test: (groupid=0, jobs=1): err= 0: pid=516495: Mon Dec 16 12:54:26 2024 00:33:00.934 read: IOPS=8122, BW=31.7MiB/s (33.3MB/s)(63.6MiB/2006msec) 00:33:00.934 slat (nsec): min=1519, max=121046, avg=1637.52, stdev=1225.07 00:33:00.934 clat (usec): min=936, max=169926, avg=8623.07, stdev=10239.11 00:33:00.934 lat (usec): min=938, max=169946, avg=8624.71, stdev=10239.30 00:33:00.934 clat percentiles (msec): 00:33:00.934 | 1.00th=[ 7], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 8], 00:33:00.934 | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 8], 60.00th=[ 9], 00:33:00.934 | 70.00th=[ 9], 80.00th=[ 9], 90.00th=[ 9], 95.00th=[ 9], 00:33:00.934 | 99.00th=[ 10], 99.50th=[ 13], 99.90th=[ 169], 99.95th=[ 169], 00:33:00.934 | 
99.99th=[ 171] 00:33:00.934 bw ( KiB/s): min=22906, max=35840, per=99.85%, avg=32442.50, stdev=6360.63, samples=4 00:33:00.934 iops : min= 5726, max= 8960, avg=8110.50, stdev=1590.41, samples=4 00:33:00.934 write: IOPS=8124, BW=31.7MiB/s (33.3MB/s)(63.7MiB/2006msec); 0 zone resets 00:33:00.934 slat (nsec): min=1555, max=84647, avg=1710.67, stdev=743.66 00:33:00.934 clat (usec): min=244, max=168496, avg=7013.02, stdev=9559.28 00:33:00.934 lat (usec): min=246, max=168501, avg=7014.73, stdev=9559.46 00:33:00.934 clat percentiles (msec): 00:33:00.934 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:33:00.934 | 30.00th=[ 7], 40.00th=[ 7], 50.00th=[ 7], 60.00th=[ 7], 00:33:00.934 | 70.00th=[ 7], 80.00th=[ 7], 90.00th=[ 8], 95.00th=[ 8], 00:33:00.934 | 99.00th=[ 8], 99.50th=[ 10], 99.90th=[ 169], 99.95th=[ 169], 00:33:00.934 | 99.99th=[ 169] 00:33:00.934 bw ( KiB/s): min=23864, max=35392, per=99.89%, avg=32462.00, stdev=5732.19, samples=4 00:33:00.934 iops : min= 5966, max= 8848, avg=8115.50, stdev=1433.05, samples=4 00:33:00.934 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:33:00.934 lat (msec) : 2=0.04%, 4=0.25%, 10=99.13%, 20=0.17%, 250=0.39% 00:33:00.934 cpu : usr=70.47%, sys=28.68%, ctx=130, majf=0, minf=4 00:33:00.934 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:33:00.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:00.934 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:00.934 issued rwts: total=16294,16297,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:00.934 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:00.934 00:33:00.934 Run status group 0 (all jobs): 00:33:00.934 READ: bw=31.7MiB/s (33.3MB/s), 31.7MiB/s-31.7MiB/s (33.3MB/s-33.3MB/s), io=63.6MiB (66.7MB), run=2006-2006msec 00:33:00.934 WRITE: bw=31.7MiB/s (33.3MB/s), 31.7MiB/s-31.7MiB/s (33.3MB/s-33.3MB/s), io=63.7MiB (66.8MB), run=2006-2006msec 00:33:00.934 12:54:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:00.934 12:54:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:33:01.871 12:54:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=8593186c-27ca-4eb3-be85-ae497a88aba7 00:33:01.871 12:54:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 8593186c-27ca-4eb3-be85-ae497a88aba7 00:33:01.871 12:54:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=8593186c-27ca-4eb3-be85-ae497a88aba7 00:33:01.871 12:54:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:33:01.871 12:54:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:33:01.871 12:54:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:33:01.871 12:54:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:02.130 12:54:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:33:02.131 { 00:33:02.131 "uuid": "cade5bb2-aa4a-4a4e-8122-070252df1a4f", 00:33:02.131 "name": "lvs_0", 00:33:02.131 "base_bdev": "Nvme0n1", 00:33:02.131 "total_data_clusters": 
930, 00:33:02.131 "free_clusters": 0, 00:33:02.131 "block_size": 512, 00:33:02.131 "cluster_size": 1073741824 00:33:02.131 }, 00:33:02.131 { 00:33:02.131 "uuid": "8593186c-27ca-4eb3-be85-ae497a88aba7", 00:33:02.131 "name": "lvs_n_0", 00:33:02.131 "base_bdev": "b64650e8-4e27-4047-b490-d1003e21638b", 00:33:02.131 "total_data_clusters": 237847, 00:33:02.131 "free_clusters": 237847, 00:33:02.131 "block_size": 512, 00:33:02.131 "cluster_size": 4194304 00:33:02.131 } 00:33:02.131 ]' 00:33:02.131 12:54:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="8593186c-27ca-4eb3-be85-ae497a88aba7") .free_clusters' 00:33:02.131 12:54:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=237847 00:33:02.131 12:54:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="8593186c-27ca-4eb3-be85-ae497a88aba7") .cluster_size' 00:33:02.131 12:54:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:33:02.131 12:54:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=951388 00:33:02.131 12:54:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 951388 00:33:02.131 951388 00:33:02.131 12:54:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:33:02.699 04d7ce14-ae4b-44e6-8c29-17d917d35baa 00:33:02.699 12:54:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:33:02.958 12:54:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:33:03.217 12:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:33:03.476 12:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:03.476 12:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:03.476 12:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:03.476 12:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:03.476 12:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:03.476 12:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:03.476 12:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:33:03.476 12:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:03.476 12:54:29 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:03.476 12:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:03.476 12:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:33:03.476 12:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:03.476 12:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:03.476 12:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:03.476 12:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:03.476 12:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:03.476 12:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:33:03.476 12:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:03.476 12:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:03.476 12:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:03.476 12:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:03.476 12:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:03.735 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:03.735 fio-3.35 00:33:03.735 Starting 1 thread 00:33:06.268 00:33:06.268 test: (groupid=0, jobs=1): err= 0: pid=517498: Mon Dec 16 12:54:32 2024 00:33:06.268 read: IOPS=7897, BW=30.9MiB/s (32.3MB/s)(61.9MiB/2007msec) 00:33:06.268 slat (nsec): min=1530, max=100053, avg=1712.88, stdev=1126.40 00:33:06.268 clat (usec): min=3136, max=15136, avg=8907.13, stdev=771.97 00:33:06.268 lat (usec): min=3139, max=15137, avg=8908.84, stdev=771.91 00:33:06.268 clat percentiles (usec): 00:33:06.268 | 1.00th=[ 7111], 5.00th=[ 7701], 10.00th=[ 7963], 20.00th=[ 8291], 00:33:06.268 | 30.00th=[ 8586], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9110], 00:33:06.268 | 70.00th=[ 9241], 80.00th=[ 9503], 90.00th=[ 9896], 95.00th=[10159], 00:33:06.268 | 99.00th=[10552], 99.50th=[10814], 99.90th=[13173], 99.95th=[14222], 00:33:06.268 | 99.99th=[15139] 00:33:06.268 bw ( KiB/s): min=30112, max=32280, per=99.93%, avg=31568.00, stdev=984.08, samples=4 00:33:06.268 iops : min= 7528, max= 8070, avg=7892.00, stdev=246.02, samples=4 00:33:06.268 write: IOPS=7871, BW=30.7MiB/s (32.2MB/s)(61.7MiB/2007msec); 0 zone resets 00:33:06.268 slat (nsec): min=1551, max=72907, avg=1780.52, stdev=716.49 00:33:06.268 clat (usec): min=1437, max=14354, avg=7208.16, stdev=651.09 00:33:06.268 lat (usec): min=1443, max=14356, avg=7209.94, stdev=651.09 00:33:06.268 clat percentiles (usec): 00:33:06.268 | 1.00th=[ 5735], 5.00th=[ 6259], 10.00th=[ 6456], 20.00th=[ 6718], 00:33:06.268 | 30.00th=[ 6915], 40.00th=[ 7046], 50.00th=[ 7242], 60.00th=[ 7373], 00:33:06.268 | 
70.00th=[ 7504], 80.00th=[ 7701], 90.00th=[ 7963], 95.00th=[ 8160], 00:33:06.268 | 99.00th=[ 8586], 99.50th=[ 8848], 99.90th=[11338], 99.95th=[13304], 00:33:06.268 | 99.99th=[14222] 00:33:06.268 bw ( KiB/s): min=31168, max=31624, per=99.97%, avg=31476.00, stdev=207.85, samples=4 00:33:06.268 iops : min= 7792, max= 7906, avg=7869.00, stdev=51.96, samples=4 00:33:06.268 lat (msec) : 2=0.01%, 4=0.11%, 10=96.43%, 20=3.46% 00:33:06.268 cpu : usr=71.14%, sys=28.07%, ctx=140, majf=0, minf=4 00:33:06.268 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:33:06.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:06.268 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:06.268 issued rwts: total=15851,15798,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:06.268 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:06.268 00:33:06.268 Run status group 0 (all jobs): 00:33:06.268 READ: bw=30.9MiB/s (32.3MB/s), 30.9MiB/s-30.9MiB/s (32.3MB/s-32.3MB/s), io=61.9MiB (64.9MB), run=2007-2007msec 00:33:06.268 WRITE: bw=30.7MiB/s (32.2MB/s), 30.7MiB/s-30.7MiB/s (32.2MB/s-32.2MB/s), io=61.7MiB (64.7MB), run=2007-2007msec 00:33:06.269 12:54:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:33:06.269 12:54:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:33:06.269 12:54:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:33:10.461 12:54:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:33:10.461 12:54:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:33:12.997 12:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:33:13.257 12:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:33:15.161 12:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:15.161 12:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:33:15.162 12:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:33:15.162 12:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@512 -- # nvmfcleanup 00:33:15.162 12:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:33:15.162 12:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:15.162 12:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:33:15.162 12:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:15.162 12:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:15.162 rmmod nvme_tcp 00:33:15.162 rmmod nvme_fabrics 00:33:15.162 rmmod nvme_keyring 00:33:15.162 12:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:15.162 12:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 
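Annotation: the teardown traced above removes objects in reverse order of creation: subsystem first, then the nested lvol and its store, then the base lvol and store, and finally the NVMe controller. Condensed from the rpc.py calls issued by host/fio.sh in this log (the -t 120 timeout on the nested delete is as issued there):

    # Teardown order used above: children before parents.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
    $RPC -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0   # nested lvol bdev
    $RPC bdev_lvol_delete_lvstore -l lvs_n_0          # nested lvstore
    $RPC bdev_lvol_delete lvs_0/lbd_0                 # base lvol bdev
    $RPC bdev_lvol_delete_lvstore -l lvs_0            # base lvstore
    $RPC bdev_nvme_detach_controller Nvme0            # release the NVMe SSD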
00:33:15.162 12:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:33:15.162 12:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@513 -- # '[' -n 513866 ']' 00:33:15.162 12:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # killprocess 513866 00:33:15.162 12:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 513866 ']' 00:33:15.162 12:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 513866 00:33:15.162 12:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:33:15.162 12:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:15.162 12:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 513866 00:33:15.162 12:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:15.162 12:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:15.162 12:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 513866' 00:33:15.162 killing process with pid 513866 00:33:15.162 12:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 513866 00:33:15.162 12:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 513866 00:33:15.420 12:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:33:15.420 12:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:33:15.420 12:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:33:15.420 12:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:33:15.420 12:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-save 00:33:15.420 12:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:33:15.420 12:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-restore 00:33:15.420 12:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:15.420 12:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:15.420 12:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:15.420 12:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:15.420 12:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:17.956 12:54:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:17.956 00:33:17.956 real 0m39.898s 00:33:17.956 user 2m39.880s 00:33:17.956 sys 0m8.873s 00:33:17.956 12:54:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:17.956 12:54:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.956 ************************************ 00:33:17.956 END TEST nvmf_fio_host 00:33:17.956 ************************************ 00:33:17.956 12:54:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:17.956 12:54:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 
-- # '[' 3 -le 1 ']' 00:33:17.956 12:54:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:17.956 12:54:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.956 ************************************ 00:33:17.956 START TEST nvmf_failover 00:33:17.956 ************************************ 00:33:17.956 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:17.956 * Looking for test storage... 00:33:17.956 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:17.956 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:17.956 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lcov --version 00:33:17.956 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:17.956 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:17.956 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:17.956 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:17.956 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:17.956 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:33:17.956 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:33:17.956 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:33:17.956 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:33:17.956 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:33:17.956 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:33:17.956 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:33:17.956 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:17.956 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:33:17.956 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:33:17.956 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:17.956 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:17.956 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:33:17.956 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:33:17.956 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:17.956 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:33:17.956 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:33:17.956 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:33:17.956 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:33:17.956 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:17.956 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:33:17.956 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:33:17.956 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:17.956 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:17.956 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:33:17.956 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:17.956 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:17.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:17.956 --rc genhtml_branch_coverage=1 00:33:17.956 --rc genhtml_function_coverage=1 00:33:17.956 --rc genhtml_legend=1 00:33:17.956 --rc geninfo_all_blocks=1 00:33:17.956 --rc geninfo_unexecuted_blocks=1 00:33:17.956 00:33:17.956 ' 00:33:17.956 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:17.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:17.956 --rc genhtml_branch_coverage=1 00:33:17.956 --rc genhtml_function_coverage=1 00:33:17.957 --rc genhtml_legend=1 00:33:17.957 --rc geninfo_all_blocks=1 00:33:17.957 --rc geninfo_unexecuted_blocks=1 00:33:17.957 00:33:17.957 ' 00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:17.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:17.957 --rc genhtml_branch_coverage=1 00:33:17.957 --rc genhtml_function_coverage=1 00:33:17.957 --rc genhtml_legend=1 00:33:17.957 --rc geninfo_all_blocks=1 00:33:17.957 --rc geninfo_unexecuted_blocks=1 00:33:17.957 00:33:17.957 ' 00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:17.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:17.957 --rc genhtml_branch_coverage=1 00:33:17.957 --rc genhtml_function_coverage=1 00:33:17.957 --rc genhtml_legend=1 00:33:17.957 --rc geninfo_all_blocks=1 00:33:17.957 --rc geninfo_unexecuted_blocks=1 00:33:17.957 00:33:17.957 ' 00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:17.957 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
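Annotation: the "[: : integer expression expected" line above is shell noise, not a test failure. nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' with an empty value, and test(1) rejects a non-integer operand for -eq. A two-line reproduction plus the usual guard (the variable name here is illustrative, not taken from common.sh):

    flag=""
    [ "$flag" -eq 1 ]        # -> [: : integer expression expected
    [ "${flag:-0}" -eq 1 ]   # guard: default an empty value to 0 first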
00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # prepare_net_devs 00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@434 -- # local -g is_hw=no 00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # remove_spdk_ns 00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:33:17.957 12:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:23.234 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:23.234 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@407 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:23.234 Found net devices under 0000:af:00.0: cvl_0_0 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:23.234 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:23.235 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:23.235 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:23.235 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:23.235 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:23.235 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:23.235 Found net devices under 0000:af:00.1: cvl_0_1 00:33:23.235 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:23.235 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:33:23.235 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # is_hw=yes 00:33:23.235 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:33:23.235 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:33:23.235 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:33:23.235 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:23.235 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:23.235 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:23.235 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:23.235 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:23.235 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:23.235 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:23.494 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:23.494 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:23.494 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:23.494 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:23.494 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:23.494 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:23.494 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:23.494 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:23.494 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:23.494 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:23.494 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:23.494 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:23.753 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:23.753 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:23.753 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:23.753 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:23.753 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:23.753 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.367 ms 00:33:23.753 00:33:23.753 --- 10.0.0.2 ping statistics --- 00:33:23.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:23.753 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:33:23.753 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:23.753 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:23.753 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:33:23.753 00:33:23.753 --- 10.0.0.1 ping statistics --- 00:33:23.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:23.753 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:33:23.753 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:23.753 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # return 0 00:33:23.753 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:33:23.753 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:23.753 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:33:23.753 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:33:23.753 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:23.753 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:33:23.753 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:33:23.753 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:33:23.753 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:33:23.753 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:23.753 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:23.753 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # nvmfpid=522733 00:33:23.753 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:23.753 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # waitforlisten 522733 00:33:23.753 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 522733 ']' 00:33:23.753 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:23.753 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:23.753 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:23.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:23.753 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:23.753 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:23.753 [2024-12-16 12:54:49.726447] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:33:23.753 [2024-12-16 12:54:49.726489] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:23.753 [2024-12-16 12:54:49.799636] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:24.013 [2024-12-16 12:54:49.838973] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
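The namespace plumbing traced above reduces to the following sketch. The interface names (cvl_0_0, cvl_0_1), the 10.0.0.0/24 addressing, and the namespace name are specific to this runner; on other hardware, substitute your own port names. Every command here appears verbatim in the trace:

    # move the target-side port into its own namespace; the initiator port stays in the root ns
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port and verify reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1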
00:33:24.013 [2024-12-16 12:54:49.839012] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:24.013 [2024-12-16 12:54:49.839022] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:24.013 [2024-12-16 12:54:49.839030] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:24.013 [2024-12-16 12:54:49.839036] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:24.013 [2024-12-16 12:54:49.839174] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:33:24.013 [2024-12-16 12:54:49.839287] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:33:24.013 [2024-12-16 12:54:49.839288] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:33:24.013 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:24.013 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:33:24.013 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:33:24.013 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:24.013 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:24.013 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:24.013 12:54:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:24.272 [2024-12-16 12:54:50.141003] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:24.272 12:54:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:24.531 Malloc0 00:33:24.531 12:54:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:24.789 12:54:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:24.789 12:54:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:25.048 [2024-12-16 12:54:50.969986] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:25.048 12:54:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:25.307 [2024-12-16 12:54:51.158512] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:25.307 12:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:25.307 [2024-12-16 12:54:51.355127] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:33:25.566 12:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=522980 00:33:25.566 12:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:33:25.566 12:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:25.566 12:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 522980 /var/tmp/bdevperf.sock 00:33:25.566 12:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 522980 ']' 00:33:25.566 12:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:25.566 12:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:25.566 12:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:25.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:25.566 12:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:25.566 12:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:25.566 12:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:25.566 12:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:33:25.566 12:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:25.825 NVMe0n1 00:33:26.084 12:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:26.343 00:33:26.343 12:54:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=523201 00:33:26.343 12:54:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:26.343 12:54:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:33:27.279 12:54:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:27.538 [2024-12-16 12:54:53.398545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1700 is same with the state(6) to be set 00:33:27.538 [2024-12-16 12:54:53.398594] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1700 is same with the state(6) to be set 00:33:27.538 [2024-12-16 12:54:53.398602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1700 is same with the state(6) to be set 00:33:27.538 [2024-12-16 12:54:53.398608] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1700 is same with the state(6) to be set
00:33:27.538 [2024-12-16 12:54:53.398615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1700 is same with the state(6) to be set
00:33:27.538 [2024-12-16 12:54:53.398621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1700 is same with the state(6) to be set
00:33:27.538 [2024-12-16 12:54:53.398628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1700 is same with the state(6) to be set
12:54:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:33:30.825 12:54:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:33:30.825 00:33:30.825
12:54:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:33:31.084 [2024-12-16 12:54:57.069772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be2500 is same with the state(6) to be set
00:33:31.084 [... eight more identical recv-state records for tqpair=0x1be2500 (12:54:57.069812 through .069880) omitted ...]
12:54:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:33:34.373 12:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:33:34.373 [2024-12-16 12:55:00.281526] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:33:34.373 12:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:33:35.308 12:55:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:33:35.567 [2024-12-16 12:55:01.506208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be30c0 is same with the state(6) to be set
00:33:35.567 [... seventeen more identical recv-state records for tqpair=0x1be30c0 (12:55:01.506250 through .506382) omitted ...]
12:55:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 523201
00:33:42.148 {
00:33:42.148   "results": [
00:33:42.148     {
00:33:42.148       "job": "NVMe0n1",
00:33:42.148       "core_mask": "0x1",
00:33:42.148       "workload": "verify",
00:33:42.148       "status": "finished",
00:33:42.148       "verify_range": {
00:33:42.148         "start": 0,
00:33:42.148         "length": 16384
00:33:42.148       },
00:33:42.148       "queue_depth": 128,
00:33:42.148       "io_size": 4096,
00:33:42.148       "runtime": 15.011183,
00:33:42.148       "iops": 11201.782031436163,
00:33:42.148       "mibps": 43.75696106029751,
00:33:42.148       "io_failed": 16053,
00:33:42.148       "io_timeout": 0,
00:33:42.148       "avg_latency_us": 10409.611399804307,
00:33:42.148       "min_latency_us": 423.25333333333333,
00:33:42.148       "max_latency_us": 19348.72380952381
00:33:42.148     }
00:33:42.148   ],
00:33:42.148   "core_count": 1
00:33:42.148 }
00:33:42.148 12:55:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 522980
00:33:42.148 12:55:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 522980 ']'
00:33:42.148 12:55:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 522980
00:33:42.148 12:55:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:33:42.148 12:55:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:33:42.148 12:55:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 522980
00:33:42.148 12:55:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:33:42.148 12:55:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:33:42.148 12:55:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 522980'
00:33:42.148 killing process with pid 522980
00:33:42.148 12:55:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 522980
00:33:42.148 12:55:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 522980
00:33:42.148 12:55:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-12-16 12:54:51.428475] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
[2024-12-16 12:54:51.428532] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid522980 ]
[2024-12-16 12:54:51.497781] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-12-16 12:54:51.537215] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
Running I/O for 15 seconds...
00:33:42.148 11357.00 IOPS, 44.36 MiB/s [2024-12-16T11:55:08.215Z]
[2024-12-16 12:54:53.399069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:100392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-12-16 12:54:53.399107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command/completion pair repeats for every I/O in flight on the removed path: WRITE commands covering lba 100392 through 100992 and READ commands covering lba 100136 through 100192, each len:8, every one completed with ABORTED - SQ DELETION (00/08) ...]
[2024-12-16 12:54:53.400358] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[2024-12-16 12:54:53.400365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101000 len:8 PRP1 0x0 PRP2 0x0
[2024-12-16 12:54:53.400372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-16 12:54:53.400399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[... three more ASYNC EVENT REQUEST records (cid:1 through cid:3), each likewise completed with ABORTED - SQ DELETION (00/08), omitted ...]
[2024-12-16 12:54:53.400459] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcaade0 is same with the state(6) to be set
[2024-12-16 12:54:53.400612] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2024-12-16 12:54:53.400619] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[2024-12-16 12:54:53.400625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101008 len:8 PRP1 0x0 PRP2 0x0
[2024-12-16 12:54:53.400631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same abort/manual-completion sequence repeats for the queued WRITEs at lba 101016 through 101104, where this excerpt breaks off mid-record ...]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.151 [2024-12-16 12:54:53.400929] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.151 [2024-12-16 12:54:53.400934] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.151 [2024-12-16 12:54:53.400941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101112 len:8 PRP1 0x0 PRP2 0x0 00:33:42.151 [2024-12-16 12:54:53.400947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.151 [2024-12-16 12:54:53.400955] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.151 [2024-12-16 12:54:53.400960] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.151 [2024-12-16 12:54:53.400965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101120 len:8 PRP1 0x0 PRP2 0x0 00:33:42.151 [2024-12-16 12:54:53.400972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.151 [2024-12-16 12:54:53.400978] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.151 [2024-12-16 12:54:53.400983] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.151 [2024-12-16 12:54:53.400989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101128 len:8 PRP1 0x0 PRP2 0x0 00:33:42.151 [2024-12-16 12:54:53.400995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.151 [2024-12-16 12:54:53.401002] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.151 [2024-12-16 12:54:53.401007] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.151 [2024-12-16 12:54:53.401012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101136 len:8 PRP1 0x0 PRP2 0x0 00:33:42.151 [2024-12-16 12:54:53.401018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.151 [2024-12-16 12:54:53.401024] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.151 [2024-12-16 12:54:53.401029] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.151 [2024-12-16 12:54:53.401034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101144 len:8 PRP1 0x0 PRP2 0x0 00:33:42.151 [2024-12-16 12:54:53.401040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.151 [2024-12-16 12:54:53.401047] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.151 [2024-12-16 12:54:53.401051] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.151 [2024-12-16 12:54:53.401057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101152 len:8 PRP1 0x0 PRP2 0x0 00:33:42.151 [2024-12-16 12:54:53.401063] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.151 [2024-12-16 12:54:53.401070] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.151 [2024-12-16 12:54:53.401074] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.152 [2024-12-16 12:54:53.401080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100200 len:8 PRP1 0x0 PRP2 0x0 00:33:42.152 [2024-12-16 12:54:53.401086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.152 [2024-12-16 12:54:53.401093] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.152 [2024-12-16 12:54:53.401097] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.152 [2024-12-16 12:54:53.401103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100208 len:8 PRP1 0x0 PRP2 0x0 00:33:42.152 [2024-12-16 12:54:53.401109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.152 [2024-12-16 12:54:53.401124] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.152 [2024-12-16 12:54:53.401129] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.152 [2024-12-16 12:54:53.401134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100216 len:8 PRP1 0x0 PRP2 0x0 00:33:42.152 [2024-12-16 12:54:53.401141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.152 [2024-12-16 12:54:53.401149] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.152 [2024-12-16 12:54:53.401154] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.152 [2024-12-16 12:54:53.401159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100224 len:8 PRP1 0x0 PRP2 0x0 00:33:42.152 [2024-12-16 12:54:53.401166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.152 [2024-12-16 12:54:53.401172] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.152 [2024-12-16 12:54:53.401177] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.152 [2024-12-16 12:54:53.401182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100232 len:8 PRP1 0x0 PRP2 0x0 00:33:42.152 [2024-12-16 12:54:53.401189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.152 [2024-12-16 12:54:53.401195] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.152 [2024-12-16 12:54:53.401200] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.152 [2024-12-16 12:54:53.401205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100240 len:8 PRP1 0x0 PRP2 0x0 00:33:42.152 [2024-12-16 12:54:53.401211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.152 [2024-12-16 12:54:53.401218] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.152 [2024-12-16 12:54:53.401223] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.152 [2024-12-16 12:54:53.401228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100248 len:8 PRP1 0x0 PRP2 0x0 00:33:42.152 [2024-12-16 12:54:53.401234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.152 [2024-12-16 12:54:53.401240] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.152 [2024-12-16 12:54:53.401246] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.152 [2024-12-16 12:54:53.401251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100256 len:8 PRP1 0x0 PRP2 0x0 00:33:42.152 [2024-12-16 12:54:53.401257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.152 [2024-12-16 12:54:53.401263] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.152 [2024-12-16 12:54:53.401268] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.152 [2024-12-16 12:54:53.401274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100264 len:8 PRP1 0x0 PRP2 0x0 00:33:42.152 [2024-12-16 12:54:53.401280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.152 [2024-12-16 12:54:53.401286] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.152 [2024-12-16 12:54:53.401291] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.152 [2024-12-16 12:54:53.401296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100272 len:8 PRP1 0x0 PRP2 0x0 00:33:42.152 [2024-12-16 12:54:53.401304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.152 [2024-12-16 12:54:53.401312] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.152 [2024-12-16 12:54:53.401317] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.152 [2024-12-16 12:54:53.401322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100280 len:8 PRP1 0x0 PRP2 0x0 00:33:42.152 [2024-12-16 12:54:53.401328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.152 [2024-12-16 12:54:53.401335] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.152 [2024-12-16 12:54:53.401340] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.152 [2024-12-16 12:54:53.401345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100288 len:8 PRP1 0x0 PRP2 0x0 00:33:42.152 [2024-12-16 12:54:53.401351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.152 [2024-12-16 
12:54:53.401358] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.152 [2024-12-16 12:54:53.401362] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.152 [2024-12-16 12:54:53.401368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100296 len:8 PRP1 0x0 PRP2 0x0 00:33:42.152 [2024-12-16 12:54:53.401374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.152 [2024-12-16 12:54:53.401380] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.152 [2024-12-16 12:54:53.401385] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.152 [2024-12-16 12:54:53.401390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100304 len:8 PRP1 0x0 PRP2 0x0 00:33:42.152 [2024-12-16 12:54:53.401396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.152 [2024-12-16 12:54:53.401403] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.152 [2024-12-16 12:54:53.401407] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.152 [2024-12-16 12:54:53.401412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100312 len:8 PRP1 0x0 PRP2 0x0 00:33:42.152 [2024-12-16 12:54:53.401419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.152 [2024-12-16 12:54:53.401425] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.152 [2024-12-16 12:54:53.401430] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.152 [2024-12-16 12:54:53.401435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100320 len:8 PRP1 0x0 PRP2 0x0 00:33:42.152 [2024-12-16 12:54:53.401441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.152 [2024-12-16 12:54:53.401448] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.152 [2024-12-16 12:54:53.401452] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.152 [2024-12-16 12:54:53.401458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100328 len:8 PRP1 0x0 PRP2 0x0 00:33:42.152 [2024-12-16 12:54:53.401464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.152 [2024-12-16 12:54:53.401470] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.152 [2024-12-16 12:54:53.401475] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.152 [2024-12-16 12:54:53.401482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100336 len:8 PRP1 0x0 PRP2 0x0 00:33:42.152 [2024-12-16 12:54:53.401488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.152 [2024-12-16 12:54:53.401496] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.152 [2024-12-16 12:54:53.401501] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.152 [2024-12-16 12:54:53.401507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100344 len:8 PRP1 0x0 PRP2 0x0 00:33:42.152 [2024-12-16 12:54:53.401513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.152 [2024-12-16 12:54:53.401520] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.152 [2024-12-16 12:54:53.401525] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.152 [2024-12-16 12:54:53.401530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100352 len:8 PRP1 0x0 PRP2 0x0 00:33:42.152 [2024-12-16 12:54:53.401537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.152 [2024-12-16 12:54:53.401543] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.152 [2024-12-16 12:54:53.401548] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.152 [2024-12-16 12:54:53.401553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100360 len:8 PRP1 0x0 PRP2 0x0 00:33:42.152 [2024-12-16 12:54:53.401559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.152 [2024-12-16 12:54:53.401566] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.152 [2024-12-16 12:54:53.401571] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.152 [2024-12-16 12:54:53.401577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100368 len:8 PRP1 0x0 PRP2 0x0 00:33:42.152 [2024-12-16 12:54:53.401583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.152 [2024-12-16 12:54:53.401589] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.152 [2024-12-16 12:54:53.401594] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.152 [2024-12-16 12:54:53.401599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100376 len:8 PRP1 0x0 PRP2 0x0 00:33:42.152 [2024-12-16 12:54:53.401605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.152 [2024-12-16 12:54:53.401612] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.152 [2024-12-16 12:54:53.401616] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.152 [2024-12-16 12:54:53.401621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100384 len:8 PRP1 0x0 PRP2 0x0 00:33:42.152 [2024-12-16 12:54:53.401628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.152 [2024-12-16 12:54:53.401634] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:33:42.152 [2024-12-16 12:54:53.401640] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.153 [2024-12-16 12:54:53.401646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100392 len:8 PRP1 0x0 PRP2 0x0 00:33:42.153 [2024-12-16 12:54:53.401652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.153 [2024-12-16 12:54:53.401659] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.153 [2024-12-16 12:54:53.401665] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.153 [2024-12-16 12:54:53.401670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100400 len:8 PRP1 0x0 PRP2 0x0 00:33:42.153 [2024-12-16 12:54:53.401677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.153 [2024-12-16 12:54:53.401685] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.153 [2024-12-16 12:54:53.401690] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.153 [2024-12-16 12:54:53.401695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100408 len:8 PRP1 0x0 PRP2 0x0 00:33:42.153 [2024-12-16 12:54:53.401701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.153 [2024-12-16 12:54:53.401708] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.153 [2024-12-16 12:54:53.401712] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.153 [2024-12-16 12:54:53.401718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100416 len:8 PRP1 0x0 PRP2 0x0 00:33:42.153 [2024-12-16 12:54:53.401724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.153 [2024-12-16 12:54:53.401731] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.153 [2024-12-16 12:54:53.401736] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.153 [2024-12-16 12:54:53.401741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100424 len:8 PRP1 0x0 PRP2 0x0 00:33:42.153 [2024-12-16 12:54:53.401747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.153 [2024-12-16 12:54:53.401754] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.153 [2024-12-16 12:54:53.401759] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.153 [2024-12-16 12:54:53.401764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100432 len:8 PRP1 0x0 PRP2 0x0 00:33:42.153 [2024-12-16 12:54:53.401770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.153 [2024-12-16 12:54:53.401777] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.153 [2024-12-16 
12:54:53.401782] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.153 [2024-12-16 12:54:53.401787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100440 len:8 PRP1 0x0 PRP2 0x0 00:33:42.153 [2024-12-16 12:54:53.401794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.153 [2024-12-16 12:54:53.401800] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.153 [2024-12-16 12:54:53.401805] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.153 [2024-12-16 12:54:53.401810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100448 len:8 PRP1 0x0 PRP2 0x0 00:33:42.153 [2024-12-16 12:54:53.401816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.153 [2024-12-16 12:54:53.401823] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.153 [2024-12-16 12:54:53.401827] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.153 [2024-12-16 12:54:53.401833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100456 len:8 PRP1 0x0 PRP2 0x0 00:33:42.153 [2024-12-16 12:54:53.401839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.153 [2024-12-16 12:54:53.401847] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.153 [2024-12-16 12:54:53.401852] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.153 [2024-12-16 12:54:53.401857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100464 len:8 PRP1 0x0 PRP2 0x0 00:33:42.153 [2024-12-16 12:54:53.401863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.153 [2024-12-16 12:54:53.401871] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.153 [2024-12-16 12:54:53.401876] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.153 [2024-12-16 12:54:53.401881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100472 len:8 PRP1 0x0 PRP2 0x0 00:33:42.153 [2024-12-16 12:54:53.401887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.153 [2024-12-16 12:54:53.406538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.153 [2024-12-16 12:54:53.406547] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.153 [2024-12-16 12:54:53.406553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100480 len:8 PRP1 0x0 PRP2 0x0 00:33:42.153 [2024-12-16 12:54:53.406561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.153 [2024-12-16 12:54:53.406568] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.153 [2024-12-16 12:54:53.406573] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.153 [2024-12-16 12:54:53.406578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100488 len:8 PRP1 0x0 PRP2 0x0 00:33:42.153 [2024-12-16 12:54:53.406585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.153 [2024-12-16 12:54:53.406591] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.153 [2024-12-16 12:54:53.406596] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.153 [2024-12-16 12:54:53.406601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100496 len:8 PRP1 0x0 PRP2 0x0 00:33:42.153 [2024-12-16 12:54:53.406607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.153 [2024-12-16 12:54:53.406614] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.153 [2024-12-16 12:54:53.406619] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.153 [2024-12-16 12:54:53.406624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100504 len:8 PRP1 0x0 PRP2 0x0 00:33:42.153 [2024-12-16 12:54:53.406630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.153 [2024-12-16 12:54:53.406637] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.153 [2024-12-16 12:54:53.406642] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.153 [2024-12-16 12:54:53.406647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100512 len:8 PRP1 0x0 PRP2 0x0 00:33:42.153 [2024-12-16 12:54:53.406653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.153 [2024-12-16 12:54:53.406660] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.153 [2024-12-16 12:54:53.406665] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.153 [2024-12-16 12:54:53.406670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100136 len:8 PRP1 0x0 PRP2 0x0 00:33:42.153 [2024-12-16 12:54:53.406678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.153 [2024-12-16 12:54:53.406685] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.153 [2024-12-16 12:54:53.406689] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.153 [2024-12-16 12:54:53.406695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100520 len:8 PRP1 0x0 PRP2 0x0 00:33:42.153 [2024-12-16 12:54:53.406701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.153 [2024-12-16 12:54:53.406708] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.153 [2024-12-16 12:54:53.406713] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:33:42.153 [2024-12-16 12:54:53.406718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100528 len:8 PRP1 0x0 PRP2 0x0 00:33:42.153 [2024-12-16 12:54:53.406724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.153 [2024-12-16 12:54:53.406731] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.153 [2024-12-16 12:54:53.406736] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.153 [2024-12-16 12:54:53.406740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100536 len:8 PRP1 0x0 PRP2 0x0 00:33:42.153 [2024-12-16 12:54:53.406747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.153 [2024-12-16 12:54:53.406753] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.154 [2024-12-16 12:54:53.406758] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.154 [2024-12-16 12:54:53.406763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100544 len:8 PRP1 0x0 PRP2 0x0 00:33:42.154 [2024-12-16 12:54:53.406769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.154 [2024-12-16 12:54:53.406775] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.154 [2024-12-16 12:54:53.406780] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.154 [2024-12-16 12:54:53.406785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100552 len:8 PRP1 0x0 PRP2 0x0 00:33:42.154 [2024-12-16 12:54:53.406792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.154 [2024-12-16 12:54:53.406799] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.154 [2024-12-16 12:54:53.406804] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.154 [2024-12-16 12:54:53.406809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100560 len:8 PRP1 0x0 PRP2 0x0 00:33:42.154 [2024-12-16 12:54:53.406815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.154 [2024-12-16 12:54:53.406821] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.154 [2024-12-16 12:54:53.406826] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.154 [2024-12-16 12:54:53.406831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100568 len:8 PRP1 0x0 PRP2 0x0 00:33:42.154 [2024-12-16 12:54:53.406837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.154 [2024-12-16 12:54:53.406843] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.154 [2024-12-16 12:54:53.406849] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.154 
[2024-12-16 12:54:53.406855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100576 len:8 PRP1 0x0 PRP2 0x0 00:33:42.154 [2024-12-16 12:54:53.406861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.154 [2024-12-16 12:54:53.406867] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.154 [2024-12-16 12:54:53.406872] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.154 [2024-12-16 12:54:53.406877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100584 len:8 PRP1 0x0 PRP2 0x0 00:33:42.154 [2024-12-16 12:54:53.406883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.154 [2024-12-16 12:54:53.406889] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.154 [2024-12-16 12:54:53.406894] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.154 [2024-12-16 12:54:53.406899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100592 len:8 PRP1 0x0 PRP2 0x0 00:33:42.154 [2024-12-16 12:54:53.406905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.154 [2024-12-16 12:54:53.406911] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.154 [2024-12-16 12:54:53.406916] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.154 [2024-12-16 12:54:53.406921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100600 len:8 PRP1 0x0 PRP2 0x0 00:33:42.154 [2024-12-16 12:54:53.406927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.154 [2024-12-16 12:54:53.406934] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.154 [2024-12-16 12:54:53.406938] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.154 [2024-12-16 12:54:53.406944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100608 len:8 PRP1 0x0 PRP2 0x0 00:33:42.154 [2024-12-16 12:54:53.406950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.154 [2024-12-16 12:54:53.406956] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.154 [2024-12-16 12:54:53.406961] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.154 [2024-12-16 12:54:53.406966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100616 len:8 PRP1 0x0 PRP2 0x0 00:33:42.154 [2024-12-16 12:54:53.406972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.154 [2024-12-16 12:54:53.406978] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.154 [2024-12-16 12:54:53.406983] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.154 [2024-12-16 12:54:53.406988] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100624 len:8 PRP1 0x0 PRP2 0x0 00:33:42.154 [2024-12-16 12:54:53.406994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.154 [2024-12-16 12:54:53.407001] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.154 [2024-12-16 12:54:53.407005] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.154 [2024-12-16 12:54:53.407011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100632 len:8 PRP1 0x0 PRP2 0x0 00:33:42.154 [2024-12-16 12:54:53.407017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.154 [2024-12-16 12:54:53.407024] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.154 [2024-12-16 12:54:53.407029] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.154 [2024-12-16 12:54:53.407034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100640 len:8 PRP1 0x0 PRP2 0x0 00:33:42.154 [2024-12-16 12:54:53.407040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.154 [2024-12-16 12:54:53.407046] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.154 [2024-12-16 12:54:53.407051] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.154 [2024-12-16 12:54:53.407056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100648 len:8 PRP1 0x0 PRP2 0x0 00:33:42.154 [2024-12-16 12:54:53.407063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.154 [2024-12-16 12:54:53.407069] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.154 [2024-12-16 12:54:53.407074] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.154 [2024-12-16 12:54:53.407079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100656 len:8 PRP1 0x0 PRP2 0x0 00:33:42.154 [2024-12-16 12:54:53.407085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.154 [2024-12-16 12:54:53.407091] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.154 [2024-12-16 12:54:53.407096] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.154 [2024-12-16 12:54:53.407101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100664 len:8 PRP1 0x0 PRP2 0x0 00:33:42.154 [2024-12-16 12:54:53.407107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.154 [2024-12-16 12:54:53.407249] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.154 [2024-12-16 12:54:53.407255] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.154 [2024-12-16 12:54:53.407260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:100672 len:8 PRP1 0x0 PRP2 0x0 00:33:42.154 [2024-12-16 12:54:53.407267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.154 [2024-12-16 12:54:53.407273] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.154 [2024-12-16 12:54:53.407278] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.154 [2024-12-16 12:54:53.407283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100680 len:8 PRP1 0x0 PRP2 0x0 00:33:42.154 [2024-12-16 12:54:53.407289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.154 [2024-12-16 12:54:53.407296] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.154 [2024-12-16 12:54:53.407300] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.154 [2024-12-16 12:54:53.407305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100688 len:8 PRP1 0x0 PRP2 0x0 00:33:42.154 [2024-12-16 12:54:53.407311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.154 [2024-12-16 12:54:53.407318] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.154 [2024-12-16 12:54:53.407322] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.154 [2024-12-16 12:54:53.407328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100696 len:8 PRP1 0x0 PRP2 0x0 00:33:42.154 [2024-12-16 12:54:53.407335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.154 [2024-12-16 12:54:53.407342] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.154 [2024-12-16 12:54:53.407347] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.154 [2024-12-16 12:54:53.407352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100704 len:8 PRP1 0x0 PRP2 0x0 00:33:42.154 [2024-12-16 12:54:53.407358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.154 [2024-12-16 12:54:53.407364] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.154 [2024-12-16 12:54:53.407369] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.154 [2024-12-16 12:54:53.407374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100712 len:8 PRP1 0x0 PRP2 0x0 00:33:42.154 [2024-12-16 12:54:53.407380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.154 [2024-12-16 12:54:53.407387] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.154 [2024-12-16 12:54:53.407391] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.154 [2024-12-16 12:54:53.407397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100720 len:8 PRP1 0x0 PRP2 
0x0 00:33:42.154 [2024-12-16 12:54:53.407403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.154 [2024-12-16 12:54:53.407409] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.154 [2024-12-16 12:54:53.407414] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.154 [2024-12-16 12:54:53.407419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100728 len:8 PRP1 0x0 PRP2 0x0 00:33:42.154 [2024-12-16 12:54:53.407425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.154 [2024-12-16 12:54:53.407431] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.155 [2024-12-16 12:54:53.407436] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.155 [2024-12-16 12:54:53.407441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100736 len:8 PRP1 0x0 PRP2 0x0 00:33:42.155 [2024-12-16 12:54:53.407448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.155 [2024-12-16 12:54:53.407455] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.155 [2024-12-16 12:54:53.407460] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.155 [2024-12-16 12:54:53.407465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100744 len:8 PRP1 0x0 PRP2 0x0 00:33:42.155 [2024-12-16 12:54:53.407471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.155 [2024-12-16 12:54:53.407478] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.155 [2024-12-16 12:54:53.407482] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.155 [2024-12-16 12:54:53.407487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100752 len:8 PRP1 0x0 PRP2 0x0 00:33:42.155 [2024-12-16 12:54:53.407493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.155 [2024-12-16 12:54:53.407500] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.155 [2024-12-16 12:54:53.407505] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.155 [2024-12-16 12:54:53.407511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100760 len:8 PRP1 0x0 PRP2 0x0 00:33:42.155 [2024-12-16 12:54:53.407517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.155 [2024-12-16 12:54:53.407523] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.155 [2024-12-16 12:54:53.407528] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.155 [2024-12-16 12:54:53.407533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100768 len:8 PRP1 0x0 PRP2 0x0 00:33:42.155 [2024-12-16 12:54:53.407539] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.155 [2024-12-16 12:54:53.407546] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.155 [2024-12-16 12:54:53.407551] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.155 [2024-12-16 12:54:53.407556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100776 len:8 PRP1 0x0 PRP2 0x0 00:33:42.155 [2024-12-16 12:54:53.407562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.155
[... 34 further identical abort/manual-completion cycles (12:54:53.407568 - 12:54:53.412877) elided: queued WRITE commands lba:100784-100992 and queued READ commands lba:100144-100192, all sqid:1 cid:0 nsid:1 len:8 PRP1 0x0 PRP2 0x0, each completed as ABORTED - SQ DELETION (00/08) ...]
00:33:42.156 [2024-12-16 12:54:53.412888] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.156 [2024-12-16 12:54:53.412894] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.156 [2024-12-16 12:54:53.412901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101000 len:8 PRP1 0x0 PRP2 0x0 00:33:42.156 [2024-12-16 12:54:53.412910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.156 [2024-12-16 12:54:53.412955] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xccd5b0 was disconnected and freed.
reset controller. 00:33:42.156 [2024-12-16 12:54:53.412966] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:33:42.156 [2024-12-16 12:54:53.412976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:42.156 [2024-12-16 12:54:53.413019] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcaade0 (9): Bad file descriptor 00:33:42.156 [2024-12-16 12:54:53.416732] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:42.156 [2024-12-16 12:54:53.445502] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:33:42.156 11299.00 IOPS, 44.14 MiB/s [2024-12-16T11:55:08.223Z] 11362.33 IOPS, 44.38 MiB/s [2024-12-16T11:55:08.223Z] 11376.50 IOPS, 44.44 MiB/s [2024-12-16T11:55:08.223Z]
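This failover sequence (the active path 10.0.0.2:4420 drops, queued I/O on the dead qpair is aborted and manually completed, the controller resets onto 10.0.0.2:4421, and throughput resumes at ~11.3k IOPS) appears to be the bdev_nvme multipath behavior this test exercises. As a minimal sketch only, such a two-path controller could be attached with SPDK's rpc.py roughly as follows; the bdev name "Nvme0" and the "-x failover" multipath mode are assumptions, while the addresses and subsystem NQN are taken from the log above:

  # primary path (assumed bdev name "Nvme0"); becomes the initial active path
  ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # alternate path to the same subsystem, assumed registered for failover via -x
  ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover

With a setup like this, a qpair disconnect such as the "Failed to flush tqpair ... Bad file descriptor" error above triggers exactly this abort-then-reset cycle onto the surviving path.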
[2024-12-16 12:54:57.070927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:48368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.156 [2024-12-16 12:54:57.070962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.156
[... 89 further ABORTED - SQ DELETION (00/08) completions (12:54:57.070977 - 12:54:57.072264) elided: in-flight READ commands lba:48376-48440 (len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE commands lba:48448-49080 (len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000), various cids on sqid:1 ...]
[... 23 further abort/manual-completion cycles (12:54:57.072283 - 12:54:57.072810) elided: queued WRITE commands lba:49088-49264, all sqid:1 cid:0 nsid:1 len:8 PRP1 0x0 PRP2 0x0, each completed as ABORTED - SQ DELETION (00/08) ...]
00:33:42.160 [2024-12-16 12:54:57.072817] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.160 [2024-12-16 12:54:57.072822] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.160 [2024-12-16 12:54:57.072827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49272 len:8 PRP1 0x0 PRP2 0x0
00:33:42.160 [2024-12-16 12:54:57.072834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.160 [2024-12-16 12:54:57.072840] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.160 [2024-12-16 12:54:57.072845] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.160 [2024-12-16 12:54:57.072850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49280 len:8 PRP1 0x0 PRP2 0x0 00:33:42.160 [2024-12-16 12:54:57.072856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.160 [2024-12-16 12:54:57.072862] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.160 [2024-12-16 12:54:57.072867] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.160 [2024-12-16 12:54:57.072872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49288 len:8 PRP1 0x0 PRP2 0x0 00:33:42.160 [2024-12-16 12:54:57.072878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.160 [2024-12-16 12:54:57.072885] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.160 [2024-12-16 12:54:57.072890] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.160 [2024-12-16 12:54:57.072895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49296 len:8 PRP1 0x0 PRP2 0x0 00:33:42.160 [2024-12-16 12:54:57.072901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.160 [2024-12-16 12:54:57.072907] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.160 [2024-12-16 12:54:57.072912] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.160 [2024-12-16 12:54:57.072917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49304 len:8 PRP1 0x0 PRP2 0x0 00:33:42.160 [2024-12-16 12:54:57.072924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.160 [2024-12-16 12:54:57.072930] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.160 [2024-12-16 12:54:57.072935] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.160 [2024-12-16 12:54:57.072940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49312 len:8 PRP1 0x0 PRP2 0x0 00:33:42.160 [2024-12-16 12:54:57.072946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.160 [2024-12-16 12:54:57.072952] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.160 [2024-12-16 12:54:57.072959] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.160 [2024-12-16 12:54:57.072964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49320 len:8 PRP1 0x0 PRP2 0x0 00:33:42.160 [2024-12-16 12:54:57.072970] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.160 [2024-12-16 12:54:57.072977] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.160 [2024-12-16 12:54:57.072982] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.160 [2024-12-16 12:54:57.072987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49328 len:8 PRP1 0x0 PRP2 0x0 00:33:42.160 [2024-12-16 12:54:57.072993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.160 [2024-12-16 12:54:57.072999] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.160 [2024-12-16 12:54:57.073004] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.160 [2024-12-16 12:54:57.073013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49336 len:8 PRP1 0x0 PRP2 0x0 00:33:42.160 [2024-12-16 12:54:57.073020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.160 [2024-12-16 12:54:57.073026] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.160 [2024-12-16 12:54:57.073031] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.160 [2024-12-16 12:54:57.073036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49344 len:8 PRP1 0x0 PRP2 0x0 00:33:42.160 [2024-12-16 12:54:57.073043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.160 [2024-12-16 12:54:57.073049] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.160 [2024-12-16 12:54:57.073054] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.160 [2024-12-16 12:54:57.073059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49352 len:8 PRP1 0x0 PRP2 0x0 00:33:42.160 [2024-12-16 12:54:57.073065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.160 [2024-12-16 12:54:57.073072] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.160 [2024-12-16 12:54:57.073077] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.160 [2024-12-16 12:54:57.073082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49360 len:8 PRP1 0x0 PRP2 0x0 00:33:42.160 [2024-12-16 12:54:57.073088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.160 [2024-12-16 12:54:57.073094] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.160 [2024-12-16 12:54:57.073100] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.160 [2024-12-16 12:54:57.073105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49368 len:8 PRP1 0x0 PRP2 0x0 00:33:42.160 [2024-12-16 12:54:57.073111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.160 [2024-12-16 12:54:57.073122] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.160 [2024-12-16 12:54:57.073127] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.160 [2024-12-16 12:54:57.073132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49376 len:8 PRP1 0x0 PRP2 0x0 00:33:42.160 [2024-12-16 12:54:57.073138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.160 [2024-12-16 12:54:57.073146] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.160 [2024-12-16 12:54:57.073154] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.160 [2024-12-16 12:54:57.073159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49384 len:8 PRP1 0x0 PRP2 0x0 00:33:42.160 [2024-12-16 12:54:57.073165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.160 [2024-12-16 12:54:57.073205] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xcd6a10 was disconnected and freed. reset controller. 00:33:42.160 [2024-12-16 12:54:57.073214] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:33:42.160 [2024-12-16 12:54:57.073233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:42.160 [2024-12-16 12:54:57.073241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.160 [2024-12-16 12:54:57.073249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:42.160 [2024-12-16 12:54:57.073255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.160 [2024-12-16 12:54:57.073262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:42.160 [2024-12-16 12:54:57.073268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.160 [2024-12-16 12:54:57.073275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:42.160 [2024-12-16 12:54:57.073281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.160 [2024-12-16 12:54:57.073288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:42.160 [2024-12-16 12:54:57.073308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcaade0 (9): Bad file descriptor 00:33:42.160 [2024-12-16 12:54:57.076033] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:42.160 [2024-12-16 12:54:57.231738] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
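What the storm above shows: when the active TCP path drops, every command still queued on I/O qpair 1 is completed in software with NVMe status ABORTED - SQ DELETION before bdev_nvme frees the qpair and rotates to the next path. A minimal shell sketch for tallying those events from a saved copy of this console output (the build.log filename is an assumption; the patterns are copied verbatim from the messages above; grep -o is used for the abort count because several records share a physical line in this log):

    #!/usr/bin/env bash
    # Tally failover-related events in a saved copy of this console log.
    # build.log is a hypothetical file name for the captured output.
    log=build.log

    # One match per aborted command; -o counts every occurrence even when
    # multiple log records were flattened onto one line.
    aborts=$(grep -o 'ABORTED - SQ DELETION' "$log" | wc -l)

    # One line per path rotation and per completed controller reset.
    failovers=$(grep -c 'Start failover from' "$log")
    resets=$(grep -c 'Resetting controller successful' "$log")

    echo "aborted commands: $aborts, failovers: $failovers, successful resets: $resets"

The last of these greps is exactly the check failover.sh runs near the end of this section to assert that every failover round recovered.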
00:33:42.160 11010.20 IOPS, 43.01 MiB/s [2024-12-16T11:55:08.227Z] 11073.83 IOPS, 43.26 MiB/s [2024-12-16T11:55:08.227Z] 11139.00 IOPS, 43.51 MiB/s [2024-12-16T11:55:08.227Z] 11159.12 IOPS, 43.59 MiB/s [2024-12-16T11:55:08.227Z] 11195.11 IOPS, 43.73 MiB/s
00:33:42.160 [2024-12-16 12:55:01.506858-508895] nvme_qpair.c: (second teardown of I/O qpair 1: in-flight READ commands lba:105664-105768 and WRITE commands lba:105832-106600, each printed by 243:nvme_io_qpair_print_command with SGL TRANSPORT DATA BLOCK or SGL DATA BLOCK OFFSET descriptors and completed by 474:spdk_nvme_print_completion with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; queued WRITEs lba:106608-106680 and READs lba:105776-105824 are additionally logged by 579:nvme_qpair_abort_queued_reqs *ERROR*: aborting queued i/o and 558:nvme_qpair_manual_complete_request before receiving the same abort completion)
00:33:42.164 [2024-12-16 12:55:01.508933] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xcd8460 was disconnected and freed. reset controller.
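Both teardown passes finish every command with the same status pair, (00/08): in NVMe terms, status code type 0x0 (Generic Command Status) with status code 0x08, Command Aborted due to SQ Deletion. A small illustrative decoder for the suffix SPDK prints on each completion (decode_status is a hypothetical helper, not an SPDK function, and only the codes seen in this log are mapped):

    #!/usr/bin/env bash
    # Decode the "(sct/sc)" pair printed after each completion above.
    decode_status() {
        local sct=$1 sc=$2
        case "$sct/$sc" in
            00/00) echo 'SUCCESS' ;;                 # generic status, success
            00/08) echo 'ABORTED - SQ DELETION' ;;   # generic status, sc 0x08
            *)     echo "unmapped status sct=0x$sct sc=0x$sc" ;;
        esac
    }

    decode_status 00 08   # prints: ABORTED - SQ DELETION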
00:33:42.164 [2024-12-16 12:55:01.508942] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:33:42.164 [2024-12-16 12:55:01.508962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:42.164 [2024-12-16 12:55:01.508970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.164 [2024-12-16 12:55:01.508978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:42.164 [2024-12-16 12:55:01.508984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.164 [2024-12-16 12:55:01.508991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:42.164 [2024-12-16 12:55:01.508997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.164 [2024-12-16 12:55:01.509004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:42.164 [2024-12-16 12:55:01.509013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.164 [2024-12-16 12:55:01.509020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:42.164 [2024-12-16 12:55:01.509042] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcaade0 (9): Bad file descriptor 00:33:42.164 [2024-12-16 12:55:01.511772] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:42.164 [2024-12-16 12:55:01.668558] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
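The "Start failover from 10.0.0.2:4422 to 10.0.0.2:4420" notice above is bdev_nvme cycling to the next registered trid; the xtrace below is what registers those paths in the first place. A condensed reconstruction of that RPC sequence, assuming the repo-relative scripts/rpc.py and the /var/tmp/bdevperf.sock socket used in this run:

# Publish two extra listeners on the target side:
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
# Attach the same controller name at all three trids so bdev_nvme has failover paths:
for port in 4420 4421 4422; do
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
done
# Detaching the active path forces a failover to the next registered trid:
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller \
    NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1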
00:33:42.164 11051.60 IOPS, 43.17 MiB/s [2024-12-16T11:55:08.231Z] 11094.64 IOPS, 43.34 MiB/s [2024-12-16T11:55:08.231Z] 11134.50 IOPS, 43.49 MiB/s [2024-12-16T11:55:08.231Z] 11159.85 IOPS, 43.59 MiB/s [2024-12-16T11:55:08.231Z] 11183.50 IOPS, 43.69 MiB/s [2024-12-16T11:55:08.231Z] 11201.67 IOPS, 43.76 MiB/s 00:33:42.164 Latency(us) 00:33:42.164 [2024-12-16T11:55:08.231Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:42.164 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:42.164 Verification LBA range: start 0x0 length 0x4000 00:33:42.164 NVMe0n1 : 15.01 11201.78 43.76 1069.40 0.00 10409.61 423.25 19348.72 00:33:42.164 [2024-12-16T11:55:08.231Z] =================================================================================================================== 00:33:42.164 [2024-12-16T11:55:08.231Z] Total : 11201.78 43.76 1069.40 0.00 10409.61 423.25 19348.72 00:33:42.164 Received shutdown signal, test time was about 15.000000 seconds 00:33:42.164 00:33:42.164 Latency(us) 00:33:42.164 [2024-12-16T11:55:08.231Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:42.164 [2024-12-16T11:55:08.231Z] =================================================================================================================== 00:33:42.164 [2024-12-16T11:55:08.231Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:42.164 12:55:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:33:42.164 12:55:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:33:42.164 12:55:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:33:42.164 12:55:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=525487 00:33:42.164 12:55:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:33:42.164 12:55:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 525487 /var/tmp/bdevperf.sock 00:33:42.164 12:55:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 525487 ']' 00:33:42.164 12:55:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:42.164 12:55:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:42.164 12:55:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:42.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
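The count check above requires exactly three "Resetting controller successful" notices, one per forced path switch, before the test relaunches bdevperf for the next phase. A minimal standalone sketch of the same assertion, assuming the bdevperf output was captured in try.txt:

count=$(grep -c 'Resetting controller successful' try.txt)
if (( count != 3 )); then
    echo "expected 3 successful controller resets, got $count" >&2
    exit 1
fi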
00:33:42.164 12:55:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:42.164 12:55:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:42.164 12:55:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:42.164 12:55:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:33:42.164 12:55:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:42.164 [2024-12-16 12:55:08.026464] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:42.164 12:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:42.424 [2024-12-16 12:55:08.223043] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:33:42.424 12:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:42.683 NVMe0n1 00:33:42.683 12:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:42.942 00:33:42.942 12:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:43.200 00:33:43.200 12:55:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:43.200 12:55:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:33:43.459 12:55:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:43.717 12:55:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:33:47.006 12:55:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:47.006 12:55:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:33:47.006 12:55:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=526325 00:33:47.006 12:55:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:47.006 12:55:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 526325 00:33:47.943 { 00:33:47.943 "results": [ 00:33:47.943 { 00:33:47.943 "job": "NVMe0n1", 00:33:47.943 "core_mask": "0x1", 00:33:47.943 "workload": "verify", 00:33:47.943 
"status": "finished", 00:33:47.943 "verify_range": { 00:33:47.943 "start": 0, 00:33:47.943 "length": 16384 00:33:47.943 }, 00:33:47.943 "queue_depth": 128, 00:33:47.943 "io_size": 4096, 00:33:47.943 "runtime": 1.010082, 00:33:47.943 "iops": 11252.551773024368, 00:33:47.943 "mibps": 43.95528036337644, 00:33:47.943 "io_failed": 0, 00:33:47.943 "io_timeout": 0, 00:33:47.943 "avg_latency_us": 11332.138648433507, 00:33:47.943 "min_latency_us": 1599.3904761904762, 00:33:47.943 "max_latency_us": 13294.445714285714 00:33:47.943 } 00:33:47.943 ], 00:33:47.943 "core_count": 1 00:33:47.943 } 00:33:47.943 12:55:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:47.943 [2024-12-16 12:55:07.657794] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:33:47.943 [2024-12-16 12:55:07.657849] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid525487 ] 00:33:47.943 [2024-12-16 12:55:07.726829] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:47.944 [2024-12-16 12:55:07.762585] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:33:47.944 [2024-12-16 12:55:09.567169] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:33:47.944 [2024-12-16 12:55:09.567214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:47.944 [2024-12-16 12:55:09.567225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:47.944 [2024-12-16 12:55:09.567233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:47.944 [2024-12-16 12:55:09.567248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:47.944 [2024-12-16 12:55:09.567256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:47.944 [2024-12-16 12:55:09.567263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:47.944 [2024-12-16 12:55:09.567270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:47.944 [2024-12-16 12:55:09.567277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:47.944 [2024-12-16 12:55:09.567284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.944 [2024-12-16 12:55:09.567309] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.944 [2024-12-16 12:55:09.567323] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22a3de0 (9): Bad file descriptor 00:33:47.944 [2024-12-16 12:55:09.616224] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:33:47.944 Running I/O for 1 seconds... 
00:33:47.944 11222.00 IOPS, 43.84 MiB/s 00:33:47.944 Latency(us) 00:33:47.944 [2024-12-16T11:55:14.011Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:47.944 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:47.944 Verification LBA range: start 0x0 length 0x4000 00:33:47.944 NVMe0n1 : 1.01 11252.55 43.96 0.00 0.00 11332.14 1599.39 13294.45 00:33:47.944 [2024-12-16T11:55:14.011Z] =================================================================================================================== 00:33:47.944 [2024-12-16T11:55:14.011Z] Total : 11252.55 43.96 0.00 0.00 11332.14 1599.39 13294.45 00:33:47.944 12:55:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:47.944 12:55:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:33:48.203 12:55:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:48.462 12:55:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:48.462 12:55:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:33:48.462 12:55:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:48.721 12:55:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:33:52.010 12:55:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:33:52.010 12:55:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:52.010 12:55:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 525487 00:33:52.010 12:55:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 525487 ']' 00:33:52.010 12:55:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 525487 00:33:52.010 12:55:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:33:52.010 12:55:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:52.010 12:55:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 525487 00:33:52.010 12:55:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:52.010 12:55:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:52.010 12:55:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 525487' 00:33:52.010 killing process with pid 525487 00:33:52.010 12:55:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 525487 00:33:52.010 12:55:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 525487 00:33:52.269 12:55:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- 
# sync 00:33:52.269 12:55:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:52.528 12:55:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:33:52.528 12:55:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:52.528 12:55:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:33:52.528 12:55:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # nvmfcleanup 00:33:52.529 12:55:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:33:52.529 12:55:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:52.529 12:55:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:33:52.529 12:55:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:52.529 12:55:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:52.529 rmmod nvme_tcp 00:33:52.529 rmmod nvme_fabrics 00:33:52.529 rmmod nvme_keyring 00:33:52.529 12:55:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:52.529 12:55:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:33:52.529 12:55:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:33:52.529 12:55:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@513 -- # '[' -n 522733 ']' 00:33:52.529 12:55:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # killprocess 522733 00:33:52.529 12:55:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 522733 ']' 00:33:52.529 12:55:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 522733 00:33:52.529 12:55:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:33:52.529 12:55:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:52.529 12:55:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 522733 00:33:52.529 12:55:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:52.529 12:55:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:52.529 12:55:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 522733' 00:33:52.529 killing process with pid 522733 00:33:52.529 12:55:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 522733 00:33:52.529 12:55:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 522733 00:33:52.787 12:55:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:33:52.787 12:55:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:33:52.787 12:55:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:33:52.787 12:55:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:33:52.787 12:55:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-save 00:33:52.787 12:55:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:33:52.787 12:55:18 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@787 -- # iptables-restore 00:33:52.787 12:55:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:52.787 12:55:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:52.787 12:55:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:52.787 12:55:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:52.787 12:55:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:54.692 12:55:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:54.692 00:33:54.692 real 0m37.245s 00:33:54.692 user 1m57.626s 00:33:54.692 sys 0m7.887s 00:33:54.692 12:55:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:54.692 12:55:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:54.692 ************************************ 00:33:54.692 END TEST nvmf_failover 00:33:54.692 ************************************ 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.953 ************************************ 00:33:54.953 START TEST nvmf_host_discovery 00:33:54.953 ************************************ 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:33:54.953 * Looking for test storage... 
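The nvmftestfini sequence above unwinds everything the setup created. A hedged sketch of that unwind; the namespace teardown inside _remove_spdk_ns is an assumption, while the other commands appear verbatim in the trace:

# Unload the host-side NVMe transport modules (matches the rmmod output above):
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
# Drop the SPDK_NVMF-tagged firewall rules added during setup:
iptables-save | grep -v SPDK_NVMF | iptables-restore
# Remove the target network namespace and flush the initiator address (names from this run):
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1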
00:33:54.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:54.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:54.953 --rc genhtml_branch_coverage=1 00:33:54.953 --rc genhtml_function_coverage=1 00:33:54.953 --rc genhtml_legend=1 00:33:54.953 --rc geninfo_all_blocks=1 00:33:54.953 --rc geninfo_unexecuted_blocks=1 00:33:54.953 00:33:54.953 ' 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:54.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:54.953 --rc genhtml_branch_coverage=1 00:33:54.953 --rc genhtml_function_coverage=1 00:33:54.953 --rc genhtml_legend=1 00:33:54.953 --rc geninfo_all_blocks=1 00:33:54.953 --rc geninfo_unexecuted_blocks=1 00:33:54.953 00:33:54.953 ' 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:54.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:54.953 --rc genhtml_branch_coverage=1 00:33:54.953 --rc genhtml_function_coverage=1 00:33:54.953 --rc genhtml_legend=1 00:33:54.953 --rc geninfo_all_blocks=1 00:33:54.953 --rc geninfo_unexecuted_blocks=1 00:33:54.953 00:33:54.953 ' 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:54.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:54.953 --rc genhtml_branch_coverage=1 00:33:54.953 --rc genhtml_function_coverage=1 00:33:54.953 --rc genhtml_legend=1 00:33:54.953 --rc geninfo_all_blocks=1 00:33:54.953 --rc geninfo_unexecuted_blocks=1 00:33:54.953 00:33:54.953 ' 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:33:54.953 12:55:20 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:33:54.953 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:54.954 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:54.954 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:54.954 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:54.954 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:54.954 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:54.954 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:33:54.954 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:54.954 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:33:54.954 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:54.954 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:54.954 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:54.954 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:54.954 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:54.954 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:54.954 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:54.954 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:54.954 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:54.954 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:54.954 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:33:54.954 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:33:54.954 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:33:54.954 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:33:54.954 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:33:54.954 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:33:54.954 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:33:54.954 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:33:54.954 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:54.954 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # prepare_net_devs 00:33:54.954 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@434 -- # local -g is_hw=no 00:33:54.954 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # remove_spdk_ns 00:33:54.954 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:54.954 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:54.954 12:55:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:54.954 12:55:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:33:54.954 12:55:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:33:54.954 12:55:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:33:54.954 12:55:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:01.526 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:01.526 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:34:01.526 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:01.526 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:01.526 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:01.526 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:01.526 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:01.526 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:34:01.526 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:01.526 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:34:01.526 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:34:01.526 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:34:01.526 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:34:01.526 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:34:01.526 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:34:01.526 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:01.526 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:01.526 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:01.526 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:01.526 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:01.526 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:01.526 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:01.526 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:01.526 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:01.526 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:01.527 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:01.527 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:01.527 Found net devices under 0000:af:00.0: cvl_0_0 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:01.527 Found net devices under 0000:af:00.1: cvl_0_1 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # is_hw=yes 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:01.527 12:55:26 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:01.527 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:01.527 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:34:01.527 00:34:01.527 --- 10.0.0.2 ping statistics --- 00:34:01.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:01.527 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:01.527 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:01.527 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:34:01.527 00:34:01.527 --- 10.0.0.1 ping statistics --- 00:34:01.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:01.527 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # return 0 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # nvmfpid=530686 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # waitforlisten 530686 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 530686 ']' 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:01.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:01.527 12:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:01.527 [2024-12-16 12:55:26.897560] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:34:01.527 [2024-12-16 12:55:26.897610] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:01.527 [2024-12-16 12:55:26.968837] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:01.527 [2024-12-16 12:55:27.006125] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:01.527 [2024-12-16 12:55:27.006165] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:01.527 [2024-12-16 12:55:27.006174] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:01.528 [2024-12-16 12:55:27.006181] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:01.528 [2024-12-16 12:55:27.006187] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:01.528 [2024-12-16 12:55:27.006210] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:01.528 [2024-12-16 12:55:27.142722] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:01.528 [2024-12-16 12:55:27.154932] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:01.528 null0 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:01.528 null1 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=530705 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 530705 /tmp/host.sock 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 530705 ']' 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:34:01.528 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:01.528 [2024-12-16 12:55:27.230931] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:34:01.528 [2024-12-16 12:55:27.230970] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid530705 ] 00:34:01.528 [2024-12-16 12:55:27.297649] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:01.528 [2024-12-16 12:55:27.337982] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 
-- # sort 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:01.528 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.787 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:34:01.787 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:34:01.787 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:01.787 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:01.787 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.787 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:01.787 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:01.787 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:01.787 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.787 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:34:01.787 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:34:01.787 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.787 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:01.787 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.787 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:34:01.787 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r 
'.[].name' 00:34:01.787 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:01.787 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.787 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:01.787 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:01.787 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:01.787 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.787 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:34:01.787 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:34:01.788 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:01.788 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.788 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:01.788 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:01.788 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:01.788 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:01.788 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.788 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:34:01.788 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:01.788 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.788 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:01.788 [2024-12-16 12:55:27.740434] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:01.788 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.788 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:34:01.788 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:01.788 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:01.788 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.788 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:01.788 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:01.788 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:01.788 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.788 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:34:01.788 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:34:01.788 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:01.788 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:01.788 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.788 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:01.788 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:01.788 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:01.788 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.788 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:34:01.788 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:34:01.788 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:01.788 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:01.788 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:01.788 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:01.788 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:01.788 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:01.788 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:34:01.788 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:34:01.788 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:01.788 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.788 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:02.047 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.047 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:02.047 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:34:02.047 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:34:02.047 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:02.047 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:34:02.047 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.047 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:02.047 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.047 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:02.047 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:02.047 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:02.047 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:02.047 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:02.047 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:34:02.047 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:02.047 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:02.047 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.047 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:02.047 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:02.047 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:02.047 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.047 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:34:02.047 12:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:34:02.615 [2024-12-16 12:55:28.491518] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:02.615 [2024-12-16 12:55:28.491541] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:02.615 [2024-12-16 12:55:28.491553] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:02.615 
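The waitforcondition / local max=10 / (( max-- )) / eval / sleep 1 lines threaded through the trace above are all one polling helper. Reconstructed from the common/autotest_common.sh@914-@920 xtrace as a sketch (not the verbatim source):

    # Poll an arbitrary bash condition up to 10 times, one second apart.
    waitforcondition() {
        local cond=$1
        local max=10
        while ((max--)); do
            # e.g. cond='[[ "$(get_subsystem_names)" == "nvme0" ]]'
            eval "$cond" && return 0
            sleep 1
        done
        return 1  # condition never became true
    }

Each "# sleep 1" in the trace is therefore one failed probe of the condition, as in the retry just above while the discovery controller was still attaching.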
[2024-12-16 12:55:28.617932] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:34:02.874 [2024-12-16 12:55:28.722712] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:02.874 [2024-12-16 12:55:28.722730] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:03.134 12:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:03.134 12:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:03.134 12:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:34:03.134 12:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:03.134 12:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:03.134 12:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.134 12:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:03.134 12:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:03.134 12:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:03.134 12:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.134 12:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:03.134 12:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:03.134 12:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:34:03.134 12:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:34:03.134 12:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:03.134 12:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:03.134 12:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:34:03.134 12:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:34:03.134 12:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:03.134 12:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:03.134 12:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:03.134 12:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.134 12:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:03.134 12:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:03.134 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.134 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
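At this point discovery has attached controller nvme0 and its namespace shows up as bdev nvme0n1. Condensed from the xtrace above, the sequence that produced it looks like the following sketch (addresses, ports and NQNs copied from the log; rpc_cmd is the autotest RPC wrapper, with -s /tmp/host.sock addressing the host app and the bare form addressing the target):

    # Host side: enable bdev_nvme debug logging, then start the discovery client
    # against the target's discovery listener on port 8009.
    rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 \
        -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

    # Target side: subsystem, namespace, data-path listener, then allow the host NQN.
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test

    # Once the host NQN is allowed, the discovery log page reports the 4420 path
    # and the host attaches (waitforcondition as sketched above).
    waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
    waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'

The subsystem is not exposed to nqn.2021-12.io.spdk:test until nvmf_subsystem_add_host runs, which is why the first wait above needed a sleep before the attach messages appeared.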
00:34:03.134 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:03.134 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:34:03.134 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:34:03.134 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:03.134 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:03.134 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:34:03.134 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:34:03.134 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:03.134 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:03.134 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.134 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:03.134 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:03.134 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:03.134 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.134 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:34:03.134 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:03.134 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:34:03.134 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:34:03.134 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:03.135 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:03.135 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:03.135 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:03.135 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:03.135 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:34:03.135 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:34:03.135 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:03.135 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.135 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:03.135 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.135 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:34:03.135 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:34:03.135 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:34:03.135 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:03.135 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:34:03.135 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.135 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:03.135 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.135 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:03.135 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:03.135 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:03.135 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:03.135 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:03.135 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:34:03.135 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:03.135 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:03.135 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.135 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:03.135 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:03.135 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:03.394 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.394 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:03.394 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:03.394 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:34:03.394 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:34:03.394 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:03.394 12:55:29 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:03.394 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:03.394 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:03.394 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:03.394 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:34:03.394 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:34:03.394 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:03.394 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.394 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:03.394 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.653 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:34:03.653 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:03.653 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:34:03.653 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:03.653 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:34:03.653 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.653 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:03.653 [2024-12-16 12:55:29.469051] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:03.653 [2024-12-16 12:55:29.470001] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:34:03.653 [2024-12-16 12:55:29.470022] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:03.653 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.653 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:03.654 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:03.654 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:03.654 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:03.654 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:03.654 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:34:03.654 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:34:03.654 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:03.654 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.654 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:03.654 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:03.654 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:03.654 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.654 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:03.654 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:03.654 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:03.654 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:03.654 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:03.654 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:03.654 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:03.654 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:34:03.654 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:03.654 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:03.654 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:03.654 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.654 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:03.654 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:03.654 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.654 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:03.654 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:03.654 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:34:03.654 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:34:03.654 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:03.654 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:03.654 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:34:03.654 12:55:29 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:34:03.654 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:03.654 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:03.654 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.654 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:03.654 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:03.654 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:03.654 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.654 [2024-12-16 12:55:29.597733] bdev_nvme.c:7086:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:34:03.654 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:34:03.654 12:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:34:03.913 [2024-12-16 12:55:29.856949] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:03.913 [2024-12-16 12:55:29.856966] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:03.913 [2024-12-16 12:55:29.856971] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:04.851 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:04.851 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:34:04.851 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:34:04.851 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:04.851 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:04.851 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.851 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:04.851 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:04.851 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:04.851 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.851 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:34:04.851 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:04.851 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:34:04.851 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:04.851 12:55:30 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:04.851 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:04.851 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:04.851 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:04.851 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:04.851 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:34:04.851 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:04.851 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:04.851 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.851 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:04.851 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.851 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:04.851 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:04.851 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:34:04.851 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:04.851 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:04.851 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.851 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:04.852 [2024-12-16 12:55:30.705324] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:34:04.852 [2024-12-16 12:55:30.705350] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:04.852 [2024-12-16 12:55:30.707004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:04.852 [2024-12-16 12:55:30.707024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.852 [2024-12-16 12:55:30.707034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:04.852 [2024-12-16 12:55:30.707041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.852 [2024-12-16 12:55:30.707050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:04.852 [2024-12-16 12:55:30.707056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.852 [2024-12-16 12:55:30.707064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:04.852 [2024-12-16 12:55:30.707071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.852 [2024-12-16 12:55:30.707079] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c7aa80 is same with the state(6) to be set 00:34:04.852 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.852 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:04.852 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:04.852 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:04.852 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:04.852 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:04.852 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:34:04.852 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:04.852 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:04.852 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.852 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:04.852 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:04.852 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:04.852 [2024-12-16 12:55:30.717016] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c7aa80 (9): Bad file descriptor 00:34:04.852 [2024-12-16 12:55:30.727053] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:04.852 [2024-12-16 12:55:30.727239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.852 [2024-12-16 12:55:30.727254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c7aa80 with addr=10.0.0.2, port=4420 00:34:04.852 [2024-12-16 12:55:30.727263] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c7aa80 is same with the state(6) to be set 00:34:04.852 [2024-12-16 12:55:30.727274] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c7aa80 (9): Bad file descriptor 00:34:04.852 [2024-12-16 12:55:30.727284] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:04.852 [2024-12-16 12:55:30.727291] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:04.852 [2024-12-16 12:55:30.727299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:34:04.852 [2024-12-16 12:55:30.727310] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:04.852 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.852 [2024-12-16 12:55:30.737124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:04.852 [2024-12-16 12:55:30.737244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.852 [2024-12-16 12:55:30.737256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c7aa80 with addr=10.0.0.2, port=4420 00:34:04.852 [2024-12-16 12:55:30.737264] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c7aa80 is same with the state(6) to be set 00:34:04.852 [2024-12-16 12:55:30.737274] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c7aa80 (9): Bad file descriptor 00:34:04.852 [2024-12-16 12:55:30.737283] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:04.852 [2024-12-16 12:55:30.737289] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:04.852 [2024-12-16 12:55:30.737296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:04.852 [2024-12-16 12:55:30.737306] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:04.852 [2024-12-16 12:55:30.747175] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:04.852 [2024-12-16 12:55:30.747354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.852 [2024-12-16 12:55:30.747366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c7aa80 with addr=10.0.0.2, port=4420 00:34:04.852 [2024-12-16 12:55:30.747373] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c7aa80 is same with the state(6) to be set 00:34:04.852 [2024-12-16 12:55:30.747383] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c7aa80 (9): Bad file descriptor 00:34:04.852 [2024-12-16 12:55:30.747393] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:04.852 [2024-12-16 12:55:30.747399] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:04.852 [2024-12-16 12:55:30.747406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:04.852 [2024-12-16 12:55:30.747419] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
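The repeated resetting controller / connect() failed, errno = 111 blocks here are the expected fallout of removing the 4420 listener (host/discovery.sh@127 above): errno 111 is ECONNREFUSED, and the bdev_nvme reconnect path keeps retrying the stale 10.0.0.2:4420 connection until the refreshed discovery log page drops it ("4420 not found ... 4421 found again" further down), leaving only the 4421 path. The path check the test converges on can be read straight off the @63 xtrace; roughly:

    # Ports (trsvcid) of every path on controller nvme0; expected to become just "4421".
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs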
00:34:04.852 [2024-12-16 12:55:30.757227] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:04.852 [2024-12-16 12:55:30.757340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.852 [2024-12-16 12:55:30.757352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c7aa80 with addr=10.0.0.2, port=4420 00:34:04.852 [2024-12-16 12:55:30.757360] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c7aa80 is same with the state(6) to be set 00:34:04.852 [2024-12-16 12:55:30.757370] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c7aa80 (9): Bad file descriptor 00:34:04.852 [2024-12-16 12:55:30.757379] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:04.852 [2024-12-16 12:55:30.757385] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:04.852 [2024-12-16 12:55:30.757392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:04.852 [2024-12-16 12:55:30.757401] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:04.852 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:04.852 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:04.852 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:04.852 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:04.852 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:04.852 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:04.852 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:04.852 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:34:04.852 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:04.852 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:04.852 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.852 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:04.852 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:04.852 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:04.852 [2024-12-16 12:55:30.767280] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:04.852 [2024-12-16 12:55:30.767397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.852 [2024-12-16 12:55:30.767408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c7aa80 with addr=10.0.0.2, port=4420 00:34:04.852 [2024-12-16 12:55:30.767416] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1c7aa80 is same with the state(6) to be set 00:34:04.852 [2024-12-16 12:55:30.767426] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c7aa80 (9): Bad file descriptor 00:34:04.852 [2024-12-16 12:55:30.767435] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:04.852 [2024-12-16 12:55:30.767441] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:04.852 [2024-12-16 12:55:30.767448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:04.852 [2024-12-16 12:55:30.767458] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:04.852 [2024-12-16 12:55:30.777329] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:04.852 [2024-12-16 12:55:30.777443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.852 [2024-12-16 12:55:30.777456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c7aa80 with addr=10.0.0.2, port=4420 00:34:04.852 [2024-12-16 12:55:30.777464] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c7aa80 is same with the state(6) to be set 00:34:04.852 [2024-12-16 12:55:30.777474] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c7aa80 (9): Bad file descriptor 00:34:04.852 [2024-12-16 12:55:30.777484] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:04.852 [2024-12-16 12:55:30.777490] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:04.852 [2024-12-16 12:55:30.777497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:04.852 [2024-12-16 12:55:30.777506] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:04.852 [2024-12-16 12:55:30.787383] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:04.852 [2024-12-16 12:55:30.787544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.852 [2024-12-16 12:55:30.787557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c7aa80 with addr=10.0.0.2, port=4420 00:34:04.853 [2024-12-16 12:55:30.787564] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c7aa80 is same with the state(6) to be set 00:34:04.853 [2024-12-16 12:55:30.787574] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c7aa80 (9): Bad file descriptor 00:34:04.853 [2024-12-16 12:55:30.787589] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:04.853 [2024-12-16 12:55:30.787596] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:04.853 [2024-12-16 12:55:30.787602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:04.853 [2024-12-16 12:55:30.787611] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
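The notification_count / notify_id bookkeeping at host/discovery.sh@74-@75 (here and at each earlier is_notification_count_eq check) is consistent with a cursor-style helper along these lines; a reconstruction from the xtrace, not guaranteed to match the script verbatim:

    # Count notifications newer than the current cursor, then advance it.
    # Matches the values in the trace: notify_id goes 0 -> 1 -> 2 -> 2 -> 4.
    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications \
            -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

Removing the 4420 listener produces no bdev notification (count 0 just below), while stopping discovery later unregisters both namespaces at once (count 2, notify_id 4).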
00:34:04.853 [2024-12-16 12:55:30.791221] bdev_nvme.c:6949:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:34:04.853 [2024-12-16 12:55:30.791237] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:04.853 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.853 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:04.853 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:04.853 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:34:04.853 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:34:04.853 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:04.853 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:04.853 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:34:04.853 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:34:04.853 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:04.853 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:04.853 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.853 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:04.853 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:04.853 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:04.853 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.853 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:34:04.853 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:04.853 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:34:04.853 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:04.853 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:04.853 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:04.853 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:04.853 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:04.853 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:34:04.853 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:34:04.853 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:04.853 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:04.853 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.853 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:04.853 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.853 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:04.853 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:04.853 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:34:04.853 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:04.853 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:34:04.853 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.853 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:05.112 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.112 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:34:05.112 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:34:05.112 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:05.112 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:05.112 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:34:05.112 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:34:05.112 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:05.112 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.112 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:05.112 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:05.112 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:05.112 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:05.112 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.112 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:34:05.112 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:05.112 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:34:05.112 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:34:05.112 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:05.112 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:05.112 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:34:05.112 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:34:05.112 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:05.112 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:05.112 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.112 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:05.112 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:05.112 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:05.112 12:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.112 12:55:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:34:05.112 12:55:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:05.112 12:55:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:34:05.112 12:55:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:34:05.112 12:55:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:05.112 12:55:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:05.113 12:55:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:05.113 12:55:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:05.113 12:55:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:05.113 12:55:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:34:05.113 12:55:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:05.113 12:55:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:05.113 12:55:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.113 12:55:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:05.113 12:55:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.113 12:55:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:34:05.113 12:55:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:34:05.113 12:55:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:34:05.113 12:55:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:05.113 12:55:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:05.113 12:55:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.113 12:55:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:06.492 [2024-12-16 12:55:32.120233] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:06.492 [2024-12-16 12:55:32.120249] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:06.492 [2024-12-16 12:55:32.120259] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:06.492 [2024-12-16 12:55:32.206518] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:34:06.492 [2024-12-16 12:55:32.307213] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:06.492 [2024-12-16 12:55:32.307239] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:06.492 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.492 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:06.492 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:34:06.492 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:06.492 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:06.492 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:06.492 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:06.492 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:06.492 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:34:06.492 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.492 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:06.492 request: 00:34:06.492 { 00:34:06.492 "name": "nvme", 00:34:06.492 "trtype": "tcp", 00:34:06.492 "traddr": "10.0.0.2", 00:34:06.492 "adrfam": "ipv4", 00:34:06.492 "trsvcid": "8009", 00:34:06.492 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:06.492 "wait_for_attach": true, 00:34:06.492 "method": "bdev_nvme_start_discovery", 00:34:06.492 "req_id": 1 00:34:06.492 } 00:34:06.492 Got JSON-RPC error response 00:34:06.492 response: 00:34:06.492 { 00:34:06.492 "code": -17, 00:34:06.492 "message": "File exists" 00:34:06.492 } 00:34:06.492 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:06.492 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:34:06.492 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:06.492 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:06.492 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:06.492 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:34:06.492 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:06.492 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:06.492 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.492 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:06.492 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:06.492 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:06.492 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.492 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:34:06.492 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:34:06.492 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:06.492 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:06.492 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:06.492 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.492 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:06.492 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:06.492 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.492 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:06.493 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:34:06.493 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:34:06.493 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:06.493 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:06.493 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:06.493 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:06.493 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:06.493 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:06.493 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.493 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:06.493 request: 00:34:06.493 { 00:34:06.493 "name": "nvme_second", 00:34:06.493 "trtype": "tcp", 00:34:06.493 "traddr": "10.0.0.2", 00:34:06.493 "adrfam": "ipv4", 00:34:06.493 "trsvcid": "8009", 00:34:06.493 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:06.493 "wait_for_attach": true, 00:34:06.493 "method": "bdev_nvme_start_discovery", 00:34:06.493 "req_id": 1 00:34:06.493 } 00:34:06.493 Got JSON-RPC error response 00:34:06.493 response: 00:34:06.493 { 00:34:06.493 "code": -17, 00:34:06.493 "message": "File exists" 00:34:06.493 } 00:34:06.493 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:06.493 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:34:06.493 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:06.493 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:06.493 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:06.493 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:34:06.493 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:06.493 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:06.493 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.493 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:06.493 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:06.493 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:06.493 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.493 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:34:06.493 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:34:06.493 12:55:32 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:06.493 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:06.493 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.493 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:06.493 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:06.493 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:06.493 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.493 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:06.493 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:06.493 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:34:06.493 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:06.493 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:06.493 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:06.493 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:06.493 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:06.493 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:06.493 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.493 12:55:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:07.871 [2024-12-16 12:55:33.543151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:07.871 [2024-12-16 12:55:33.543178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c9e250 with addr=10.0.0.2, port=8010 00:34:07.871 [2024-12-16 12:55:33.543191] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:07.871 [2024-12-16 12:55:33.543198] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:07.871 [2024-12-16 12:55:33.543205] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:34:08.807 [2024-12-16 12:55:34.545548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.807 [2024-12-16 12:55:34.545572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c736b0 with addr=10.0.0.2, port=8010 00:34:08.807 [2024-12-16 12:55:34.545584] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:08.807 [2024-12-16 12:55:34.545590] nvme.c: 831:nvme_probe_internal: 
*ERROR*: NVMe ctrlr scan failed 00:34:08.807 [2024-12-16 12:55:34.545596] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:34:09.744 [2024-12-16 12:55:35.547738] bdev_nvme.c:7205:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:34:09.744 request: 00:34:09.744 { 00:34:09.744 "name": "nvme_second", 00:34:09.744 "trtype": "tcp", 00:34:09.744 "traddr": "10.0.0.2", 00:34:09.744 "adrfam": "ipv4", 00:34:09.744 "trsvcid": "8010", 00:34:09.744 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:09.744 "wait_for_attach": false, 00:34:09.744 "attach_timeout_ms": 3000, 00:34:09.744 "method": "bdev_nvme_start_discovery", 00:34:09.744 "req_id": 1 00:34:09.744 } 00:34:09.744 Got JSON-RPC error response 00:34:09.744 response: 00:34:09.744 { 00:34:09.744 "code": -110, 00:34:09.744 "message": "Connection timed out" 00:34:09.744 } 00:34:09.744 12:55:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:09.744 12:55:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:34:09.744 12:55:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:09.744 12:55:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:09.744 12:55:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:09.744 12:55:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:34:09.744 12:55:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:09.744 12:55:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:09.744 12:55:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.744 12:55:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:09.744 12:55:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:09.744 12:55:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:09.744 12:55:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.744 12:55:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:34:09.744 12:55:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:34:09.744 12:55:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 530705 00:34:09.744 12:55:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:34:09.744 12:55:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # nvmfcleanup 00:34:09.744 12:55:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:34:09.744 12:55:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:09.744 12:55:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:34:09.744 12:55:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:09.744 12:55:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:09.744 rmmod nvme_tcp 00:34:09.744 rmmod nvme_fabrics 00:34:09.744 rmmod nvme_keyring 00:34:09.744 12:55:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:09.744 12:55:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:34:09.744 12:55:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:34:09.744 12:55:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@513 -- # '[' -n 530686 ']' 00:34:09.744 12:55:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # killprocess 530686 00:34:09.744 12:55:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 530686 ']' 00:34:09.744 12:55:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 530686 00:34:09.744 12:55:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:34:09.744 12:55:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:09.744 12:55:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 530686 00:34:09.744 12:55:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:09.744 12:55:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:09.744 12:55:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 530686' 00:34:09.744 killing process with pid 530686 00:34:09.744 12:55:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 530686 00:34:09.744 12:55:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 530686 00:34:10.003 12:55:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:34:10.003 12:55:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:34:10.003 12:55:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:34:10.003 12:55:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:34:10.003 12:55:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # iptables-save 00:34:10.003 12:55:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:34:10.003 12:55:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # iptables-restore 00:34:10.003 12:55:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:10.003 12:55:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:10.003 12:55:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:10.003 12:55:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:10.003 12:55:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:11.908 12:55:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:11.908 00:34:11.908 real 0m17.182s 00:34:11.908 user 0m20.561s 00:34:11.908 sys 0m5.769s 00:34:12.168 12:55:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:12.168 12:55:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:12.168 
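The polling pattern that dominates the discovery trace above (the repeated @914-@918 lines) is the waitforcondition helper from common/autotest_common.sh. A minimal sketch reconstructed from this xtrace; the sleep interval and the failure return are assumptions, as neither is visible in the log:

    # Re-evaluate an arbitrary bash condition until it holds or the retry budget runs out.
    waitforcondition() {
        local cond=$1    # e.g. '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' (@914)
        local max=10     # retry budget seen in the trace (@915)
        while (( max-- )); do            # @916
            eval "$cond" && return 0     # condition met (@917/@918)
            sleep 1                      # assumed pause between polls
        done
        return 1                         # assumed: give up once the budget is spent
    }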
************************************ 00:34:12.168 END TEST nvmf_host_discovery 00:34:12.168 ************************************ 00:34:12.168 12:55:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:12.168 12:55:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:12.168 12:55:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:12.168 12:55:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.168 ************************************ 00:34:12.168 START TEST nvmf_host_multipath_status 00:34:12.168 ************************************ 00:34:12.168 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:12.168 * Looking for test storage... 00:34:12.168 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:12.168 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:12.168 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lcov --version 00:34:12.168 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:12.168 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:12.168 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:12.168 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:12.168 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:12.168 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:34:12.168 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:34:12.168 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:34:12.168 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:34:12.168 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:34:12.168 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:34:12.168 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:34:12.168 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:12.168 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:34:12.168 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:34:12.168 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:12.168 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:12.168 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:34:12.168 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:34:12.168 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:12.168 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:34:12.168 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:34:12.168 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:34:12.168 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:12.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:12.169 --rc genhtml_branch_coverage=1 00:34:12.169 --rc genhtml_function_coverage=1 00:34:12.169 --rc genhtml_legend=1 00:34:12.169 --rc geninfo_all_blocks=1 00:34:12.169 --rc geninfo_unexecuted_blocks=1 00:34:12.169 00:34:12.169 ' 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:12.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:12.169 --rc genhtml_branch_coverage=1 00:34:12.169 --rc genhtml_function_coverage=1 00:34:12.169 --rc genhtml_legend=1 00:34:12.169 --rc geninfo_all_blocks=1 00:34:12.169 --rc geninfo_unexecuted_blocks=1 00:34:12.169 00:34:12.169 ' 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:12.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:12.169 --rc genhtml_branch_coverage=1 00:34:12.169 --rc genhtml_function_coverage=1 00:34:12.169 --rc genhtml_legend=1 00:34:12.169 --rc geninfo_all_blocks=1 00:34:12.169 --rc geninfo_unexecuted_blocks=1 00:34:12.169 00:34:12.169 ' 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:12.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:12.169 --rc genhtml_branch_coverage=1 00:34:12.169 --rc genhtml_function_coverage=1 00:34:12.169 --rc genhtml_legend=1 00:34:12.169 --rc geninfo_all_blocks=1 00:34:12.169 --rc geninfo_unexecuted_blocks=1 00:34:12.169 00:34:12.169 ' 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
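The "lt 1.15 2" trace just above is scripts/common.sh deciding whether the installed lcov predates 2.x before picking LCOV_OPTS. A condensed sketch of that comparison, specialized to the '<' case seen in this trace (the real cmp_versions dispatches on the operator via a case statement and also validates fields through a decimal helper):

    # Return 0 when version $1 is strictly older than version $3 (operator in $2 is assumed '<').
    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local -a ver1 ver2
        local v
        IFS='.-:' read -ra ver1 <<< "$1"   # split on the separators seen at @336
        IFS='.-:' read -ra ver2 <<< "$3"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # left side is newer
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # left side is older
        done
        return 1   # equal versions: not strictly less-than (assumed tie handling)
    }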
00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:12.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # prepare_net_devs 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@434 -- # local -g is_hw=no 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # remove_spdk_ns 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:12.169 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:12.429 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:34:12.429 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:34:12.429 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:34:12.429 12:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:34:18.998 12:55:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:18.998 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:34:18.998 12:55:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:18.998 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ up == up ]] 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:18.998 Found net devices under 0000:af:00.0: cvl_0_0 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ up == up ]] 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:18.998 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:18.999 Found net devices under 0000:af:00.1: cvl_0_1 00:34:18.999 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:34:18.999 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:34:18.999 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # is_hw=yes 00:34:18.999 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:34:18.999 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:34:18.999 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:34:18.999 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:18.999 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:18.999 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:18.999 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:18.999 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:18.999 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:18.999 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:18.999 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:18.999 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:18.999 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:18.999 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:18.999 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:18.999 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:18.999 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:18.999 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:18.999 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:18.999 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:18.999 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:18.999 12:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:18.999 12:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:18.999 12:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:18.999 12:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:18.999 12:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:18.999 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:18.999 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.397 ms 00:34:18.999 00:34:18.999 --- 10.0.0.2 ping statistics --- 00:34:18.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:18.999 rtt min/avg/max/mdev = 0.397/0.397/0.397/0.000 ms 00:34:18.999 12:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:18.999 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:18.999 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:34:18.999 00:34:18.999 --- 10.0.0.1 ping statistics --- 00:34:18.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:18.999 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:34:18.999 12:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:18.999 12:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # return 0 00:34:18.999 12:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:34:18.999 12:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:18.999 12:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:34:18.999 12:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:34:18.999 12:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:18.999 12:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:34:18.999 12:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:34:18.999 12:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:34:18.999 12:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:34:18.999 12:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:18.999 12:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:18.999 12:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # nvmfpid=535682 00:34:18.999 12:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:34:18.999 12:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # waitforlisten 535682 00:34:18.999 12:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 535682 ']' 00:34:18.999 12:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:18.999 12:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:18.999 12:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
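The nvmf_tcp_init sequence traced above splits the dual-port E810 NIC across network namespaces so target (10.0.0.2) and initiator (10.0.0.1) talk over real hardware but isolated stacks. Condensed from the commands in this trace; the cvl_0_* device names are specific to this rig:

    #!/usr/bin/env bash
    # Target side (cvl_0_0) lives in a namespace at 10.0.0.2; the initiator side
    # (cvl_0_1) stays in the root namespace at 10.0.0.1.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port on the initiator-facing interface, tagged for later cleanup.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # Sanity-check reachability in both directions before starting the target.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1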
00:34:18.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:18.999 12:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:18.999 12:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:18.999 [2024-12-16 12:55:44.143446] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:34:18.999 [2024-12-16 12:55:44.143489] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:18.999 [2024-12-16 12:55:44.215694] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:18.999 [2024-12-16 12:55:44.255517] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:18.999 [2024-12-16 12:55:44.255556] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:18.999 [2024-12-16 12:55:44.255563] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:18.999 [2024-12-16 12:55:44.255569] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:18.999 [2024-12-16 12:55:44.255574] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:18.999 [2024-12-16 12:55:44.255638] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:34:18.999 [2024-12-16 12:55:44.255637] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:34:18.999 12:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:18.999 12:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:34:18.999 12:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:34:18.999 12:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:18.999 12:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:18.999 12:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:18.999 12:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=535682 00:34:18.999 12:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:18.999 [2024-12-16 12:55:44.546179] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:18.999 12:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:34:18.999 Malloc0 00:34:18.999 12:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:34:18.999 12:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:19.257 12:55:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:19.515 [2024-12-16 12:55:45.361930] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:19.515 12:55:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:19.515 [2024-12-16 12:55:45.562432] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:19.773 12:55:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:34:19.773 12:55:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=535928 00:34:19.773 12:55:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:19.773 12:55:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 535928 /var/tmp/bdevperf.sock 00:34:19.773 12:55:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 535928 ']' 00:34:19.773 12:55:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:19.773 12:55:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:19.773 12:55:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:19.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
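With connectivity confirmed and nvmf_tgt (pid 535682) listening on /var/tmp/spdk.sock, the script configures the target through scripts/rpc.py. Condensed from the trace above (rpc.py standing in for the full /var/jenkins/.../spdk/scripts/rpc.py path):

    rpc.py nvmf_create_transport -t tcp -o -u 8192     # TCP transport, 8 KiB in-capsule data
    rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM disk, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -r -m 2               # -a any host, -r ANA reporting, -m 2 max namespaces
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

Two listeners on the same address but different ports give the initiator two distinct paths to the same namespace, which is what the rest of the test exercises. bdevperf is then launched separately (-m 0x4 -z, RPC socket /var/tmp/bdevperf.sock, queue depth 128, 4 KiB I/O, verify workload, 90 s) to act as the multipath host.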
00:34:19.773 12:55:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:19.774 12:55:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:19.774 12:55:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:19.774 12:55:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:34:19.774 12:55:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:34:20.032 12:55:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:34:20.290 Nvme0n1 00:34:20.290 12:55:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:34:20.859 Nvme0n1 00:34:20.859 12:55:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:34:20.859 12:55:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:34:22.765 12:55:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:34:22.765 12:55:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:34:23.024 12:55:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:23.283 12:55:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:34:24.220 12:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:34:24.220 12:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:24.220 12:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:24.220 12:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:24.480 12:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:24.480 12:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:24.480 12:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
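Every status assertion that follows is the same two-step probe against the initiator: dump the I/O paths over bdevperf's RPC socket, then select one attribute of one listener port with jq. Reconstructed from the xtrace — a sketch of the helper in host/multipath_status.sh, not the verbatim source:

    port_status() {
        local port=$1 attr=$2 expected=$3   # attr is one of: current, connected, accessible
        local actual
        actual=$(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
        [[ "$actual" == "$expected" ]]
    }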
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:24.480 12:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:24.480 12:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:24.480 12:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:24.480 12:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:24.480 12:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:24.738 12:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:24.738 12:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:24.738 12:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:24.738 12:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:24.997 12:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:24.997 12:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:24.997 12:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:24.997 12:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:25.256 12:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:25.256 12:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:25.256 12:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:25.256 12:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:25.256 12:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:25.256 12:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:34:25.256 12:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:25.514 12:55:51 
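check_status bundles six of these probes; its arguments map, in order, to current(4420), current(4421), connected(4420), connected(4421), accessible(4420), accessible(4421). A sketch matching the call pattern in the trace:

    check_status() {
        port_status 4420 current    "$1" && port_status 4421 current    "$2" &&
        port_status 4420 connected  "$3" && port_status 4421 connected  "$4" &&
        port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
    }

So the first cycle, check_status true false true true true true, asserted that with both listeners optimized only the 4420 path actually carries I/O (current), while both paths are connected and accessible.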
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:25.773 12:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:34:26.709 12:55:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:34:26.709 12:55:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:26.709 12:55:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:26.709 12:55:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:26.967 12:55:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:26.967 12:55:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:26.967 12:55:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:26.967 12:55:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:27.226 12:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:27.226 12:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:27.226 12:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:27.226 12:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:27.484 12:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:27.484 12:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:27.484 12:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:27.484 12:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:27.484 12:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:27.484 12:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:27.484 12:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:27.484 12:55:53 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:27.743 12:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:27.743 12:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:27.743 12:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:27.743 12:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:28.001 12:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:28.001 12:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:34:28.001 12:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:28.259 12:55:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:34:28.517 12:55:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:34:29.452 12:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:34:29.452 12:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:29.452 12:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:29.452 12:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:29.710 12:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:29.710 12:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:29.710 12:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:29.710 12:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:29.968 12:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:29.968 12:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:29.968 12:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:29.968 12:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:29.968 12:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:29.968 12:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:29.968 12:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:29.968 12:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:30.227 12:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:30.227 12:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:30.227 12:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:30.227 12:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:30.486 12:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:30.486 12:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:30.486 12:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:30.486 12:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:30.744 12:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:30.744 12:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:34:30.744 12:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:31.002 12:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:31.002 12:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:34:32.383 12:55:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:34:32.383 12:55:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:32.383 12:55:58 
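Each phase flips the ANA (Asymmetric Namespace Access) state advertised by one or both listeners, sleeps a second, and re-checks the initiator's view. The pattern so far: under the default active_passive policy exactly one path is current; an optimized path wins over a non_optimized one; and an inaccessible path stays connected but stops being accessible. The flips themselves are plain target-side RPCs, e.g. for the non_optimized/inaccessible phase just started:

    rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
    rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible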
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:32.383 12:55:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:32.383 12:55:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:32.383 12:55:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:32.383 12:55:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:32.383 12:55:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:32.641 12:55:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:32.641 12:55:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:32.642 12:55:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:32.642 12:55:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:32.642 12:55:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:32.642 12:55:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:32.642 12:55:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:32.642 12:55:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:32.900 12:55:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:32.900 12:55:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:32.900 12:55:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:32.900 12:55:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:33.159 12:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:33.159 12:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:33.159 12:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:33.159 12:55:59 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:33.417 12:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:33.417 12:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:34:33.417 12:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:34:33.676 12:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:33.933 12:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:34:34.870 12:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:34:34.870 12:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:34.870 12:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:34.870 12:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:35.129 12:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:35.129 12:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:35.129 12:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:35.129 12:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:35.129 12:56:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:35.129 12:56:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:35.129 12:56:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:35.129 12:56:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:35.387 12:56:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:35.387 12:56:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:35.387 12:56:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:35.387 12:56:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:35.646 12:56:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:35.646 12:56:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:34:35.646 12:56:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:35.646 12:56:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:35.905 12:56:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:35.905 12:56:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:35.905 12:56:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:35.905 12:56:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:35.905 12:56:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:35.905 12:56:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:34:35.905 12:56:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:34:36.164 12:56:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:36.422 12:56:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:34:37.360 12:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:34:37.360 12:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:37.360 12:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:37.360 12:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:37.619 12:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:37.619 12:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:37.619 12:56:03 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:37.619 12:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:37.877 12:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:37.877 12:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:37.877 12:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:37.877 12:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:38.136 12:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:38.136 12:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:38.136 12:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:38.136 12:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:38.136 12:56:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:38.136 12:56:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:34:38.136 12:56:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:38.136 12:56:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:38.395 12:56:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:38.395 12:56:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:38.395 12:56:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:38.395 12:56:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:38.654 12:56:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:38.654 12:56:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:34:38.913 12:56:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
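At @116 the test switches the initiator from the default active_passive policy to active_active, after which every accessible path should report current==true at once — compare the upcoming check_status true true true true true true with the earlier true false patterns. The policy change is a single RPC against the bdevperf application:

    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active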
set_ANA_state optimized optimized 00:34:38.913 12:56:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:34:39.172 12:56:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:39.430 12:56:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:34:40.367 12:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:34:40.367 12:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:40.367 12:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:40.367 12:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:40.626 12:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:40.626 12:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:40.626 12:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:40.626 12:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:40.626 12:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:40.626 12:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:40.626 12:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:40.626 12:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:40.885 12:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:40.885 12:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:40.885 12:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:40.885 12:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:41.143 12:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:41.143 12:56:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:41.143 12:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:41.143 12:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:41.402 12:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:41.402 12:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:41.402 12:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:41.402 12:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:41.661 12:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:41.661 12:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:34:41.661 12:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:41.661 12:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:41.919 12:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:34:42.856 12:56:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:34:42.856 12:56:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:42.856 12:56:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:42.856 12:56:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:43.115 12:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:43.115 12:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:43.115 12:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:43.115 12:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:43.373 12:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:43.373 12:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:43.373 12:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:43.373 12:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:43.632 12:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:43.632 12:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:43.632 12:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:43.632 12:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:43.892 12:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:43.892 12:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:43.892 12:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:43.892 12:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:43.892 12:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:43.892 12:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:43.892 12:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:43.892 12:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:44.151 12:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:44.151 12:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:34:44.151 12:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:44.409 12:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:34:44.668 12:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
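When stepping through these transitions by hand, it can be handier to dump the whole path table in one shot rather than one attribute at a time — an illustrative one-liner using the same RPC, not something the script itself runs:

    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq '.poll_groups[].io_paths[] | {trsvcid: .transport.trsvcid, current, connected, accessible}'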
00:34:45.603 12:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:34:45.603 12:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:45.603 12:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:45.603 12:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:45.862 12:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:45.862 12:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:45.862 12:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:45.862 12:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:46.121 12:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:46.121 12:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:46.121 12:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:46.121 12:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:46.379 12:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:46.379 12:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:46.379 12:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:46.379 12:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:46.379 12:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:46.379 12:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:46.379 12:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:46.379 12:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:46.638 12:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:46.638 12:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:46.638 12:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:46.638 12:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:46.896 12:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:46.896 12:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:34:46.896 12:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:47.155 12:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:47.414 12:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:34:48.349 12:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:34:48.350 12:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:48.350 12:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:48.350 12:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:48.609 12:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:48.609 12:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:48.609 12:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:48.609 12:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:48.609 12:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:48.609 12:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:48.609 12:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:48.609 12:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:48.867 12:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:34:48.867 12:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:48.867 12:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:48.867 12:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:49.126 12:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:49.126 12:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:49.126 12:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:49.126 12:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:49.384 12:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:49.384 12:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:49.384 12:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:49.384 12:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:49.643 12:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:49.643 12:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 535928 00:34:49.643 12:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 535928 ']' 00:34:49.643 12:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 535928 00:34:49.643 12:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:34:49.643 12:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:49.643 12:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 535928 00:34:49.643 12:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:34:49.643 12:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:34:49.643 12:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 535928' 00:34:49.643 killing process with pid 535928 00:34:49.643 12:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 535928 00:34:49.643 12:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 535928 00:34:49.643 { 00:34:49.643 "results": [ 00:34:49.643 { 00:34:49.643 "job": "Nvme0n1", 00:34:49.643 
"core_mask": "0x4", 00:34:49.643 "workload": "verify", 00:34:49.643 "status": "terminated", 00:34:49.643 "verify_range": { 00:34:49.643 "start": 0, 00:34:49.643 "length": 16384 00:34:49.643 }, 00:34:49.643 "queue_depth": 128, 00:34:49.643 "io_size": 4096, 00:34:49.643 "runtime": 28.774567, 00:34:49.643 "iops": 10672.827848286996, 00:34:49.643 "mibps": 41.690733782371076, 00:34:49.643 "io_failed": 0, 00:34:49.643 "io_timeout": 0, 00:34:49.643 "avg_latency_us": 11954.609649517632, 00:34:49.643 "min_latency_us": 565.6380952380953, 00:34:49.643 "max_latency_us": 3067833.782857143 00:34:49.643 } 00:34:49.643 ], 00:34:49.643 "core_count": 1 00:34:49.643 } 00:34:49.924 12:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 535928 00:34:49.924 12:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:49.924 [2024-12-16 12:55:45.626770] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:34:49.924 [2024-12-16 12:55:45.626824] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid535928 ] 00:34:49.924 [2024-12-16 12:55:45.693643] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:49.924 [2024-12-16 12:55:45.732173] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:34:49.924 [2024-12-16 12:55:46.594907] bdev_nvme.c:5605:nvme_bdev_ctrlr_create: *WARNING*: multipath_config: deprecated feature bdev_nvme_attach_controller.multipath configuration mismatch to be removed in v25.01 00:34:49.924 Running I/O for 90 seconds... 
00:34:49.924 11444.00 IOPS, 44.70 MiB/s [2024-12-16T11:56:15.991Z] … 11637.75 IOPS, 45.46 MiB/s [2024-12-16T11:56:15.991Z] (bdevperf per-second readout, 12 samples, steady at roughly 11.4–11.6k IOPS / ~45 MiB/s)
[compacted: from [2024-12-16 12:55:59.516486] onward, try.txt carries several hundred paired nvme_qpair.c: 243:nvme_io_qpair_print_command / nvme_qpair.c: 474:spdk_nvme_print_completion *NOTICE* entries on qid:1 — READ (SGL TRANSPORT DATA BLOCK) and WRITE (SGL DATA BLOCK OFFSET 0x0 len:0x1000) commands over lba 1720–2736, len:8, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), i.e. the target reporting the path ANA-inaccessible while the multipath status test redirects I/O; the final retained entry follows]
[2024-12-16 12:55:59.521964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:49.929 [2024-12-16 12:55:59.521971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:49.929 [2024-12-16 12:55:59.521983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:1824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.929 [2024-12-16 12:55:59.521989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:49.929 [2024-12-16 12:55:59.522001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.929 [2024-12-16 12:55:59.522008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:49.929 [2024-12-16 12:55:59.522020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:1840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.929 [2024-12-16 12:55:59.522027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:49.929 [2024-12-16 12:55:59.522039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:2640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.929 [2024-12-16 12:55:59.522047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:49.929 [2024-12-16 12:55:59.522060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:2648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.929 [2024-12-16 12:55:59.522067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:49.929 [2024-12-16 12:55:59.522079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.929 [2024-12-16 12:55:59.522086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:49.930 [2024-12-16 12:55:59.522098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.930 [2024-12-16 12:55:59.522105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:49.930 [2024-12-16 12:55:59.522124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:2672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.930 [2024-12-16 12:55:59.522131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:49.930 [2024-12-16 12:55:59.522143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:2680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.930 [2024-12-16 12:55:59.522150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:49.930 [2024-12-16 12:55:59.522163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 
lba:2688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.930 [2024-12-16 12:55:59.522169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:49.930 [2024-12-16 12:55:59.522355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:2696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.930 [2024-12-16 12:55:59.522365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:49.930 [2024-12-16 12:55:59.522377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.930 [2024-12-16 12:55:59.522385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:49.930 [2024-12-16 12:55:59.522397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:2712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.930 [2024-12-16 12:55:59.522404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:49.930 [2024-12-16 12:55:59.522416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:2720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.930 [2024-12-16 12:55:59.522423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:49.930 [2024-12-16 12:55:59.522435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:2728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.930 [2024-12-16 12:55:59.522441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:49.930 [2024-12-16 12:55:59.522453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.930 [2024-12-16 12:55:59.522460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:49.930 [2024-12-16 12:55:59.522472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:1856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.930 [2024-12-16 12:55:59.522479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:49.930 [2024-12-16 12:55:59.522491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.930 [2024-12-16 12:55:59.522497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:49.930 [2024-12-16 12:55:59.522509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:1872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.930 [2024-12-16 12:55:59.522516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:49.930 [2024-12-16 12:55:59.522530] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.930 [2024-12-16 12:55:59.522537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:49.930 [2024-12-16 12:55:59.522549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.930 [2024-12-16 12:55:59.522555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:49.930 [2024-12-16 12:55:59.522567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.930 [2024-12-16 12:55:59.522574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:49.930 [2024-12-16 12:55:59.522586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.930 [2024-12-16 12:55:59.522593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:49.930 [2024-12-16 12:55:59.522606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.930 [2024-12-16 12:55:59.522614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:49.930 [2024-12-16 12:55:59.522626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.930 [2024-12-16 12:55:59.522634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:49.930 [2024-12-16 12:55:59.522647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.930 [2024-12-16 12:55:59.522653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:49.930 [2024-12-16 12:55:59.522665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:1936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.930 [2024-12-16 12:55:59.522672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:49.930 [2024-12-16 12:55:59.522684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:1944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.930 [2024-12-16 12:55:59.522691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:49.930 [2024-12-16 12:55:59.522703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.930 [2024-12-16 12:55:59.522710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:34:49.930 [2024-12-16 12:55:59.522722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:1960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.930 [2024-12-16 12:55:59.522729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:49.930 [2024-12-16 12:55:59.522741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:1968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.930 [2024-12-16 12:55:59.522747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:49.930 [2024-12-16 12:55:59.522759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:1976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.930 [2024-12-16 12:55:59.522767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:49.930 [2024-12-16 12:55:59.522779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:1984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.930 [2024-12-16 12:55:59.522786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:49.930 [2024-12-16 12:55:59.522798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:1992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.930 [2024-12-16 12:55:59.522804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:49.930 [2024-12-16 12:55:59.522816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:2000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.930 [2024-12-16 12:55:59.522823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:49.930 [2024-12-16 12:55:59.522835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.930 [2024-12-16 12:55:59.522841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:49.930 [2024-12-16 12:55:59.522853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:2016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.930 [2024-12-16 12:55:59.522860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:49.930 [2024-12-16 12:55:59.522872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:2024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.930 [2024-12-16 12:55:59.522878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:49.930 [2024-12-16 12:55:59.522890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.930 [2024-12-16 12:55:59.522897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:111 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:49.930 [2024-12-16 12:55:59.522910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.930 [2024-12-16 12:55:59.522917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:49.930 [2024-12-16 12:55:59.522930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:2040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.930 [2024-12-16 12:55:59.522937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:49.930 [2024-12-16 12:55:59.522949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.930 [2024-12-16 12:55:59.522956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:49.930 [2024-12-16 12:55:59.522968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:2056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.930 [2024-12-16 12:55:59.522974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:49.930 [2024-12-16 12:55:59.522987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:2064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.930 [2024-12-16 12:55:59.522994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:49.930 [2024-12-16 12:55:59.523251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:2072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.930 [2024-12-16 12:55:59.523261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:49.930 [2024-12-16 12:55:59.523274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.930 [2024-12-16 12:55:59.523281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:49.931 [2024-12-16 12:55:59.523294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.931 [2024-12-16 12:55:59.523300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:49.931 [2024-12-16 12:55:59.523313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.931 [2024-12-16 12:55:59.523319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:49.931 [2024-12-16 12:55:59.523331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:2104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.931 [2024-12-16 12:55:59.523338] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:49.931 [2024-12-16 12:55:59.523350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:2112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.931 [2024-12-16 12:55:59.523356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:49.931 [2024-12-16 12:55:59.523369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:2120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.931 [2024-12-16 12:55:59.523375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:49.931 [2024-12-16 12:55:59.523387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.931 [2024-12-16 12:55:59.523394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:49.931 [2024-12-16 12:55:59.523406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:2136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.931 [2024-12-16 12:55:59.523412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:49.931 [2024-12-16 12:55:59.523424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.931 [2024-12-16 12:55:59.523431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:49.931 [2024-12-16 12:55:59.523443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:2152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.931 [2024-12-16 12:55:59.523450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:49.931 [2024-12-16 12:55:59.523463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.931 [2024-12-16 12:55:59.523470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:49.931 [2024-12-16 12:55:59.523484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.931 [2024-12-16 12:55:59.523491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:49.931 [2024-12-16 12:55:59.523503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.931 [2024-12-16 12:55:59.523509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:49.931 [2024-12-16 12:55:59.523522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.931 
[2024-12-16 12:55:59.523528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:49.931 [2024-12-16 12:55:59.523540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.931 [2024-12-16 12:55:59.523547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.931 [2024-12-16 12:55:59.523559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.931 [2024-12-16 12:55:59.523566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.931 [2024-12-16 12:55:59.523578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.931 [2024-12-16 12:55:59.523585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:49.931 [2024-12-16 12:55:59.523597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:2216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.931 [2024-12-16 12:55:59.523604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:49.931 [2024-12-16 12:55:59.523616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.931 [2024-12-16 12:55:59.523622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:49.931 [2024-12-16 12:55:59.523634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:2232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.931 [2024-12-16 12:55:59.523641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:49.931 [2024-12-16 12:55:59.523653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:2240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.931 [2024-12-16 12:55:59.523659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:49.931 [2024-12-16 12:55:59.523671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:2248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.931 [2024-12-16 12:55:59.523678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:49.931 [2024-12-16 12:55:59.523690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.931 [2024-12-16 12:55:59.523696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:49.931 [2024-12-16 12:55:59.523708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:2264 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.931 [2024-12-16 12:55:59.523716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:49.931 [2024-12-16 12:55:59.523728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.931 [2024-12-16 12:55:59.523735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:49.931 [2024-12-16 12:55:59.523747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:2280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.931 [2024-12-16 12:55:59.523754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:49.931 [2024-12-16 12:55:59.523766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.931 [2024-12-16 12:55:59.523773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:49.931 [2024-12-16 12:55:59.523785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:1728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.931 [2024-12-16 12:55:59.523791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:49.931 [2024-12-16 12:55:59.523803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:1736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.931 [2024-12-16 12:55:59.523810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:49.931 [2024-12-16 12:55:59.523822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.931 [2024-12-16 12:55:59.523829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:49.931 [2024-12-16 12:55:59.523841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.931 [2024-12-16 12:55:59.523847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:49.931 [2024-12-16 12:55:59.523859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.931 [2024-12-16 12:55:59.523866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:49.931 [2024-12-16 12:55:59.523878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:1768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.931 [2024-12-16 12:55:59.523885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:49.931 [2024-12-16 12:55:59.524134] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.931 [2024-12-16 12:55:59.524143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:49.931 [2024-12-16 12:55:59.524156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.931 [2024-12-16 12:55:59.524163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:49.931 [2024-12-16 12:55:59.524175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.931 [2024-12-16 12:55:59.524183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:49.931 [2024-12-16 12:55:59.524196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.931 [2024-12-16 12:55:59.524202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:49.931 [2024-12-16 12:55:59.524214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.931 [2024-12-16 12:55:59.524221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:49.931 [2024-12-16 12:55:59.524233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.931 [2024-12-16 12:55:59.524239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:49.931 [2024-12-16 12:55:59.524251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.931 [2024-12-16 12:55:59.524258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:49.931 [2024-12-16 12:55:59.524270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:2344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.932 [2024-12-16 12:55:59.524276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:49.932 [2024-12-16 12:55:59.524288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.932 [2024-12-16 12:55:59.524294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:49.932 [2024-12-16 12:55:59.524306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.932 [2024-12-16 12:55:59.524313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:49.932 [2024-12-16 12:55:59.524325] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:2368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.932 [2024-12-16 12:55:59.524331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:49.932 [2024-12-16 12:55:59.524343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.932 [2024-12-16 12:55:59.524349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:49.932 [2024-12-16 12:55:59.524362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.932 [2024-12-16 12:55:59.524368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:49.932 [2024-12-16 12:55:59.524380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.932 [2024-12-16 12:55:59.524386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:49.932 [2024-12-16 12:55:59.524398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:2400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.932 [2024-12-16 12:55:59.524405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:49.932 [2024-12-16 12:55:59.524418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.932 [2024-12-16 12:55:59.524425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:49.932 [2024-12-16 12:55:59.524436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.932 [2024-12-16 12:55:59.524443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:49.932 [2024-12-16 12:55:59.524455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:2424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.932 [2024-12-16 12:55:59.524461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:49.932 [2024-12-16 12:55:59.524473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:2432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.932 [2024-12-16 12:55:59.524480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:49.932 [2024-12-16 12:55:59.524492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.932 [2024-12-16 12:55:59.529545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 
00:34:49.932 [2024-12-16 12:55:59.529560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:2448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.932 [2024-12-16 12:55:59.529568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:49.932 [2024-12-16 12:55:59.529580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.932 [2024-12-16 12:55:59.529587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:49.932 [2024-12-16 12:55:59.529599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.932 [2024-12-16 12:55:59.529606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:49.932 [2024-12-16 12:55:59.529618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:2472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.932 [2024-12-16 12:55:59.529624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:49.932 [2024-12-16 12:55:59.529636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.932 [2024-12-16 12:55:59.529643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:49.932 [2024-12-16 12:55:59.529655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.932 [2024-12-16 12:55:59.529662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:49.932 [2024-12-16 12:55:59.529673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.932 [2024-12-16 12:55:59.529680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:49.932 [2024-12-16 12:55:59.529694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.932 [2024-12-16 12:55:59.529700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:49.932 [2024-12-16 12:55:59.529713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.932 [2024-12-16 12:55:59.529719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:49.932 [2024-12-16 12:55:59.529731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:2520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.932 [2024-12-16 12:55:59.529738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:54 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:49.932 [2024-12-16 12:55:59.529749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.932 [2024-12-16 12:55:59.529756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:49.932 [2024-12-16 12:55:59.529768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.932 [2024-12-16 12:55:59.529774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:49.932 [2024-12-16 12:55:59.529786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:2544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.932 [2024-12-16 12:55:59.529793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:49.932 [2024-12-16 12:55:59.529804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.932 [2024-12-16 12:55:59.529811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:49.932 [2024-12-16 12:55:59.529823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.932 [2024-12-16 12:55:59.529829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:49.932 [2024-12-16 12:55:59.529841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.932 [2024-12-16 12:55:59.529848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:49.932 [2024-12-16 12:55:59.529860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.932 [2024-12-16 12:55:59.529866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:49.932 [2024-12-16 12:55:59.529878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.932 [2024-12-16 12:55:59.529885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:49.932 [2024-12-16 12:55:59.529897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:2592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.932 [2024-12-16 12:55:59.529903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:49.932 [2024-12-16 12:55:59.529915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.932 [2024-12-16 12:55:59.529925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:49.932 [2024-12-16 12:55:59.529937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.932 [2024-12-16 12:55:59.529944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:49.932 [2024-12-16 12:55:59.529956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.932 [2024-12-16 12:55:59.529962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:49.932 [2024-12-16 12:55:59.529974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.932 [2024-12-16 12:55:59.529981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:49.932 [2024-12-16 12:55:59.529993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:2632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.932 [2024-12-16 12:55:59.529999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:49.932 [2024-12-16 12:55:59.530011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.932 [2024-12-16 12:55:59.530018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:49.932 [2024-12-16 12:55:59.530030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.932 [2024-12-16 12:55:59.530037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:49.932 [2024-12-16 12:55:59.530049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:1792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.932 [2024-12-16 12:55:59.530055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:49.932 [2024-12-16 12:55:59.530067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:1800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.933 [2024-12-16 12:55:59.530074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:49.933 [2024-12-16 12:55:59.530086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:1808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.933 [2024-12-16 12:55:59.530092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:49.933 [2024-12-16 12:55:59.530104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:1816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.933 [2024-12-16 12:55:59.530111] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:49.933 [2024-12-16 12:55:59.530126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.933 [2024-12-16 12:55:59.530132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:49.933 [2024-12-16 12:55:59.530145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:1832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.933 [2024-12-16 12:55:59.530153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:49.933 [2024-12-16 12:55:59.530165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.933 [2024-12-16 12:55:59.530171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:49.933 [2024-12-16 12:55:59.530183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.933 [2024-12-16 12:55:59.530190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:49.933 [2024-12-16 12:55:59.530202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.933 [2024-12-16 12:55:59.530208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:49.933 [2024-12-16 12:55:59.530220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.933 [2024-12-16 12:55:59.530226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:49.933 [2024-12-16 12:55:59.530239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.933 [2024-12-16 12:55:59.530245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:49.933 [2024-12-16 12:55:59.530745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.933 [2024-12-16 12:55:59.530759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:49.933 [2024-12-16 12:55:59.530774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:2680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.933 [2024-12-16 12:55:59.530781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:49.933 [2024-12-16 12:55:59.530793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:49.933 [2024-12-16 12:55:59.530799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:49.933 [2024-12-16 12:55:59.530811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:2696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.933 [2024-12-16 12:55:59.530818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:49.933 [2024-12-16 12:55:59.530830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:2704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.933 [2024-12-16 12:55:59.530836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:49.933 [2024-12-16 12:55:59.530848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:2712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.933 [2024-12-16 12:55:59.530855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:49.933 [2024-12-16 12:55:59.530867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:2720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.933 [2024-12-16 12:55:59.530873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:49.933 [2024-12-16 12:55:59.530888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.933 [2024-12-16 12:55:59.530894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:49.933 [2024-12-16 12:55:59.530906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:1848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.933 [2024-12-16 12:55:59.530913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:49.933 [2024-12-16 12:55:59.530925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:1856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.933 [2024-12-16 12:55:59.530931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:49.933 [2024-12-16 12:55:59.530944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:1864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.933 [2024-12-16 12:55:59.530950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:49.933 [2024-12-16 12:55:59.530962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.933 [2024-12-16 12:55:59.530969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:49.933 [2024-12-16 12:55:59.530981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:1880 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.933 [2024-12-16 12:55:59.530987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:49.933 [2024-12-16 12:55:59.530999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:1888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.933 [2024-12-16 12:55:59.531006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:49.933 [2024-12-16 12:55:59.531018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:1896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.933 [2024-12-16 12:55:59.531024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:49.933 [2024-12-16 12:55:59.531036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:1904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.933 [2024-12-16 12:55:59.531043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:49.933 [2024-12-16 12:55:59.531055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:1912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.933 [2024-12-16 12:55:59.531061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:49.933 [2024-12-16 12:55:59.531074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.933 [2024-12-16 12:55:59.531080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:49.933 [2024-12-16 12:55:59.531092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:1928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.933 [2024-12-16 12:55:59.531098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:49.933 [2024-12-16 12:55:59.531112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:1936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.933 [2024-12-16 12:55:59.531126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:49.933 [2024-12-16 12:55:59.531138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.933 [2024-12-16 12:55:59.531144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:49.933 [2024-12-16 12:55:59.531156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.933 [2024-12-16 12:55:59.531163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:49.933 [2024-12-16 12:55:59.531175] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.933 [2024-12-16 12:55:59.531181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:49.933 [2024-12-16 12:55:59.531193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.933 [2024-12-16 12:55:59.531200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:49.933 [2024-12-16 12:55:59.531212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.933 [2024-12-16 12:55:59.531218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:49.934 [2024-12-16 12:55:59.531230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.934 [2024-12-16 12:55:59.531237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:49.934 [2024-12-16 12:55:59.531249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.934 [2024-12-16 12:55:59.531255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:49.934 [2024-12-16 12:55:59.531267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:2000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.934 [2024-12-16 12:55:59.531273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:49.934 [2024-12-16 12:55:59.531286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:2008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.934 [2024-12-16 12:55:59.531293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:49.934 [2024-12-16 12:55:59.531304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:2016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.934 [2024-12-16 12:55:59.531311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:49.934 [2024-12-16 12:55:59.531323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.934 [2024-12-16 12:55:59.531329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:49.934 [2024-12-16 12:55:59.531342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:2736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.934 [2024-12-16 12:55:59.531350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006b p:0 m:0 dnr:0 
00:34:49.934 [2024-12-16 12:55:59.531362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.934 [2024-12-16 12:55:59.531368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:49.934 [2024-12-16 12:55:59.531380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:2040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.934 [2024-12-16 12:55:59.531387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:49.934 [2024-12-16 12:55:59.531399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.934 [2024-12-16 12:55:59.531405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:49.934 [2024-12-16 12:55:59.531417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:2056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.934 [2024-12-16 12:55:59.531423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:49.934 [2024-12-16 12:55:59.531435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.934 [2024-12-16 12:55:59.531442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:49.934 [2024-12-16 12:55:59.531454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:2072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.934 [2024-12-16 12:55:59.531460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:49.934 [2024-12-16 12:55:59.531472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.934 [2024-12-16 12:55:59.531479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:49.934 [2024-12-16 12:55:59.531491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.934 [2024-12-16 12:55:59.531497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:49.934 [2024-12-16 12:55:59.531509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.934 [2024-12-16 12:55:59.531515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:49.934 [2024-12-16 12:55:59.531527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.934 [2024-12-16 12:55:59.531534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:112 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:49.934 [2024-12-16 12:55:59.531546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.934 [2024-12-16 12:55:59.531552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:49.934 [2024-12-16 12:55:59.531564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:2120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.934 [2024-12-16 12:55:59.531576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:49.934 [2024-12-16 12:55:59.531588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.934 [2024-12-16 12:55:59.531595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:49.934 [2024-12-16 12:55:59.531607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:2136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.934 [2024-12-16 12:55:59.531614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:49.934 [2024-12-16 12:55:59.531626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.934 [2024-12-16 12:55:59.531632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:49.934 [2024-12-16 12:55:59.531645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:2152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.934 [2024-12-16 12:55:59.531651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:49.934 [2024-12-16 12:55:59.531664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:2160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.934 [2024-12-16 12:55:59.531670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:49.934 [2024-12-16 12:55:59.531682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:2168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.934 [2024-12-16 12:55:59.531689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:49.934 [2024-12-16 12:55:59.531701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.934 [2024-12-16 12:55:59.531708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:49.934 [2024-12-16 12:55:59.531720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:2184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.934 [2024-12-16 12:55:59.531726] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:49.934 [2024-12-16 12:55:59.531738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.934 [2024-12-16 12:55:59.531745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.934 [2024-12-16 12:55:59.531757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.934 [2024-12-16 12:55:59.531764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.934 [2024-12-16 12:55:59.531775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.934 [2024-12-16 12:55:59.531782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:49.934 [2024-12-16 12:55:59.531794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.934 [2024-12-16 12:55:59.531801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:49.934 [2024-12-16 12:55:59.531814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.934 [2024-12-16 12:55:59.531821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:49.934 [2024-12-16 12:55:59.531833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.934 [2024-12-16 12:55:59.531840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:49.934 [2024-12-16 12:55:59.531852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.934 [2024-12-16 12:55:59.531859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:49.934 [2024-12-16 12:55:59.531871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:2248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.934 [2024-12-16 12:55:59.531877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:49.934 [2024-12-16 12:55:59.531890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.934 [2024-12-16 12:55:59.531896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:49.934 [2024-12-16 12:55:59.531908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:2264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.934 
[2024-12-16 12:55:59.531915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:49.934 [2024-12-16 12:55:59.531927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.934 [2024-12-16 12:55:59.531934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:49.934 [2024-12-16 12:55:59.531946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:2280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.934 [2024-12-16 12:55:59.531952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:49.934 [2024-12-16 12:55:59.531964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:1720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.934 [2024-12-16 12:55:59.531971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:49.935 [2024-12-16 12:55:59.531983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.935 [2024-12-16 12:55:59.531990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:49.935 [2024-12-16 12:55:59.532002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.935 [2024-12-16 12:55:59.532008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:49.935 [2024-12-16 12:55:59.532020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:1744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.935 [2024-12-16 12:55:59.532027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:49.935 [2024-12-16 12:55:59.532040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.935 [2024-12-16 12:55:59.532047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:49.935 [2024-12-16 12:55:59.532059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.935 [2024-12-16 12:55:59.532066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:49.935 [2024-12-16 12:55:59.532570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:1768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.935 [2024-12-16 12:55:59.532582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:49.935 [2024-12-16 12:55:59.532595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2288 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.935 [2024-12-16 12:55:59.532602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:49.935 [2024-12-16 12:55:59.532615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.935 [2024-12-16 12:55:59.532621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:49.935 [2024-12-16 12:55:59.532634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.935 [2024-12-16 12:55:59.532640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:49.935 [2024-12-16 12:55:59.532653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.935 [2024-12-16 12:55:59.532659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:49.935 [2024-12-16 12:55:59.532671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.935 [2024-12-16 12:55:59.532678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:49.935 [2024-12-16 12:55:59.532690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.935 [2024-12-16 12:55:59.532696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:49.935 [2024-12-16 12:55:59.532708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.935 [2024-12-16 12:55:59.532715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:49.935 [2024-12-16 12:55:59.532728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:2344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.935 [2024-12-16 12:55:59.532734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:49.935 [2024-12-16 12:55:59.532746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:2352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.935 [2024-12-16 12:55:59.532753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:49.935 [2024-12-16 12:55:59.532768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.935 [2024-12-16 12:55:59.532778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:49.935 [2024-12-16 12:55:59.532790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:2 nsid:1 lba:2368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.935 [2024-12-16 12:55:59.532796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:49.935 [2024-12-16 12:55:59.532808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:2376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.935 [2024-12-16 12:55:59.532815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:49.935 [2024-12-16 12:55:59.532827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:2384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.935 [2024-12-16 12:55:59.532834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:49.935 [2024-12-16 12:55:59.532846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.935 [2024-12-16 12:55:59.532852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:49.935 [2024-12-16 12:55:59.532865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.935 [2024-12-16 12:55:59.532871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:49.935 [2024-12-16 12:55:59.532883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.935 [2024-12-16 12:55:59.532889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:49.935 [2024-12-16 12:55:59.532901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.935 [2024-12-16 12:55:59.532908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:49.935 [2024-12-16 12:55:59.532920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:2424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.935 [2024-12-16 12:55:59.532926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:49.935 [2024-12-16 12:55:59.532939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.935 [2024-12-16 12:55:59.532945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:49.935 [2024-12-16 12:55:59.532957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.935 [2024-12-16 12:55:59.532964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:49.935 [2024-12-16 12:55:59.532976] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.935 [2024-12-16 12:55:59.532982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:49.935 [2024-12-16 12:55:59.532994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.935 [2024-12-16 12:55:59.533001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:49.935 [2024-12-16 12:55:59.533014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.935 [2024-12-16 12:55:59.533020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:49.935 [2024-12-16 12:55:59.533033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:2472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.935 [2024-12-16 12:55:59.533039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:49.935 [2024-12-16 12:55:59.533051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.935 [2024-12-16 12:55:59.533057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:49.935 [2024-12-16 12:55:59.533071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.935 [2024-12-16 12:55:59.533078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:49.935 [2024-12-16 12:55:59.533090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:2496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.935 [2024-12-16 12:55:59.533096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:49.935 [2024-12-16 12:55:59.533108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.935 [2024-12-16 12:55:59.533121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:49.935 [2024-12-16 12:55:59.533133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.935 [2024-12-16 12:55:59.533140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:49.935 [2024-12-16 12:55:59.533152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.935 [2024-12-16 12:55:59.533158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:49.935 
[2024-12-16 12:55:59.533170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.935 [2024-12-16 12:55:59.533177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:49.935 [2024-12-16 12:55:59.533189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.935 [2024-12-16 12:55:59.533196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:49.935 [2024-12-16 12:55:59.533208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:2544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.935 [2024-12-16 12:55:59.533214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:49.935 [2024-12-16 12:55:59.533226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.936 [2024-12-16 12:55:59.533233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:49.936 [2024-12-16 12:55:59.533246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.936 [2024-12-16 12:55:59.533252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:49.936 [2024-12-16 12:55:59.533265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:2568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.936 [2024-12-16 12:55:59.533271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:49.936 [2024-12-16 12:55:59.533283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.936 [2024-12-16 12:55:59.533290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:49.936 [2024-12-16 12:55:59.533302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:2584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.936 [2024-12-16 12:55:59.533308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:49.936 [2024-12-16 12:55:59.533320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:2592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.936 [2024-12-16 12:55:59.533327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:49.936 [2024-12-16 12:55:59.533338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.936 [2024-12-16 12:55:59.533345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 
cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:49.936 [2024-12-16 12:55:59.533357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.936 [2024-12-16 12:55:59.533363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:49.936 [2024-12-16 12:55:59.533376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:2616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.936 [2024-12-16 12:55:59.533382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:49.936 [2024-12-16 12:55:59.533394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.936 [2024-12-16 12:55:59.533401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:49.936 [2024-12-16 12:55:59.533413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.936 [2024-12-16 12:55:59.533419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:49.936 [2024-12-16 12:55:59.533431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.936 [2024-12-16 12:55:59.533437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:49.936 [2024-12-16 12:55:59.533449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:1784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.936 [2024-12-16 12:55:59.533456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:49.936 [2024-12-16 12:55:59.533468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:1792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.936 [2024-12-16 12:55:59.533476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:49.936 [2024-12-16 12:55:59.533488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:1800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.936 [2024-12-16 12:55:59.533495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:49.936 [2024-12-16 12:55:59.533507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.936 [2024-12-16 12:55:59.533513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:49.936 [2024-12-16 12:55:59.533525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.936 [2024-12-16 12:55:59.533532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:49.936 [2024-12-16 12:55:59.533543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:1824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.936 [2024-12-16 12:55:59.533550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:49.936 [2024-12-16 12:55:59.533562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.936 [2024-12-16 12:55:59.533568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:49.936 [2024-12-16 12:55:59.533580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.936 [2024-12-16 12:55:59.533587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:49.936 [2024-12-16 12:55:59.533599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.936 [2024-12-16 12:55:59.533605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:49.936 [2024-12-16 12:55:59.533617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.936 [2024-12-16 12:55:59.533623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:49.936 [2024-12-16 12:55:59.533636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.936 [2024-12-16 12:55:59.533642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:49.936 [2024-12-16 12:55:59.534148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:2664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.936 [2024-12-16 12:55:59.534161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:49.936 [2024-12-16 12:55:59.534178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:2672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.936 [2024-12-16 12:55:59.534186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:49.936 [2024-12-16 12:55:59.534200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.936 [2024-12-16 12:55:59.534210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:49.936 [2024-12-16 12:55:59.534225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.936 [2024-12-16 12:55:59.534232] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:49.936 [2024-12-16 12:55:59.534247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:2696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.936 [2024-12-16 12:55:59.534254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:49.936 [2024-12-16 12:55:59.534270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.936 [2024-12-16 12:55:59.534278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:49.936 [2024-12-16 12:55:59.534292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:2712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.936 [2024-12-16 12:55:59.534299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:49.936 [2024-12-16 12:55:59.534314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:2720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.936 [2024-12-16 12:55:59.534321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:49.936 [2024-12-16 12:55:59.534335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:2728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.936 [2024-12-16 12:55:59.534343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:49.936 [2024-12-16 12:55:59.534357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:1848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.936 [2024-12-16 12:55:59.534365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:49.936 [2024-12-16 12:55:59.534379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.936 [2024-12-16 12:55:59.534387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:49.936 [2024-12-16 12:55:59.534401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.936 [2024-12-16 12:55:59.534408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:49.936 [2024-12-16 12:55:59.534423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:1872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.936 [2024-12-16 12:55:59.534430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:49.936 [2024-12-16 12:55:59.534444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:1880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.936 
[2024-12-16 12:55:59.534452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:49.936 [2024-12-16 12:55:59.534466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.936 [2024-12-16 12:55:59.534473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:49.936 [2024-12-16 12:55:59.534489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.936 [2024-12-16 12:55:59.534496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:49.936 [2024-12-16 12:55:59.534511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.936 [2024-12-16 12:55:59.534518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:49.937 [2024-12-16 12:55:59.534533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.937 [2024-12-16 12:55:59.534540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:49.937 [2024-12-16 12:55:59.534554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:1920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.937 [2024-12-16 12:55:59.534561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:49.937 [2024-12-16 12:55:59.534576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.937 [2024-12-16 12:55:59.534583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:49.937 [2024-12-16 12:55:59.534597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.937 [2024-12-16 12:55:59.534605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:49.937 [2024-12-16 12:55:59.534619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:1944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.937 [2024-12-16 12:55:59.534627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:49.937 [2024-12-16 12:55:59.534642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.937 [2024-12-16 12:55:59.534649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:49.937 [2024-12-16 12:55:59.534663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:1960 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.937 [2024-12-16 12:55:59.534670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:49.937 [2024-12-16 12:55:59.534685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:1968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.937 [2024-12-16 12:55:59.534692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:49.937 [2024-12-16 12:55:59.534706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.937 [2024-12-16 12:55:59.534714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:49.937 [2024-12-16 12:55:59.534728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.937 [2024-12-16 12:55:59.534735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:49.937 [2024-12-16 12:55:59.534751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.937 [2024-12-16 12:55:59.534759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:49.937 [2024-12-16 12:55:59.534773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.937 [2024-12-16 12:55:59.534781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:49.937 [2024-12-16 12:55:59.534795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.937 [2024-12-16 12:55:59.534803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:49.937 [2024-12-16 12:55:59.534817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.937 [2024-12-16 12:55:59.534824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:49.937 [2024-12-16 12:55:59.534838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.937 [2024-12-16 12:55:59.534846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:49.937 [2024-12-16 12:55:59.534860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:2736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.937 [2024-12-16 12:55:59.534868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:49.937 [2024-12-16 12:55:59.534882] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.937 [2024-12-16 12:55:59.534890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006c p:0 m:0 dnr:0
[... repeated NOTICE pairs condensed: nvme_qpair.c: 243:nvme_io_qpair_print_command READ/WRITE sqid:1 nsid:1 lba:1720-2736 len:8, each followed by nvme_qpair.c: 474:spdk_nvme_print_completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0; elapsed 00:34:49.937-00:34:49.942, wall clock 2024-12-16 12:55:59.534890-12:55:59.546611 ...]
00:34:49.942 [2024-12-16 12:55:59.546611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE
sqid:1 cid:62 nsid:1 lba:2608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.942 [2024-12-16 12:55:59.546625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:49.942 [2024-12-16 12:55:59.546644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.942 [2024-12-16 12:55:59.546654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:49.942 [2024-12-16 12:55:59.546671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.942 [2024-12-16 12:55:59.546680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:49.942 [2024-12-16 12:55:59.546697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.942 [2024-12-16 12:55:59.546706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:49.942 [2024-12-16 12:55:59.546723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.942 [2024-12-16 12:55:59.546732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:49.942 [2024-12-16 12:55:59.546749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.943 [2024-12-16 12:55:59.546759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:49.943 [2024-12-16 12:55:59.546776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.943 [2024-12-16 12:55:59.546785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:49.943 [2024-12-16 12:55:59.546802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:1800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.943 [2024-12-16 12:55:59.546811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:49.943 [2024-12-16 12:55:59.546831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.943 [2024-12-16 12:55:59.546840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:49.943 [2024-12-16 12:55:59.546857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.943 [2024-12-16 12:55:59.546866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:49.943 [2024-12-16 12:55:59.546883] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:1824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.943 [2024-12-16 12:55:59.546892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:49.943 [2024-12-16 12:55:59.546908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:1832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.943 [2024-12-16 12:55:59.546917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:49.943 [2024-12-16 12:55:59.546935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:1840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.943 [2024-12-16 12:55:59.546944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:49.943 [2024-12-16 12:55:59.546961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:2640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.943 [2024-12-16 12:55:59.546970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:49.943 [2024-12-16 12:55:59.547711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:2648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.943 [2024-12-16 12:55:59.547730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:49.943 [2024-12-16 12:55:59.547752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:2656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.943 [2024-12-16 12:55:59.547763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:49.943 [2024-12-16 12:55:59.547782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.943 [2024-12-16 12:55:59.547792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:49.943 [2024-12-16 12:55:59.547812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.943 [2024-12-16 12:55:59.547821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:49.943 [2024-12-16 12:55:59.547838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:2680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.943 [2024-12-16 12:55:59.547848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:49.943 [2024-12-16 12:55:59.547867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.943 [2024-12-16 12:55:59.547876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:49.943 
[2024-12-16 12:55:59.547893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:2696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.943 [2024-12-16 12:55:59.547906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:49.943 [2024-12-16 12:55:59.547922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:2704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.943 [2024-12-16 12:55:59.547931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:49.943 [2024-12-16 12:55:59.547948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:2712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.943 [2024-12-16 12:55:59.547957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:49.943 [2024-12-16 12:55:59.547975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.943 [2024-12-16 12:55:59.547983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:49.943 [2024-12-16 12:55:59.548000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.943 [2024-12-16 12:55:59.548009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:49.943 [2024-12-16 12:55:59.548026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.943 [2024-12-16 12:55:59.548036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:49.943 [2024-12-16 12:55:59.548053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:1856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.943 [2024-12-16 12:55:59.548062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:49.943 [2024-12-16 12:55:59.548079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:1864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.943 [2024-12-16 12:55:59.548088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:49.943 [2024-12-16 12:55:59.548105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.943 [2024-12-16 12:55:59.548122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:49.943 [2024-12-16 12:55:59.548143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.943 [2024-12-16 12:55:59.548152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 
sqhd:0058 p:0 m:0 dnr:0 00:34:49.943 [2024-12-16 12:55:59.548169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.943 [2024-12-16 12:55:59.548178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:49.943 [2024-12-16 12:55:59.548195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.943 [2024-12-16 12:55:59.548204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:49.943 [2024-12-16 12:55:59.548221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:1904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.943 [2024-12-16 12:55:59.548232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:49.943 [2024-12-16 12:55:59.548250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.943 [2024-12-16 12:55:59.548259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:49.943 [2024-12-16 12:55:59.548276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.943 [2024-12-16 12:55:59.548285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:49.943 [2024-12-16 12:55:59.548303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:1928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.943 [2024-12-16 12:55:59.548312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:49.943 [2024-12-16 12:55:59.548329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.943 [2024-12-16 12:55:59.548339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:49.943 [2024-12-16 12:55:59.548356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:1944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.943 [2024-12-16 12:55:59.548365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:49.943 [2024-12-16 12:55:59.548382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:1952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.943 [2024-12-16 12:55:59.548392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:49.943 [2024-12-16 12:55:59.548409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.943 [2024-12-16 12:55:59.548418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:49.943 [2024-12-16 12:55:59.548436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.943 [2024-12-16 12:55:59.548445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:49.943 [2024-12-16 12:55:59.548462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.943 [2024-12-16 12:55:59.548472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:49.943 [2024-12-16 12:55:59.548489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:1984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.943 [2024-12-16 12:55:59.548498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:49.943 [2024-12-16 12:55:59.548516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:1992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.943 [2024-12-16 12:55:59.548527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:49.943 [2024-12-16 12:55:59.548545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.943 [2024-12-16 12:55:59.548554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:49.944 [2024-12-16 12:55:59.548574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.944 [2024-12-16 12:55:59.548583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:49.944 [2024-12-16 12:55:59.548603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:2016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.944 [2024-12-16 12:55:59.548613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:49.944 [2024-12-16 12:55:59.548629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.944 [2024-12-16 12:55:59.548640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:49.944 [2024-12-16 12:55:59.548658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:2736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.944 [2024-12-16 12:55:59.548667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:49.944 [2024-12-16 12:55:59.548687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.944 [2024-12-16 12:55:59.548697] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:49.944 [2024-12-16 12:55:59.548714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:2040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.944 [2024-12-16 12:55:59.548725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:49.944 [2024-12-16 12:55:59.548742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.944 [2024-12-16 12:55:59.548751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:49.944 [2024-12-16 12:55:59.548768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.944 [2024-12-16 12:55:59.548778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:49.944 [2024-12-16 12:55:59.548795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.944 [2024-12-16 12:55:59.548805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:49.944 [2024-12-16 12:55:59.548823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:2072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.944 [2024-12-16 12:55:59.548832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:49.944 [2024-12-16 12:55:59.548849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:2080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.944 [2024-12-16 12:55:59.548860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:49.944 [2024-12-16 12:55:59.548878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:2088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.944 [2024-12-16 12:55:59.548889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:49.944 [2024-12-16 12:55:59.548908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:2096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.944 [2024-12-16 12:55:59.548919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:49.944 [2024-12-16 12:55:59.548938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:2104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.944 [2024-12-16 12:55:59.548948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:49.944 [2024-12-16 12:55:59.548966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:49.944 [2024-12-16 12:55:59.548975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:49.944 [2024-12-16 12:55:59.548992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:2120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.944 [2024-12-16 12:55:59.549002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:49.944 [2024-12-16 12:55:59.549019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.944 [2024-12-16 12:55:59.549028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:49.944 [2024-12-16 12:55:59.549045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.944 [2024-12-16 12:55:59.549054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:49.944 [2024-12-16 12:55:59.549071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.944 [2024-12-16 12:55:59.549080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:49.944 [2024-12-16 12:55:59.549097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:2152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.944 [2024-12-16 12:55:59.549108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:49.944 [2024-12-16 12:55:59.549134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:2160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.944 [2024-12-16 12:55:59.549143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:49.944 [2024-12-16 12:55:59.549161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:2168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.944 [2024-12-16 12:55:59.549170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:49.944 [2024-12-16 12:55:59.549187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:2176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.944 [2024-12-16 12:55:59.549196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:49.944 [2024-12-16 12:55:59.549213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.944 [2024-12-16 12:55:59.549222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:49.944 [2024-12-16 12:55:59.549238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 
lba:2192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.944 [2024-12-16 12:55:59.549250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.944 [2024-12-16 12:55:59.549267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:2200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.944 [2024-12-16 12:55:59.549276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.944 [2024-12-16 12:55:59.549293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:2208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.944 [2024-12-16 12:55:59.549302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:49.944 [2024-12-16 12:55:59.549319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:2216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.944 [2024-12-16 12:55:59.549328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:49.944 [2024-12-16 12:55:59.549345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.944 [2024-12-16 12:55:59.549355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:49.944 [2024-12-16 12:55:59.549373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:2232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.944 [2024-12-16 12:55:59.549382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:49.944 [2024-12-16 12:55:59.549400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:2240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.944 [2024-12-16 12:55:59.549410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:49.944 [2024-12-16 12:55:59.549427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:2248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.944 [2024-12-16 12:55:59.549436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:49.944 [2024-12-16 12:55:59.549453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.944 [2024-12-16 12:55:59.549463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:49.944 [2024-12-16 12:55:59.550108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:2264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.944 [2024-12-16 12:55:59.550130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:49.944 [2024-12-16 12:55:59.550150] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.944 [2024-12-16 12:55:59.550159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:49.944 [2024-12-16 12:55:59.550177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:2280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.944 [2024-12-16 12:55:59.550187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:49.944 [2024-12-16 12:55:59.550205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:1720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.944 [2024-12-16 12:55:59.550217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:49.944 [2024-12-16 12:55:59.550234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.944 [2024-12-16 12:55:59.550244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:49.944 [2024-12-16 12:55:59.550261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.944 [2024-12-16 12:55:59.550270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:49.944 [2024-12-16 12:55:59.550287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:1744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.944 [2024-12-16 12:55:59.550297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:49.945 [2024-12-16 12:55:59.550314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.945 [2024-12-16 12:55:59.550323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:49.945 [2024-12-16 12:55:59.550340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.945 [2024-12-16 12:55:59.550349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:49.945 [2024-12-16 12:55:59.550366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.945 [2024-12-16 12:55:59.550375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:49.945 [2024-12-16 12:55:59.550393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.945 [2024-12-16 12:55:59.550402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 
00:34:49.945 [2024-12-16 12:55:59.550419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.945 [2024-12-16 12:55:59.550428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:49.945 [2024-12-16 12:55:59.550445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.945 [2024-12-16 12:55:59.550454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:49.945 [2024-12-16 12:55:59.550471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:2312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.945 [2024-12-16 12:55:59.550480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:49.945 [2024-12-16 12:55:59.550497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.945 [2024-12-16 12:55:59.550507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:49.945 [2024-12-16 12:55:59.550524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.945 [2024-12-16 12:55:59.550533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:49.945 [2024-12-16 12:55:59.550554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:2336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.945 [2024-12-16 12:55:59.550565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:49.945 [2024-12-16 12:55:59.550582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.945 [2024-12-16 12:55:59.550591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:49.945 [2024-12-16 12:55:59.550608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.945 [2024-12-16 12:55:59.550617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:49.945 [2024-12-16 12:55:59.550634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.945 [2024-12-16 12:55:59.550644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:49.945 [2024-12-16 12:55:59.550660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:2368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.945 [2024-12-16 12:55:59.550669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:40 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:49.945 [2024-12-16 12:55:59.550686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.945 [2024-12-16 12:55:59.550695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:49.945 [2024-12-16 12:55:59.550712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.945 [2024-12-16 12:55:59.550721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:49.945 [2024-12-16 12:55:59.550738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:2392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.945 [2024-12-16 12:55:59.550747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:49.945 [2024-12-16 12:55:59.550764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:2400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.945 [2024-12-16 12:55:59.550774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:49.945 [2024-12-16 12:55:59.550790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.945 [2024-12-16 12:55:59.550799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:49.945 [2024-12-16 12:55:59.550816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:2416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.945 [2024-12-16 12:55:59.550825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:49.945 [2024-12-16 12:55:59.550842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.945 [2024-12-16 12:55:59.550851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:49.945 [2024-12-16 12:55:59.550870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.945 [2024-12-16 12:55:59.550879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:49.945 [2024-12-16 12:55:59.550896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:2440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.945 [2024-12-16 12:55:59.550905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:49.945 [2024-12-16 12:55:59.550922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.945 [2024-12-16 12:55:59.550931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:49.945 [2024-12-16 12:55:59.550948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.945 [2024-12-16 12:55:59.550957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:49.945 [2024-12-16 12:55:59.550974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.945 [2024-12-16 12:55:59.550984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:49.945 [2024-12-16 12:55:59.551001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.945 [2024-12-16 12:55:59.551009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:49.945 [2024-12-16 12:55:59.551026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.945 [2024-12-16 12:55:59.551035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:49.945 [2024-12-16 12:55:59.551053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:2488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.945 [2024-12-16 12:55:59.551062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:49.945 [2024-12-16 12:55:59.551078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.945 [2024-12-16 12:55:59.551087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:49.945 [2024-12-16 12:55:59.551104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.945 [2024-12-16 12:55:59.551119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:49.945 [2024-12-16 12:55:59.551139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:2512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.945 [2024-12-16 12:55:59.551148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:49.945 [2024-12-16 12:55:59.551165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.945 [2024-12-16 12:55:59.551174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:49.945 [2024-12-16 12:55:59.551191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.945 [2024-12-16 12:55:59.551203] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:49.945 [2024-12-16 12:55:59.551220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.945 [2024-12-16 12:55:59.551229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:49.945 [2024-12-16 12:55:59.551246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.945 [2024-12-16 12:55:59.551255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:49.945 [2024-12-16 12:55:59.551272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.945 [2024-12-16 12:55:59.551281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:49.945 [2024-12-16 12:55:59.551298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:2560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.945 [2024-12-16 12:55:59.551307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:49.945 [2024-12-16 12:55:59.551324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.945 [2024-12-16 12:55:59.551334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:49.945 [2024-12-16 12:55:59.551351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.946 [2024-12-16 12:55:59.551360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:49.946 [2024-12-16 12:55:59.551377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.946 [2024-12-16 12:55:59.551386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:49.946 [2024-12-16 12:55:59.551403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.946 [2024-12-16 12:55:59.551412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:49.946 [2024-12-16 12:55:59.551429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:2600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.946 [2024-12-16 12:55:59.551438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:49.946 [2024-12-16 12:55:59.551455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:2608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.946 
[2024-12-16 12:55:59.551464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:34:49.946 [2024-12-16 12:55:59.551481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:49.946 [2024-12-16 12:55:59.551491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:34:49.946 [2024-12-16 12:55:59.551508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:49.946 [2024-12-16 12:55:59.551516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003d p:0 m:0 dnr:0
[… several hundred further nvme_qpair.c *NOTICE* log lines elided: alternating 243:nvme_io_qpair_print_command entries (READ lba:1720-2272 SGL TRANSPORT DATA BLOCK and WRITE lba:2280-2736 SGL DATA BLOCK OFFSET, all len:8 on sqid:1) and 474:spdk_nvme_print_completion entries, every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1; timestamps 12:55:59.551535 through 12:55:59.562400, Jenkins prefixes 00:34:49.946-00:34:49.951 …]
00:34:49.951 [2024-12-16 12:55:59.562899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:2272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:49.951 [2024-12-16 12:55:59.562909] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:49.951 [2024-12-16 12:55:59.562922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:2280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.951 [2024-12-16 12:55:59.562929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:49.951 [2024-12-16 12:55:59.562942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:1720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.951 [2024-12-16 12:55:59.562948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:49.951 [2024-12-16 12:55:59.562960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:1728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.951 [2024-12-16 12:55:59.562967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:49.951 [2024-12-16 12:55:59.562979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.951 [2024-12-16 12:55:59.562988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:49.951 [2024-12-16 12:55:59.563000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.951 [2024-12-16 12:55:59.563007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:49.951 [2024-12-16 12:55:59.563020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:1752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.951 [2024-12-16 12:55:59.563027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:49.951 [2024-12-16 12:55:59.563039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.951 [2024-12-16 12:55:59.563046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:49.951 [2024-12-16 12:55:59.563058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.951 [2024-12-16 12:55:59.563066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:49.951 [2024-12-16 12:55:59.563079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.951 [2024-12-16 12:55:59.563085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:49.951 [2024-12-16 12:55:59.563097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:49.951 [2024-12-16 12:55:59.563104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:49.951 [2024-12-16 12:55:59.563123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.951 [2024-12-16 12:55:59.563131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:49.951 [2024-12-16 12:55:59.563143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.951 [2024-12-16 12:55:59.563150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:49.951 [2024-12-16 12:55:59.563161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:2320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.951 [2024-12-16 12:55:59.563168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:49.951 [2024-12-16 12:55:59.563180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.951 [2024-12-16 12:55:59.563186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:49.951 [2024-12-16 12:55:59.563198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.951 [2024-12-16 12:55:59.563205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:49.951 [2024-12-16 12:55:59.563217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:2344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.951 [2024-12-16 12:55:59.563227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:49.951 [2024-12-16 12:55:59.563239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.951 [2024-12-16 12:55:59.563245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:49.951 [2024-12-16 12:55:59.563257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.951 [2024-12-16 12:55:59.563264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:49.951 [2024-12-16 12:55:59.563276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.951 [2024-12-16 12:55:59.563282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:49.952 [2024-12-16 12:55:59.563294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:2376 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.952 [2024-12-16 12:55:59.563301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:49.952 [2024-12-16 12:55:59.563313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.952 [2024-12-16 12:55:59.563319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:49.952 [2024-12-16 12:55:59.563331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.952 [2024-12-16 12:55:59.563337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:49.952 [2024-12-16 12:55:59.563350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:2400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.952 [2024-12-16 12:55:59.563356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:49.952 [2024-12-16 12:55:59.563368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:2408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.952 [2024-12-16 12:55:59.563374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:49.952 [2024-12-16 12:55:59.563386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.952 [2024-12-16 12:55:59.563393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:49.952 [2024-12-16 12:55:59.563405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:2424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.952 [2024-12-16 12:55:59.563411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:49.952 [2024-12-16 12:55:59.563423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.952 [2024-12-16 12:55:59.563430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:49.952 [2024-12-16 12:55:59.563442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.952 [2024-12-16 12:55:59.563448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:49.952 [2024-12-16 12:55:59.563461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:2448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.952 [2024-12-16 12:55:59.563468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:49.952 [2024-12-16 12:55:59.563480] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.952 [2024-12-16 12:55:59.563487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:49.952 [2024-12-16 12:55:59.563499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.952 [2024-12-16 12:55:59.563505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:49.952 [2024-12-16 12:55:59.563517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.952 [2024-12-16 12:55:59.563524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:49.952 [2024-12-16 12:55:59.563536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.952 [2024-12-16 12:55:59.563542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:49.952 [2024-12-16 12:55:59.563556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.952 [2024-12-16 12:55:59.563562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:49.952 [2024-12-16 12:55:59.563574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:2496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.952 [2024-12-16 12:55:59.563581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:49.952 [2024-12-16 12:55:59.563593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.952 [2024-12-16 12:55:59.563600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:49.952 [2024-12-16 12:55:59.563612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.952 [2024-12-16 12:55:59.563618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:49.952 [2024-12-16 12:55:59.563631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:2520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.952 [2024-12-16 12:55:59.563637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:49.952 [2024-12-16 12:55:59.563649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.952 [2024-12-16 12:55:59.563656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:49.952 [2024-12-16 12:55:59.563668] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.952 [2024-12-16 12:55:59.563674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:49.952 [2024-12-16 12:55:59.563688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.952 [2024-12-16 12:55:59.563696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:49.952 [2024-12-16 12:55:59.563708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.952 [2024-12-16 12:55:59.563714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:49.952 [2024-12-16 12:55:59.563726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.952 [2024-12-16 12:55:59.563733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:49.952 [2024-12-16 12:55:59.563745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:2568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.952 [2024-12-16 12:55:59.563752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:49.952 [2024-12-16 12:55:59.563764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.952 [2024-12-16 12:55:59.563770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:49.952 [2024-12-16 12:55:59.563782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.952 [2024-12-16 12:55:59.563788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:49.952 [2024-12-16 12:55:59.563800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.952 [2024-12-16 12:55:59.563807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:49.952 [2024-12-16 12:55:59.563819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.952 [2024-12-16 12:55:59.563825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:49.952 [2024-12-16 12:55:59.563837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:2608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.952 [2024-12-16 12:55:59.563844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003b p:0 m:0 dnr:0 
00:34:49.952 [2024-12-16 12:55:59.563858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:2616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.952 [2024-12-16 12:55:59.563864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:49.952 [2024-12-16 12:55:59.563876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.952 [2024-12-16 12:55:59.563883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:49.952 [2024-12-16 12:55:59.563895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.952 [2024-12-16 12:55:59.563901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:49.952 [2024-12-16 12:55:59.563914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:1776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.952 [2024-12-16 12:55:59.563921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:49.952 [2024-12-16 12:55:59.563933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:1784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.952 [2024-12-16 12:55:59.563940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:49.952 [2024-12-16 12:55:59.563952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:1792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.952 [2024-12-16 12:55:59.563958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:49.952 [2024-12-16 12:55:59.563970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.952 [2024-12-16 12:55:59.563977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:49.952 [2024-12-16 12:55:59.563989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:1808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.952 [2024-12-16 12:55:59.563995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:49.952 [2024-12-16 12:55:59.568876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.952 [2024-12-16 12:55:59.568886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:49.952 [2024-12-16 12:55:59.568901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.952 [2024-12-16 12:55:59.568908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:8 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:49.953 [2024-12-16 12:55:59.569449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.953 [2024-12-16 12:55:59.569463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:49.953 [2024-12-16 12:55:59.569479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:1840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.953 [2024-12-16 12:55:59.569487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:49.953 [2024-12-16 12:55:59.569501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:2640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.953 [2024-12-16 12:55:59.569508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:49.953 [2024-12-16 12:55:59.569521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:2648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.953 [2024-12-16 12:55:59.569528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:49.953 [2024-12-16 12:55:59.569542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.953 [2024-12-16 12:55:59.569549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:49.953 [2024-12-16 12:55:59.569562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:2664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.953 [2024-12-16 12:55:59.569572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:49.953 [2024-12-16 12:55:59.569586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:2672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.953 [2024-12-16 12:55:59.569593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:49.953 [2024-12-16 12:55:59.569606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.953 [2024-12-16 12:55:59.569613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:49.953 [2024-12-16 12:55:59.569627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:2688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.953 [2024-12-16 12:55:59.569634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:49.953 [2024-12-16 12:55:59.569647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:2696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.953 [2024-12-16 12:55:59.569654] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:49.953 [2024-12-16 12:55:59.569667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:2704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.953 [2024-12-16 12:55:59.569674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:49.953 [2024-12-16 12:55:59.569687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:2712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.953 [2024-12-16 12:55:59.569694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:49.953 [2024-12-16 12:55:59.569707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:2720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.953 [2024-12-16 12:55:59.569714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:49.953 [2024-12-16 12:55:59.569727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.953 [2024-12-16 12:55:59.569734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:49.953 [2024-12-16 12:55:59.569747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.953 [2024-12-16 12:55:59.569755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:49.953 [2024-12-16 12:55:59.569768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.953 [2024-12-16 12:55:59.569775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:49.953 [2024-12-16 12:55:59.569788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.953 [2024-12-16 12:55:59.569795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:49.953 [2024-12-16 12:55:59.569808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.953 [2024-12-16 12:55:59.569815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:49.953 [2024-12-16 12:55:59.569829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.953 [2024-12-16 12:55:59.569837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:49.953 [2024-12-16 12:55:59.569850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.953 [2024-12-16 12:55:59.569857] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:49.953 [2024-12-16 12:55:59.569870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.953 [2024-12-16 12:55:59.569877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:49.953 [2024-12-16 12:55:59.569890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:1904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.953 [2024-12-16 12:55:59.569897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:49.953 [2024-12-16 12:55:59.569911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.953 [2024-12-16 12:55:59.569918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:49.953 [2024-12-16 12:55:59.569930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:1920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.953 [2024-12-16 12:55:59.569938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:49.953 [2024-12-16 12:55:59.569951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.953 [2024-12-16 12:55:59.569958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:49.953 [2024-12-16 12:55:59.569971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.953 [2024-12-16 12:55:59.569978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:49.953 [2024-12-16 12:55:59.569991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.953 [2024-12-16 12:55:59.569998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:49.953 [2024-12-16 12:55:59.570011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.953 [2024-12-16 12:55:59.570018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:49.953 [2024-12-16 12:55:59.570032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.953 [2024-12-16 12:55:59.570039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:49.953 [2024-12-16 12:55:59.570053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:1968 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:34:49.953 [2024-12-16 12:55:59.570060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:49.953 [2024-12-16 12:55:59.570075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:1976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.953 [2024-12-16 12:55:59.570083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:49.953 [2024-12-16 12:55:59.570096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.953 [2024-12-16 12:55:59.570103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:49.953 [2024-12-16 12:55:59.570123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.953 [2024-12-16 12:55:59.570131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:49.953 [2024-12-16 12:55:59.570145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.953 [2024-12-16 12:55:59.570152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:49.953 [2024-12-16 12:55:59.570165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.953 [2024-12-16 12:55:59.570173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:49.953 [2024-12-16 12:55:59.570186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.953 [2024-12-16 12:55:59.570193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:49.953 [2024-12-16 12:55:59.570206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.953 [2024-12-16 12:55:59.570213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:49.953 [2024-12-16 12:55:59.570226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:2736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.953 [2024-12-16 12:55:59.570233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:49.953 [2024-12-16 12:55:59.570246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.953 [2024-12-16 12:55:59.570253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:49.953 [2024-12-16 12:55:59.570266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 
nsid:1 lba:2040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.953 [2024-12-16 12:55:59.570275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:49.954 [2024-12-16 12:55:59.570288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.954 [2024-12-16 12:55:59.570295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:49.954 [2024-12-16 12:55:59.570308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:2056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.954 [2024-12-16 12:55:59.570315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:49.954 [2024-12-16 12:55:59.570328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:2064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.954 [2024-12-16 12:55:59.570337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:49.954 [2024-12-16 12:55:59.570350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:2072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.954 [2024-12-16 12:55:59.570357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:49.954 [2024-12-16 12:55:59.570370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.954 [2024-12-16 12:55:59.570377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:49.954 [2024-12-16 12:55:59.570391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:2088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.954 [2024-12-16 12:55:59.570398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:49.954 [2024-12-16 12:55:59.570411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.954 [2024-12-16 12:55:59.570417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:49.954 [2024-12-16 12:55:59.570431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.954 [2024-12-16 12:55:59.570438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:49.954 [2024-12-16 12:55:59.570451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.954 [2024-12-16 12:55:59.570458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:49.954 [2024-12-16 12:55:59.570471] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.954 [2024-12-16 12:55:59.570478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:49.954 [2024-12-16 12:55:59.570491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.954 [2024-12-16 12:55:59.570498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:49.954 [2024-12-16 12:55:59.570512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.954 [2024-12-16 12:55:59.570519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:49.954 [2024-12-16 12:55:59.570532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.954 [2024-12-16 12:55:59.570539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:49.954 [2024-12-16 12:55:59.570552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:2152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.954 [2024-12-16 12:55:59.570559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:49.954 [2024-12-16 12:55:59.570572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.954 [2024-12-16 12:55:59.570579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:49.954 [2024-12-16 12:55:59.570594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:2168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.954 [2024-12-16 12:55:59.570601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:49.954 [2024-12-16 12:55:59.570614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.954 [2024-12-16 12:55:59.570621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:49.954 [2024-12-16 12:55:59.570634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:2184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.954 [2024-12-16 12:55:59.570642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:49.954 [2024-12-16 12:55:59.570655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:2192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.954 [2024-12-16 12:55:59.570662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.954 
[2024-12-16 12:55:59.570675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:2200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.954 [2024-12-16 12:55:59.570682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.954 [2024-12-16 12:55:59.570695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.954 [2024-12-16 12:55:59.570702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:49.954 [2024-12-16 12:55:59.570715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.954 [2024-12-16 12:55:59.570722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:49.954 [2024-12-16 12:55:59.570736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.954 [2024-12-16 12:55:59.570745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:49.954 [2024-12-16 12:55:59.570758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:2232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.954 [2024-12-16 12:55:59.570766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:49.954 [2024-12-16 12:55:59.570780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:2240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.954 [2024-12-16 12:55:59.570787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:49.954 [2024-12-16 12:55:59.570800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:2248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.954 [2024-12-16 12:55:59.570808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:49.954 [2024-12-16 12:55:59.570822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:2256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.954 [2024-12-16 12:55:59.570830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:49.954 [2024-12-16 12:55:59.571366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.954 [2024-12-16 12:55:59.571377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:49.954 [2024-12-16 12:55:59.571392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:2272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.954 [2024-12-16 12:55:59.571399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 
00:34:49.954 [... long run of repeated nvme_qpair.c NOTICE pairs elided, 12:55:59.571412 through 12:55:59.574516: each pair is a 243:nvme_io_qpair_print_command line followed by a 474:spdk_nvme_print_completion line on qid:1, covering READ commands (nsid:1, len:8, lba 1720-2256, SGL TRANSPORT DATA BLOCK) and WRITE commands (nsid:1, len:8, lba 2280-2736, SGL DATA BLOCK OFFSET, len:0x1000); every completion reports ASYMMETRIC ACCESS INACCESSIBLE (03/02) with cdw0:0 p:0 m:0 dnr:0 ...]
00:34:49.958 11359.31 IOPS, 44.37 MiB/s [2024-12-16T11:56:16.025Z]
10547.93 IOPS, 41.20 MiB/s [2024-12-16T11:56:16.025Z]
9844.73 IOPS, 38.46 MiB/s [2024-12-16T11:56:16.025Z]
9407.56 IOPS, 36.75 MiB/s [2024-12-16T11:56:16.025Z]
9529.65 IOPS, 37.23 MiB/s [2024-12-16T11:56:16.025Z]
9645.72 IOPS, 37.68 MiB/s [2024-12-16T11:56:16.025Z]
9829.05 IOPS, 38.39 MiB/s [2024-12-16T11:56:16.025Z]
10014.95 IOPS, 39.12 MiB/s [2024-12-16T11:56:16.025Z]
10169.67 IOPS, 39.73 MiB/s [2024-12-16T11:56:16.025Z]
10227.41 IOPS, 39.95 MiB/s [2024-12-16T11:56:16.025Z]
10285.09 IOPS, 40.18 MiB/s [2024-12-16T11:56:16.025Z]
10353.79 IOPS, 40.44 MiB/s [2024-12-16T11:56:16.025Z]
10475.36 IOPS, 40.92 MiB/s [2024-12-16T11:56:16.025Z]
10586.81 IOPS, 41.35 MiB/s [2024-12-16T11:56:16.025Z]
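The NOTICE pairs collapsed above (and in the second burst below) share one fixed line shape, which makes the aborted-I/O pattern easy to tally after the fact. The following Python sketch is illustrative only, not part of the autotest output; it assumes a saved log laid out exactly like these lines (try.txt is the scratch file this test removes during cleanup, used here only as a plausible file name):

    import re
    from collections import Counter

    # Matches the 474:spdk_nvme_print_completion NOTICE lines seen in this log, e.g.
    #   ... *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0
    COMPLETION_RE = re.compile(
        r"\*NOTICE\*: (?P<status>.+?) "
        r"\((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\) "
        r"qid:(?P<qid>\d+) cid:(?P<cid>\d+)"
    )

    def tally_completions(path):
        # Count completions per (status string, sct/sc code) pair.
        counts = Counter()
        with open(path) as log:
            for line in log:
                m = COMPLETION_RE.search(line)
                if m:
                    counts[(m["status"], m["sct"] + "/" + m["sc"])] += 1
        return counts

    # Usage (hypothetical path):
    #   tally_completions("try.txt")
    #   -> Counter({('ASYMMETRIC ACCESS INACCESSIBLE', '03/02'): <count>})

Run against the spans elided here, every completion lands in the single ('ASYMMETRIC ACCESS INACCESSIBLE', '03/02') bucket, consistent with the active path being reported ANA-inaccessible during this multipath status test.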
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:13048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.958 [2024-12-16 12:56:13.229360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:49.958 [2024-12-16 12:56:13.229406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:13080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.958 [2024-12-16 12:56:13.229415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:49.958 [2024-12-16 12:56:13.229428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:13384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.958 [2024-12-16 12:56:13.229435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:49.958 [2024-12-16 12:56:13.229447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.958 [2024-12-16 12:56:13.229454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:49.958 [2024-12-16 12:56:13.229466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.958 [2024-12-16 12:56:13.229472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:49.958 [2024-12-16 12:56:13.229485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:13432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.958 [2024-12-16 12:56:13.229496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:49.958 [2024-12-16 12:56:13.229508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.958 [2024-12-16 12:56:13.229516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:49.958 [2024-12-16 12:56:13.229528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:13464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.958 [2024-12-16 12:56:13.229534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:49.958 [2024-12-16 12:56:13.229546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:13480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.958 [2024-12-16 12:56:13.229553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:49.958 [2024-12-16 12:56:13.229565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:13496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.958 [2024-12-16 12:56:13.229571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0069 p:0 
m:0 dnr:0 00:34:49.958 [2024-12-16 12:56:13.229583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:13512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.958 [2024-12-16 12:56:13.229589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:49.958 [2024-12-16 12:56:13.229618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:13528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.958 [2024-12-16 12:56:13.229625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:49.958 [2024-12-16 12:56:13.229637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:13544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.958 [2024-12-16 12:56:13.229643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:49.958 [2024-12-16 12:56:13.229655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:13560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.958 [2024-12-16 12:56:13.229662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:49.958 [2024-12-16 12:56:13.229674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:13576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.958 [2024-12-16 12:56:13.229680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:49.958 [2024-12-16 12:56:13.229692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:13592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.958 [2024-12-16 12:56:13.229698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:49.958 [2024-12-16 12:56:13.229710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:13104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.958 [2024-12-16 12:56:13.229717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:49.958 [2024-12-16 12:56:13.229729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.958 [2024-12-16 12:56:13.229736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:49.958 [2024-12-16 12:56:13.229750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.958 [2024-12-16 12:56:13.229757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:49.958 [2024-12-16 12:56:13.230004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:13632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.958 [2024-12-16 12:56:13.230015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:49.958 [2024-12-16 12:56:13.230029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:13648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.958 [2024-12-16 12:56:13.230036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:49.958 [2024-12-16 12:56:13.230049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:13664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.958 [2024-12-16 12:56:13.230056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:49.958 [2024-12-16 12:56:13.230068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:13680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.958 [2024-12-16 12:56:13.230075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:49.958 [2024-12-16 12:56:13.230088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:13696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.958 [2024-12-16 12:56:13.230094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:49.958 [2024-12-16 12:56:13.230107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.958 [2024-12-16 12:56:13.230120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:49.958 [2024-12-16 12:56:13.230133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:13728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.958 [2024-12-16 12:56:13.230139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:49.958 [2024-12-16 12:56:13.230151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:13744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.958 [2024-12-16 12:56:13.230158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:49.958 [2024-12-16 12:56:13.230170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.958 [2024-12-16 12:56:13.230177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:49.958 [2024-12-16 12:56:13.230189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:13776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.958 [2024-12-16 12:56:13.230196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:49.958 [2024-12-16 12:56:13.230208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.958 [2024-12-16 12:56:13.230215] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:49.958 [2024-12-16 12:56:13.230230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:13808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.958 [2024-12-16 12:56:13.230236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:49.958 [2024-12-16 12:56:13.230248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.958 [2024-12-16 12:56:13.230256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:49.958 [2024-12-16 12:56:13.230268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:13840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.958 [2024-12-16 12:56:13.230275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.958 [2024-12-16 12:56:13.230288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.958 [2024-12-16 12:56:13.230295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.958 [2024-12-16 12:56:13.230307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.958 [2024-12-16 12:56:13.230314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:49.958 [2024-12-16 12:56:13.230879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:13888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.958 [2024-12-16 12:56:13.230891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:49.958 [2024-12-16 12:56:13.230905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:13904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.958 [2024-12-16 12:56:13.230912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:49.959 [2024-12-16 12:56:13.230924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.959 [2024-12-16 12:56:13.230931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:49.959 [2024-12-16 12:56:13.230944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:13936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.959 [2024-12-16 12:56:13.230950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:49.959 [2024-12-16 12:56:13.230963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:49.959 [2024-12-16 12:56:13.230969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:49.959 [2024-12-16 12:56:13.230981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:13968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.959 [2024-12-16 12:56:13.230988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:49.959 [2024-12-16 12:56:13.231000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.959 [2024-12-16 12:56:13.231007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:49.959 [2024-12-16 12:56:13.231019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.959 [2024-12-16 12:56:13.231029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:49.959 [2024-12-16 12:56:13.231041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:13088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.959 [2024-12-16 12:56:13.231048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:49.959 [2024-12-16 12:56:13.231060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:13112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.959 [2024-12-16 12:56:13.231067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:49.959 [2024-12-16 12:56:13.231079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:13144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.959 [2024-12-16 12:56:13.231088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:49.959 [2024-12-16 12:56:13.231433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:13176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.959 [2024-12-16 12:56:13.231446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:49.959 [2024-12-16 12:56:13.231460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:13208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.959 [2024-12-16 12:56:13.231467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:49.959 [2024-12-16 12:56:13.231480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.959 [2024-12-16 12:56:13.231487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:49.959 [2024-12-16 12:56:13.231499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 
lba:13992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.959 [2024-12-16 12:56:13.231506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:49.959 [2024-12-16 12:56:13.231518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.959 [2024-12-16 12:56:13.231525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:49.959 [2024-12-16 12:56:13.231537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:14024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.959 [2024-12-16 12:56:13.231543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:49.959 [2024-12-16 12:56:13.231556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.959 [2024-12-16 12:56:13.231563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:49.959 [2024-12-16 12:56:13.231575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:13288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.959 [2024-12-16 12:56:13.231582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:49.959 [2024-12-16 12:56:13.231594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:13320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.959 [2024-12-16 12:56:13.231603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:49.959 [2024-12-16 12:56:13.231615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:13352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.959 [2024-12-16 12:56:13.231622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:49.959 [2024-12-16 12:56:13.231634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.959 [2024-12-16 12:56:13.231641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:49.959 [2024-12-16 12:56:13.231653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:13392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.959 [2024-12-16 12:56:13.231659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:49.959 [2024-12-16 12:56:13.231671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:13424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.959 [2024-12-16 12:56:13.231678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:49.959 [2024-12-16 12:56:13.231690] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:13456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.959 [2024-12-16 12:56:13.231696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:49.959 [2024-12-16 12:56:13.231709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:13152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.959 [2024-12-16 12:56:13.231716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:49.959 [2024-12-16 12:56:13.231728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.959 [2024-12-16 12:56:13.231734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:49.959 [2024-12-16 12:56:13.231747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:13216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.959 [2024-12-16 12:56:13.231753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:49.959 [2024-12-16 12:56:13.231765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:13248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.959 [2024-12-16 12:56:13.231772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:49.959 [2024-12-16 12:56:13.231784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:13280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.959 [2024-12-16 12:56:13.231791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:49.959 [2024-12-16 12:56:13.231803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:13296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.959 [2024-12-16 12:56:13.231809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:49.959 [2024-12-16 12:56:13.231822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:13328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.959 [2024-12-16 12:56:13.231828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:49.959 [2024-12-16 12:56:13.231842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.959 [2024-12-16 12:56:13.231849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:34:49.959 10647.26 IOPS, 41.59 MiB/s [2024-12-16T11:56:16.026Z]
10677.14 IOPS, 41.71 MiB/s [2024-12-16T11:56:16.026Z]
Received shutdown signal, test time was about 28.775192 seconds
00:34:49.959
00:34:49.959                                                            Latency(us)
00:34:49.959 [2024-12-16T11:56:16.026Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s       TO/s    Average        min        max
00:34:49.959 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:34:49.959    Verification LBA range: start 0x0 length 0x4000
00:34:49.959    Nvme0n1            :      28.77   10672.83      41.69       0.00       0.00   11954.61     565.64 3067833.78
00:34:49.959 [2024-12-16T11:56:16.026Z] ===================================================================================================================
00:34:49.959 [2024-12-16T11:56:16.026Z] Total              :            10672.83      41.69       0.00       0.00   11954.61     565.64 3067833.78
00:34:49.960 [2024-12-16 12:56:15.624102] app.c:1032:log_deprecation_hits: *WARNING*: multipath_config: deprecation 'bdev_nvme_attach_controller.multipath configuration mismatch' scheduled for removal in v25.01 hit 1 times
00:34:49.960 12:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:34:50.219 12:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
12:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
12:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
12:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # nvmfcleanup
12:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
12:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
12:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
12:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
12:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
12:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
12:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
12:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
12:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@513 -- # '[' -n 535682 ']'
12:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # killprocess 535682
12:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 535682 ']'
12:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 535682
12:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname
12:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
12:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 535682
12:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0
12:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
12:56:16
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 535682' 00:34:50.219 killing process with pid 535682 00:34:50.219 12:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 535682 00:34:50.219 12:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 535682 00:34:50.478 12:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:34:50.478 12:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:34:50.478 12:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:34:50.478 12:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:34:50.478 12:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # iptables-save 00:34:50.478 12:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:34:50.478 12:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # iptables-restore 00:34:50.478 12:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:50.478 12:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:50.479 12:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:50.479 12:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:50.479 12:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:52.385 12:56:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:52.385 00:34:52.385 real 0m40.350s 00:34:52.385 user 1m49.245s 00:34:52.385 sys 0m11.609s 00:34:52.385 12:56:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:52.385 12:56:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:52.385 ************************************ 00:34:52.385 END TEST nvmf_host_multipath_status 00:34:52.385 ************************************ 00:34:52.385 12:56:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:34:52.385 12:56:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:52.385 12:56:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:52.385 12:56:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.385 ************************************ 00:34:52.385 START TEST nvmf_discovery_remove_ifc 00:34:52.385 ************************************ 00:34:52.385 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:34:52.645 * Looking for test storage... 
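Before the discovery_remove_ifc run gets going, the teardown just traced above is worth condensing: the harness always deletes the subsystem over RPC first, unloads the kernel initiator modules, and only then kills the target process. A minimal sketch of that order, assuming an SPDK checkout at $SPDK_DIR and the target pid in $nvmfpid (both placeholders here, not values from this run):

    # condensed form of the nvmftestfini/killprocess sequence traced above
    "$SPDK_DIR"/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1  # drop the subsystem first
    modprobe -v -r nvme-tcp        # also pulls out nvme_fabrics/nvme_keyring, as the rmmod lines show
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                # stop the nvmf_tgt reactor process
    wait "$nvmfpid" 2>/dev/null || true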
00:34:52.645 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lcov --version 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:52.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:52.645 --rc genhtml_branch_coverage=1 00:34:52.645 --rc genhtml_function_coverage=1 00:34:52.645 --rc genhtml_legend=1 00:34:52.645 --rc geninfo_all_blocks=1 00:34:52.645 --rc geninfo_unexecuted_blocks=1 00:34:52.645 00:34:52.645 ' 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:52.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:52.645 --rc genhtml_branch_coverage=1 00:34:52.645 --rc genhtml_function_coverage=1 00:34:52.645 --rc genhtml_legend=1 00:34:52.645 --rc geninfo_all_blocks=1 00:34:52.645 --rc geninfo_unexecuted_blocks=1 00:34:52.645 00:34:52.645 ' 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:52.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:52.645 --rc genhtml_branch_coverage=1 00:34:52.645 --rc genhtml_function_coverage=1 00:34:52.645 --rc genhtml_legend=1 00:34:52.645 --rc geninfo_all_blocks=1 00:34:52.645 --rc geninfo_unexecuted_blocks=1 00:34:52.645 00:34:52.645 ' 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:52.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:52.645 --rc genhtml_branch_coverage=1 00:34:52.645 --rc genhtml_function_coverage=1 00:34:52.645 --rc genhtml_legend=1 00:34:52.645 --rc geninfo_all_blocks=1 00:34:52.645 --rc geninfo_unexecuted_blocks=1 00:34:52.645 00:34:52.645 ' 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:52.645 
12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:52.645 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:52.646 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:52.646 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:52.646 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:34:52.646 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:52.646 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:34:52.646 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:52.646 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:52.646 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:52.646 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:52.646 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:52.646 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:52.646 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:52.646 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:52.646 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:52.646 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:52.646 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:34:52.646 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:34:52.646 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:34:52.646 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:34:52.646 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:34:52.646 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:34:52.646 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:34:52.646 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:34:52.646 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:52.646 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # prepare_net_devs 00:34:52.646 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@434 -- # local -g is_hw=no 00:34:52.646 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # remove_spdk_ns 00:34:52.646 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:52.646 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:52.646 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:52.646 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:34:52.646 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:34:52.646 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:34:52.646 12:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:59.353 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:34:59.354 12:56:24 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:59.354 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:34:59.354 12:56:24 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:59.354 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ up == up ]] 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:59.354 Found net devices under 0000:af:00.0: cvl_0_0 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ up == up ]] 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:59.354 Found net devices under 0000:af:00.1: cvl_0_1 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 
-- # (( 2 == 0 )) 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # is_hw=yes 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:59.354 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:59.354 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.357 ms 00:34:59.354 00:34:59.354 --- 10.0.0.2 ping statistics --- 00:34:59.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:59.354 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:59.354 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:59.354 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:34:59.354 00:34:59.354 --- 10.0.0.1 ping statistics --- 00:34:59.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:59.354 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # return 0 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # nvmfpid=544272 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # waitforlisten 544272 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 544272 ']' 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:59.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
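The namespace plumbing those two pings just verified boils down to a handful of commands; a sketch of the same setup, assuming the two ice ports already show up as cvl_0_0 and cvl_0_1 as in the device scan above (the harness additionally tags its iptables rule with an SPDK_NVMF comment, omitted here):

    # target port lives in a private netns; initiator port stays in the root ns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # admit NVMe/TCP
    ping -c 1 10.0.0.2                                                   # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target ns -> initiator

Launching nvmf_tgt under ip netns exec cvl_0_0_ns_spdk, as the trace does next, is what makes the target listen on 10.0.0.2 from inside that namespace while the host-side initiator connects from 10.0.0.1.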
00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:59.354 [2024-12-16 12:56:24.654726] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:34:59.354 [2024-12-16 12:56:24.654765] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:59.354 [2024-12-16 12:56:24.709985] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:59.354 [2024-12-16 12:56:24.748652] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:59.354 [2024-12-16 12:56:24.748691] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:59.354 [2024-12-16 12:56:24.748701] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:59.354 [2024-12-16 12:56:24.748709] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:59.354 [2024-12-16 12:56:24.748715] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:59.354 [2024-12-16 12:56:24.748735] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:59.354 [2024-12-16 12:56:24.893499] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:59.354 [2024-12-16 12:56:24.901678] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:34:59.354 null0 00:34:59.354 [2024-12-16 12:56:24.933651] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=544299 00:34:59.354 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 544299 /tmp/host.sock 00:34:59.355 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:34:59.355 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 544299 ']' 00:34:59.355 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:34:59.355 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:59.355 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:34:59.355 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:34:59.355 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:59.355 12:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:59.355 [2024-12-16 12:56:25.000257] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:34:59.355 [2024-12-16 12:56:25.000302] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid544299 ] 00:34:59.355 [2024-12-16 12:56:25.068576] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:59.355 [2024-12-16 12:56:25.109971] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:34:59.355 12:56:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:59.355 12:56:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:34:59.355 12:56:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:59.355 12:56:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:34:59.355 12:56:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:59.355 12:56:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:59.355 12:56:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:59.355 12:56:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:34:59.355 12:56:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:59.355 12:56:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:59.355 12:56:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:59.355 12:56:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:34:59.355 12:56:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:34:59.355 12:56:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:00.365 [2024-12-16 12:56:26.280270] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:00.365 [2024-12-16 12:56:26.280297] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:00.365 [2024-12-16 12:56:26.280313] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:00.365 [2024-12-16 12:56:26.368568] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:35:00.653 [2024-12-16 12:56:26.596772] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:35:00.653 [2024-12-16 12:56:26.596820] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:35:00.653 [2024-12-16 12:56:26.596840] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:35:00.653 [2024-12-16 12:56:26.596853] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:35:00.653 [2024-12-16 12:56:26.596874] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:00.653 12:56:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:00.653 12:56:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:35:00.653 [2024-12-16 12:56:26.599175] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x941450 was disconnected and freed. delete nvme_qpair. 
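The wait_for_bdev check that runs next simply polls the host's bdev list until it matches the expected name; roughly the loop below, assuming rpc.py from the same checkout ($SPDK_DIR is a placeholder) and the /tmp/host.sock RPC socket shown above:

    # approximation of wait_for_bdev/get_bdev_list: poll until the sorted,
    # space-joined bdev list equals the expected string
    expected=nvme0n1   # '' when waiting for removal later in the test
    while :; do
        bdevs=$("$SPDK_DIR"/scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
                | jq -r '.[].name' | sort | xargs)
        [[ "$bdevs" == "$expected" ]] && break
        sleep 1   # same 1-second cadence as the trace below
    done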
00:35:00.653 12:56:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:00.653 12:56:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:00.653 12:56:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:00.653 12:56:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:00.653 12:56:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:00.653 12:56:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:00.653 12:56:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:00.653 12:56:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:00.653 12:56:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:35:00.653 12:56:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:35:00.653 12:56:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:35:00.945 12:56:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:35:00.946 12:56:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:00.946 12:56:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:00.946 12:56:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:00.946 12:56:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:00.946 12:56:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:00.946 12:56:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:00.946 12:56:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:00.946 12:56:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:00.946 12:56:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:00.946 12:56:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:01.999 12:56:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:01.999 12:56:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:01.999 12:56:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:01.999 12:56:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.999 12:56:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:01.999 12:56:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:01.999 12:56:27 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:01.999 12:56:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.999 12:56:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:01.999 12:56:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:02.936 12:56:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:02.936 12:56:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:02.936 12:56:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:02.936 12:56:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:02.936 12:56:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:02.936 12:56:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:02.936 12:56:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:02.936 12:56:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:02.936 12:56:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:02.936 12:56:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:03.873 12:56:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:03.873 12:56:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:03.873 12:56:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:03.873 12:56:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:03.873 12:56:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:03.873 12:56:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:03.873 12:56:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:03.873 12:56:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.134 12:56:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:04.134 12:56:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:05.071 12:56:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:05.072 12:56:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:05.072 12:56:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:05.072 12:56:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.072 12:56:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:05.072 12:56:30 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:05.072 12:56:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:05.072 12:56:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.072 12:56:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:05.072 12:56:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:06.008 12:56:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:06.008 12:56:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:06.008 12:56:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:06.008 12:56:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:06.008 12:56:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:06.008 12:56:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:06.008 12:56:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:06.008 12:56:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:06.008 [2024-12-16 12:56:32.038436] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:35:06.008 [2024-12-16 12:56:32.038479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:06.008 [2024-12-16 12:56:32.038490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:06.008 [2024-12-16 12:56:32.038499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:06.008 [2024-12-16 12:56:32.038506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:06.008 [2024-12-16 12:56:32.038513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:06.008 [2024-12-16 12:56:32.038520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:06.008 [2024-12-16 12:56:32.038527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:06.008 [2024-12-16 12:56:32.038534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:06.008 [2024-12-16 12:56:32.038541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:06.008 [2024-12-16 12:56:32.038547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:06.008 [2024-12-16 12:56:32.038554] nvme_tcp.c: 
337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91dcf0 is same with the state(6) to be set 00:35:06.008 [2024-12-16 12:56:32.048458] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91dcf0 (9): Bad file descriptor 00:35:06.008 12:56:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:06.008 12:56:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:06.008 [2024-12-16 12:56:32.058495] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:07.387 12:56:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:07.387 12:56:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:07.387 12:56:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:07.387 12:56:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:07.387 12:56:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:07.387 12:56:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:07.387 12:56:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:07.387 [2024-12-16 12:56:33.117145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:35:07.387 [2024-12-16 12:56:33.117221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91dcf0 with addr=10.0.0.2, port=4420 00:35:07.387 [2024-12-16 12:56:33.117264] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91dcf0 is same with the state(6) to be set 00:35:07.387 [2024-12-16 12:56:33.117315] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91dcf0 (9): Bad file descriptor 00:35:07.387 [2024-12-16 12:56:33.118268] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:35:07.387 [2024-12-16 12:56:33.118328] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:07.387 [2024-12-16 12:56:33.118352] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:07.387 [2024-12-16 12:56:33.118375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:07.387 [2024-12-16 12:56:33.118434] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:07.387 [2024-12-16 12:56:33.118459] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:07.387 12:56:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:07.387 12:56:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:07.387 12:56:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:08.324 [2024-12-16 12:56:34.120950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
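The rpc_cmd | jq | sort | xargs blocks repeated throughout this test are iterations of its bdev polling loop: get_bdev_list flattens the SPDK host app's bdev list into one sorted line, and wait_for_bdev re-checks it once per second until it matches the expected value ('' while waiting for nvme0n1 to disappear, nvme1n1 after the rediscovery below). A minimal bash sketch reconstructed from the traced commands; the exact loop structure inside host/discovery_remove_ifc.sh is an assumption:

    # Sketch only -- reconstructed from the xtrace output above, not copied from the script.
    get_bdev_list() {
        # query bdevs over the host app's RPC socket, flatten names to one sorted line
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        local bdev="$1"                           # "" = wait for removal
        while [[ "$(get_bdev_list)" != "$bdev" ]]; do
            sleep 1                               # traced as discovery_remove_ifc.sh@34
        done
    }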
00:35:08.324 [2024-12-16 12:56:34.120970] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:08.324 [2024-12-16 12:56:34.120978] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:08.325 [2024-12-16 12:56:34.120985] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:35:08.325 [2024-12-16 12:56:34.120996] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:08.325 [2024-12-16 12:56:34.121013] bdev_nvme.c:6913:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:35:08.325 [2024-12-16 12:56:34.121034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:08.325 [2024-12-16 12:56:34.121044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:08.325 [2024-12-16 12:56:34.121052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:08.325 [2024-12-16 12:56:34.121059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:08.325 [2024-12-16 12:56:34.121066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:08.325 [2024-12-16 12:56:34.121072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:08.325 [2024-12-16 12:56:34.121079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:08.325 [2024-12-16 12:56:34.121085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:08.325 [2024-12-16 12:56:34.121092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:08.325 [2024-12-16 12:56:34.121098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:08.325 [2024-12-16 12:56:34.121105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
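(The errno 110 connect failures and the runs of ABORTED - SQ DELETION completions above are, presumably, the expected signature of this test rather than a defect: pulling the interface out of the target namespace times out the TCP connection, the queue pair is torn down, and every in-flight admin command -- the ASYNC EVENT REQUESTs and the KEEP ALIVE -- is failed back with SQ DELETION status while bdev_nvme keeps retrying the controller reset until the path returns.)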
00:35:08.325 [2024-12-16 12:56:34.121560] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x90d400 (9): Bad file descriptor 00:35:08.325 [2024-12-16 12:56:34.122571] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:35:08.325 [2024-12-16 12:56:34.122581] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:35:08.325 12:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:08.325 12:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:08.325 12:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:08.325 12:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:08.325 12:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:08.325 12:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:08.325 12:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:08.325 12:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:08.325 12:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:35:08.325 12:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:08.325 12:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:08.325 12:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:35:08.325 12:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:08.325 12:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:08.325 12:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:08.325 12:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:08.325 12:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:08.325 12:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:08.325 12:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:08.325 12:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:08.325 12:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:08.325 12:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:09.261 12:56:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:09.261 12:56:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:09.261 12:56:35 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:09.261 12:56:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:09.261 12:56:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:09.261 12:56:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:09.261 12:56:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:09.261 12:56:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:09.519 12:56:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:09.519 12:56:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:10.087 [2024-12-16 12:56:36.135162] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:10.087 [2024-12-16 12:56:36.135181] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:10.087 [2024-12-16 12:56:36.135194] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:10.346 [2024-12-16 12:56:36.262594] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:35:10.346 12:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:10.346 12:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:10.346 12:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:10.346 12:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.346 12:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:10.346 12:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:10.346 12:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:10.346 12:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.604 12:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:10.604 12:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:10.604 [2024-12-16 12:56:36.447011] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:35:10.604 [2024-12-16 12:56:36.447046] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:35:10.604 [2024-12-16 12:56:36.447063] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:35:10.604 [2024-12-16 12:56:36.447076] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:35:10.604 [2024-12-16 12:56:36.447083] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:10.604 [2024-12-16 12:56:36.454370] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x927ea0 was disconnected and freed. 
delete nvme_qpair. 00:35:11.542 12:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:11.542 12:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:11.542 12:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:11.542 12:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:11.542 12:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:11.542 12:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:11.542 12:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:11.542 12:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:11.542 12:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:35:11.542 12:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:35:11.542 12:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 544299 00:35:11.542 12:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 544299 ']' 00:35:11.542 12:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 544299 00:35:11.542 12:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:35:11.542 12:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:11.542 12:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 544299 00:35:11.542 12:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:11.542 12:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:11.542 12:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 544299' 00:35:11.542 killing process with pid 544299 00:35:11.542 12:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 544299 00:35:11.542 12:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 544299 00:35:11.801 12:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:35:11.801 12:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # nvmfcleanup 00:35:11.801 12:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:35:11.801 12:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:11.801 12:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:35:11.801 12:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:11.801 12:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:11.801 rmmod nvme_tcp 00:35:11.801 rmmod nvme_fabrics 00:35:11.801 rmmod nvme_keyring 
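killprocess, traced here for the host app (pid 544299) and just below for the target (pid 544272), follows a fixed pattern: validate the PID argument, confirm the process is still alive, refuse to signal a sudo wrapper, then kill and reap. A bash sketch matching the traced commands (the real helper lives in autotest_common.sh; anything beyond the traced steps is assumed):

    killprocess() {
        local pid="$1"
        [[ -n "$pid" ]] || return 1                  # traced: '[' -z 544299 ']'
        kill -0 "$pid" || return 1                   # process must still be running
        if [[ "$(uname)" == Linux ]]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0
            [[ "$name" != sudo ]] || return 1        # never kill the sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                  # reap and propagate exit status
    }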
00:35:11.801 12:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:11.801 12:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:35:11.801 12:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:35:11.801 12:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@513 -- # '[' -n 544272 ']' 00:35:11.801 12:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # killprocess 544272 00:35:11.801 12:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 544272 ']' 00:35:11.801 12:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 544272 00:35:11.801 12:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:35:11.801 12:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:11.801 12:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 544272 00:35:11.801 12:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:11.801 12:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:11.801 12:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 544272' 00:35:11.801 killing process with pid 544272 00:35:11.801 12:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 544272 00:35:11.801 12:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 544272 00:35:12.061 12:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:35:12.061 12:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:35:12.061 12:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:35:12.061 12:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:35:12.061 12:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # iptables-save 00:35:12.061 12:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:35:12.061 12:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # iptables-restore 00:35:12.061 12:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:12.061 12:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:12.061 12:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:12.061 12:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:12.061 12:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:14.597 12:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:14.597 00:35:14.597 real 0m21.657s 00:35:14.597 user 0m26.912s 00:35:14.597 sys 0m5.857s 00:35:14.597 12:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:35:14.597 12:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:14.597 ************************************ 00:35:14.597 END TEST nvmf_discovery_remove_ifc 00:35:14.597 ************************************ 00:35:14.597 12:56:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:14.597 12:56:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:14.597 12:56:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:14.597 12:56:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.597 ************************************ 00:35:14.597 START TEST nvmf_identify_kernel_target 00:35:14.597 ************************************ 00:35:14.597 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:14.597 * Looking for test storage... 00:35:14.597 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:14.597 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:14.597 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lcov --version 00:35:14.597 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:14.597 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:14.597 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:14.597 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:14.597 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:14.597 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:35:14.597 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:35:14.597 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:35:14.597 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:35:14.597 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:35:14.597 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:35:14.597 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:35:14.597 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:14.597 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:35:14.597 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:35:14.597 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:14.597 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:14.597 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:14.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:14.598 --rc genhtml_branch_coverage=1 00:35:14.598 --rc genhtml_function_coverage=1 00:35:14.598 --rc genhtml_legend=1 00:35:14.598 --rc geninfo_all_blocks=1 00:35:14.598 --rc geninfo_unexecuted_blocks=1 00:35:14.598 00:35:14.598 ' 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:14.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:14.598 --rc genhtml_branch_coverage=1 00:35:14.598 --rc genhtml_function_coverage=1 00:35:14.598 --rc genhtml_legend=1 00:35:14.598 --rc geninfo_all_blocks=1 00:35:14.598 --rc geninfo_unexecuted_blocks=1 00:35:14.598 00:35:14.598 ' 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:14.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:14.598 --rc genhtml_branch_coverage=1 00:35:14.598 --rc genhtml_function_coverage=1 00:35:14.598 --rc genhtml_legend=1 00:35:14.598 --rc geninfo_all_blocks=1 00:35:14.598 --rc geninfo_unexecuted_blocks=1 00:35:14.598 00:35:14.598 ' 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:14.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:14.598 --rc genhtml_branch_coverage=1 00:35:14.598 --rc genhtml_function_coverage=1 00:35:14.598 --rc genhtml_legend=1 00:35:14.598 --rc geninfo_all_blocks=1 00:35:14.598 --rc geninfo_unexecuted_blocks=1 00:35:14.598 00:35:14.598 ' 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:35:14.598 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:35:14.598 12:56:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:35:19.874 12:56:45 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:19.874 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@365 -- # 
echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:19.874 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:19.874 Found net devices under 0000:af:00.0: cvl_0_0 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:19.874 Found net devices under 0000:af:00.1: cvl_0_1 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 
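The NIC discovery above maps whitelisted PCI functions to kernel interfaces through sysfs rather than by parsing lspci: for each entry in pci_devs (the two Intel 0x159b E810 functions here, bound to the ice driver), it globs the device's net/ directory and strips the path down to the interface name. A condensed bash sketch following the traced commands (the link-state and device-count checks are simplified):

    net_devs=()
    for pci in "${pci_devs[@]}"; do
        # each PCI function lists its bound netdev(s) under .../devices/<BDF>/net/
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        [[ -e ${pci_net_devs[0]} ]] || continue    # no netdev bound to this function
        pci_net_devs=("${pci_net_devs[@]##*/}")    # keep only the interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done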
00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # is_hw=yes 00:35:19.874 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:35:19.875 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:35:19.875 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:35:19.875 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:19.875 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:19.875 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:19.875 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:19.875 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:19.875 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:19.875 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:19.875 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:19.875 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:19.875 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:19.875 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:19.875 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:19.875 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:19.875 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:19.875 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:20.134 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:20.134 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:20.134 12:56:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:20.134 12:56:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:20.134 12:56:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:20.134 12:56:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:20.134 12:56:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:20.134 12:56:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping 
-c 1 10.0.0.2 00:35:20.134 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:20.134 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.313 ms 00:35:20.134 00:35:20.134 --- 10.0.0.2 ping statistics --- 00:35:20.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:20.134 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:35:20.134 12:56:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:20.134 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:20.134 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:35:20.134 00:35:20.134 --- 10.0.0.1 ping statistics --- 00:35:20.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:20.134 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:35:20.134 12:56:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:20.134 12:56:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # return 0 00:35:20.134 12:56:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:35:20.134 12:56:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:20.134 12:56:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:35:20.134 12:56:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:35:20.134 12:56:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:20.134 12:56:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:35:20.134 12:56:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:35:20.134 12:56:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:35:20.134 12:56:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:35:20.134 12:56:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@765 -- # local ip 00:35:20.134 12:56:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:20.134 12:56:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:20.134 12:56:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:20.134 12:56:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:20.134 12:56:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:20.134 12:56:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:20.134 12:56:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:20.134 12:56:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:20.134 12:56:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:20.134 12:56:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:35:20.134 12:56:46 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:20.134 12:56:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:20.134 12:56:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:35:20.134 12:56:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:20.134 12:56:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:20.134 12:56:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:20.134 12:56:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # local block nvme 00:35:20.134 12:56:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # [[ ! -e /sys/module/nvmet ]] 00:35:20.134 12:56:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@666 -- # modprobe nvmet 00:35:20.134 12:56:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:20.134 12:56:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:22.671 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:35:22.930 Waiting for block devices as requested 00:35:23.189 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:23.189 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:23.449 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:23.449 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:23.449 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:23.449 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:23.708 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:23.708 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:23.708 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:23.967 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:23.967 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:23.967 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:23.967 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:24.225 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:24.225 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:24.225 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:24.485 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:24.485 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:35:24.485 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:24.485 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:35:24.485 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:35:24.485 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:24.485 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:35:24.485 12:56:50 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:35:24.485 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:35:24.485 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:24.485 No valid GPT data, bailing 00:35:24.485 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:24.485 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:35:24.485 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:35:24.485 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:35:24.485 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:35:24.485 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme1n1 ]] 00:35:24.485 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme1n1 00:35:24.485 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:35:24.485 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:35:24.485 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:35:24.485 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme1n1 00:35:24.485 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:35:24.485 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:35:24.485 No valid GPT data, bailing 00:35:24.485 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:35:24.485 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:35:24.485 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:35:24.485 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme1n1 00:35:24.485 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:35:24.485 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme1n2 ]] 00:35:24.485 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme1n2 00:35:24.485 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:35:24.485 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:35:24.485 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ host-managed != none ]] 00:35:24.485 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # continue 00:35:24.485 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@680 -- # [[ -b /dev/nvme1n1 ]] 00:35:24.485 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:24.485 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:24.485 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:24.485 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:24.485 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo 1 00:35:24.485 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@692 -- # echo /dev/nvme1n1 00:35:24.485 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:35:24.485 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:35:24.485 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo tcp 00:35:24.485 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 4420 00:35:24.485 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo ipv4 00:35:24.485 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:24.745 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:35:24.745 00:35:24.745 Discovery Log Number of Records 2, Generation counter 2 00:35:24.745 =====Discovery Log Entry 0====== 00:35:24.745 trtype: tcp 00:35:24.745 adrfam: ipv4 00:35:24.745 subtype: current discovery subsystem 00:35:24.745 treq: not specified, sq flow control disable supported 00:35:24.745 portid: 1 00:35:24.745 trsvcid: 4420 00:35:24.745 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:24.745 traddr: 10.0.0.1 00:35:24.745 eflags: none 00:35:24.745 sectype: none 00:35:24.745 =====Discovery Log Entry 1====== 00:35:24.745 trtype: tcp 00:35:24.745 adrfam: ipv4 00:35:24.745 subtype: nvme subsystem 00:35:24.745 treq: not specified, sq flow control disable supported 00:35:24.745 portid: 1 00:35:24.745 trsvcid: 4420 00:35:24.745 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:24.745 traddr: 10.0.0.1 00:35:24.745 eflags: none 00:35:24.745 sectype: none 00:35:24.745 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:35:24.745 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:35:24.745 ===================================================== 00:35:24.745 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:35:24.745 ===================================================== 00:35:24.745 Controller Capabilities/Features 00:35:24.745 ================================ 00:35:24.745 Vendor ID: 0000 00:35:24.745 Subsystem Vendor ID: 0000 00:35:24.745 Serial Number: 8bc4e626ac0e25fb375d 00:35:24.745 Model Number: Linux 00:35:24.745 Firmware 
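
Pulling the configure_kernel_target steps out of the trace above: the kernel target is assembled entirely through nvmet's configfs tree and then exported by symlinking the subsystem into the port. xtrace shows only the echoed values, not their redirect targets, so the attribute filenames below are inferred from the standard nvmet configfs layout rather than read from the log:

nqn=nqn.2016-06.io.spdk:testnqn
sub=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1

modprobe nvmet                                    # exposes /sys/kernel/config/nvmet
mkdir "$sub" "$sub/namespaces/1" "$port"
echo "SPDK-$nqn"  > "$sub/attr_model"             # inferred target of 'echo SPDK-nqn...'
echo 1            > "$sub/attr_allow_any_host"    # inferred target of the first 'echo 1'
echo /dev/nvme1n1 > "$sub/namespaces/1/device_path"
echo 1            > "$sub/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$sub" "$port/subsystems/"                  # export: subsystem becomes visible on the port

Once the symlink lands, the nvme discover against 10.0.0.1:4420 returns two records, the discovery subsystem itself and the freshly created testnqn, which is exactly what the two Discovery Log entries above show.
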
Version: 6.8.9-20 00:35:24.745 Recommended Arb Burst: 0 00:35:24.745 IEEE OUI Identifier: 00 00 00 00:35:24.745 Multi-path I/O 00:35:24.745 May have multiple subsystem ports: No 00:35:24.745 May have multiple controllers: No 00:35:24.745 Associated with SR-IOV VF: No 00:35:24.745 Max Data Transfer Size: Unlimited 00:35:24.745 Max Number of Namespaces: 0 00:35:24.745 Max Number of I/O Queues: 1024 00:35:24.745 NVMe Specification Version (VS): 1.3 00:35:24.745 NVMe Specification Version (Identify): 1.3 00:35:24.745 Maximum Queue Entries: 1024 00:35:24.745 Contiguous Queues Required: No 00:35:24.745 Arbitration Mechanisms Supported 00:35:24.745 Weighted Round Robin: Not Supported 00:35:24.745 Vendor Specific: Not Supported 00:35:24.745 Reset Timeout: 7500 ms 00:35:24.745 Doorbell Stride: 4 bytes 00:35:24.746 NVM Subsystem Reset: Not Supported 00:35:24.746 Command Sets Supported 00:35:24.746 NVM Command Set: Supported 00:35:24.746 Boot Partition: Not Supported 00:35:24.746 Memory Page Size Minimum: 4096 bytes 00:35:24.746 Memory Page Size Maximum: 4096 bytes 00:35:24.746 Persistent Memory Region: Not Supported 00:35:24.746 Optional Asynchronous Events Supported 00:35:24.746 Namespace Attribute Notices: Not Supported 00:35:24.746 Firmware Activation Notices: Not Supported 00:35:24.746 ANA Change Notices: Not Supported 00:35:24.746 PLE Aggregate Log Change Notices: Not Supported 00:35:24.746 LBA Status Info Alert Notices: Not Supported 00:35:24.746 EGE Aggregate Log Change Notices: Not Supported 00:35:24.746 Normal NVM Subsystem Shutdown event: Not Supported 00:35:24.746 Zone Descriptor Change Notices: Not Supported 00:35:24.746 Discovery Log Change Notices: Supported 00:35:24.746 Controller Attributes 00:35:24.746 128-bit Host Identifier: Not Supported 00:35:24.746 Non-Operational Permissive Mode: Not Supported 00:35:24.746 NVM Sets: Not Supported 00:35:24.746 Read Recovery Levels: Not Supported 00:35:24.746 Endurance Groups: Not Supported 00:35:24.746 Predictable Latency Mode: Not Supported 00:35:24.746 Traffic Based Keep ALive: Not Supported 00:35:24.746 Namespace Granularity: Not Supported 00:35:24.746 SQ Associations: Not Supported 00:35:24.746 UUID List: Not Supported 00:35:24.746 Multi-Domain Subsystem: Not Supported 00:35:24.746 Fixed Capacity Management: Not Supported 00:35:24.746 Variable Capacity Management: Not Supported 00:35:24.746 Delete Endurance Group: Not Supported 00:35:24.746 Delete NVM Set: Not Supported 00:35:24.746 Extended LBA Formats Supported: Not Supported 00:35:24.746 Flexible Data Placement Supported: Not Supported 00:35:24.746 00:35:24.746 Controller Memory Buffer Support 00:35:24.746 ================================ 00:35:24.746 Supported: No 00:35:24.746 00:35:24.746 Persistent Memory Region Support 00:35:24.746 ================================ 00:35:24.746 Supported: No 00:35:24.746 00:35:24.746 Admin Command Set Attributes 00:35:24.746 ============================ 00:35:24.746 Security Send/Receive: Not Supported 00:35:24.746 Format NVM: Not Supported 00:35:24.746 Firmware Activate/Download: Not Supported 00:35:24.746 Namespace Management: Not Supported 00:35:24.746 Device Self-Test: Not Supported 00:35:24.746 Directives: Not Supported 00:35:24.746 NVMe-MI: Not Supported 00:35:24.746 Virtualization Management: Not Supported 00:35:24.746 Doorbell Buffer Config: Not Supported 00:35:24.746 Get LBA Status Capability: Not Supported 00:35:24.746 Command & Feature Lockdown Capability: Not Supported 00:35:24.746 Abort Command Limit: 1 00:35:24.746 Async Event Request 
Limit: 1 00:35:24.746 Number of Firmware Slots: N/A 00:35:24.746 Firmware Slot 1 Read-Only: N/A 00:35:24.746 Firmware Activation Without Reset: N/A 00:35:24.746 Multiple Update Detection Support: N/A 00:35:24.746 Firmware Update Granularity: No Information Provided 00:35:24.746 Per-Namespace SMART Log: No 00:35:24.746 Asymmetric Namespace Access Log Page: Not Supported 00:35:24.746 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:35:24.746 Command Effects Log Page: Not Supported 00:35:24.746 Get Log Page Extended Data: Supported 00:35:24.746 Telemetry Log Pages: Not Supported 00:35:24.746 Persistent Event Log Pages: Not Supported 00:35:24.746 Supported Log Pages Log Page: May Support 00:35:24.746 Commands Supported & Effects Log Page: Not Supported 00:35:24.746 Feature Identifiers & Effects Log Page:May Support 00:35:24.746 NVMe-MI Commands & Effects Log Page: May Support 00:35:24.746 Data Area 4 for Telemetry Log: Not Supported 00:35:24.746 Error Log Page Entries Supported: 1 00:35:24.746 Keep Alive: Not Supported 00:35:24.746 00:35:24.746 NVM Command Set Attributes 00:35:24.746 ========================== 00:35:24.746 Submission Queue Entry Size 00:35:24.746 Max: 1 00:35:24.746 Min: 1 00:35:24.746 Completion Queue Entry Size 00:35:24.746 Max: 1 00:35:24.746 Min: 1 00:35:24.746 Number of Namespaces: 0 00:35:24.746 Compare Command: Not Supported 00:35:24.746 Write Uncorrectable Command: Not Supported 00:35:24.746 Dataset Management Command: Not Supported 00:35:24.746 Write Zeroes Command: Not Supported 00:35:24.746 Set Features Save Field: Not Supported 00:35:24.746 Reservations: Not Supported 00:35:24.746 Timestamp: Not Supported 00:35:24.746 Copy: Not Supported 00:35:24.746 Volatile Write Cache: Not Present 00:35:24.746 Atomic Write Unit (Normal): 1 00:35:24.746 Atomic Write Unit (PFail): 1 00:35:24.746 Atomic Compare & Write Unit: 1 00:35:24.746 Fused Compare & Write: Not Supported 00:35:24.746 Scatter-Gather List 00:35:24.746 SGL Command Set: Supported 00:35:24.746 SGL Keyed: Not Supported 00:35:24.746 SGL Bit Bucket Descriptor: Not Supported 00:35:24.746 SGL Metadata Pointer: Not Supported 00:35:24.746 Oversized SGL: Not Supported 00:35:24.746 SGL Metadata Address: Not Supported 00:35:24.746 SGL Offset: Supported 00:35:24.746 Transport SGL Data Block: Not Supported 00:35:24.746 Replay Protected Memory Block: Not Supported 00:35:24.746 00:35:24.746 Firmware Slot Information 00:35:24.746 ========================= 00:35:24.746 Active slot: 0 00:35:24.746 00:35:24.746 00:35:24.746 Error Log 00:35:24.746 ========= 00:35:24.746 00:35:24.746 Active Namespaces 00:35:24.746 ================= 00:35:24.746 Discovery Log Page 00:35:24.746 ================== 00:35:24.746 Generation Counter: 2 00:35:24.746 Number of Records: 2 00:35:24.746 Record Format: 0 00:35:24.746 00:35:24.746 Discovery Log Entry 0 00:35:24.746 ---------------------- 00:35:24.746 Transport Type: 3 (TCP) 00:35:24.746 Address Family: 1 (IPv4) 00:35:24.746 Subsystem Type: 3 (Current Discovery Subsystem) 00:35:24.746 Entry Flags: 00:35:24.746 Duplicate Returned Information: 0 00:35:24.746 Explicit Persistent Connection Support for Discovery: 0 00:35:24.746 Transport Requirements: 00:35:24.746 Secure Channel: Not Specified 00:35:24.746 Port ID: 1 (0x0001) 00:35:24.746 Controller ID: 65535 (0xffff) 00:35:24.746 Admin Max SQ Size: 32 00:35:24.746 Transport Service Identifier: 4420 00:35:24.746 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:35:24.746 Transport Address: 10.0.0.1 00:35:24.746 Discovery Log 
Entry 1 00:35:24.746 ---------------------- 00:35:24.746 Transport Type: 3 (TCP) 00:35:24.746 Address Family: 1 (IPv4) 00:35:24.746 Subsystem Type: 2 (NVM Subsystem) 00:35:24.746 Entry Flags: 00:35:24.746 Duplicate Returned Information: 0 00:35:24.746 Explicit Persistent Connection Support for Discovery: 0 00:35:24.746 Transport Requirements: 00:35:24.746 Secure Channel: Not Specified 00:35:24.746 Port ID: 1 (0x0001) 00:35:24.746 Controller ID: 65535 (0xffff) 00:35:24.746 Admin Max SQ Size: 32 00:35:24.746 Transport Service Identifier: 4420 00:35:24.746 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:35:24.746 Transport Address: 10.0.0.1 00:35:24.746 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:24.746 get_feature(0x01) failed 00:35:24.746 get_feature(0x02) failed 00:35:24.746 get_feature(0x04) failed 00:35:24.746 ===================================================== 00:35:24.746 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:24.746 ===================================================== 00:35:24.746 Controller Capabilities/Features 00:35:24.746 ================================ 00:35:24.746 Vendor ID: 0000 00:35:24.746 Subsystem Vendor ID: 0000 00:35:24.746 Serial Number: 592bc810b16c37457e12 00:35:24.746 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:35:24.746 Firmware Version: 6.8.9-20 00:35:24.746 Recommended Arb Burst: 6 00:35:24.746 IEEE OUI Identifier: 00 00 00 00:35:24.746 Multi-path I/O 00:35:24.746 May have multiple subsystem ports: Yes 00:35:24.746 May have multiple controllers: Yes 00:35:24.746 Associated with SR-IOV VF: No 00:35:24.746 Max Data Transfer Size: Unlimited 00:35:24.746 Max Number of Namespaces: 1024 00:35:24.746 Max Number of I/O Queues: 128 00:35:24.746 NVMe Specification Version (VS): 1.3 00:35:24.746 NVMe Specification Version (Identify): 1.3 00:35:24.746 Maximum Queue Entries: 1024 00:35:24.746 Contiguous Queues Required: No 00:35:24.746 Arbitration Mechanisms Supported 00:35:24.746 Weighted Round Robin: Not Supported 00:35:24.746 Vendor Specific: Not Supported 00:35:24.746 Reset Timeout: 7500 ms 00:35:24.746 Doorbell Stride: 4 bytes 00:35:24.746 NVM Subsystem Reset: Not Supported 00:35:24.746 Command Sets Supported 00:35:24.746 NVM Command Set: Supported 00:35:24.746 Boot Partition: Not Supported 00:35:24.746 Memory Page Size Minimum: 4096 bytes 00:35:24.746 Memory Page Size Maximum: 4096 bytes 00:35:24.746 Persistent Memory Region: Not Supported 00:35:24.746 Optional Asynchronous Events Supported 00:35:24.746 Namespace Attribute Notices: Supported 00:35:24.746 Firmware Activation Notices: Not Supported 00:35:24.746 ANA Change Notices: Supported 00:35:24.747 PLE Aggregate Log Change Notices: Not Supported 00:35:24.747 LBA Status Info Alert Notices: Not Supported 00:35:24.747 EGE Aggregate Log Change Notices: Not Supported 00:35:24.747 Normal NVM Subsystem Shutdown event: Not Supported 00:35:24.747 Zone Descriptor Change Notices: Not Supported 00:35:24.747 Discovery Log Change Notices: Not Supported 00:35:24.747 Controller Attributes 00:35:24.747 128-bit Host Identifier: Supported 00:35:24.747 Non-Operational Permissive Mode: Not Supported 00:35:24.747 NVM Sets: Not Supported 00:35:24.747 Read Recovery Levels: Not Supported 00:35:24.747 Endurance Groups: Not Supported 00:35:24.747 
Predictable Latency Mode: Not Supported 00:35:24.747 Traffic Based Keep ALive: Supported 00:35:24.747 Namespace Granularity: Not Supported 00:35:24.747 SQ Associations: Not Supported 00:35:24.747 UUID List: Not Supported 00:35:24.747 Multi-Domain Subsystem: Not Supported 00:35:24.747 Fixed Capacity Management: Not Supported 00:35:24.747 Variable Capacity Management: Not Supported 00:35:24.747 Delete Endurance Group: Not Supported 00:35:24.747 Delete NVM Set: Not Supported 00:35:24.747 Extended LBA Formats Supported: Not Supported 00:35:24.747 Flexible Data Placement Supported: Not Supported 00:35:24.747 00:35:24.747 Controller Memory Buffer Support 00:35:24.747 ================================ 00:35:24.747 Supported: No 00:35:24.747 00:35:24.747 Persistent Memory Region Support 00:35:24.747 ================================ 00:35:24.747 Supported: No 00:35:24.747 00:35:24.747 Admin Command Set Attributes 00:35:24.747 ============================ 00:35:24.747 Security Send/Receive: Not Supported 00:35:24.747 Format NVM: Not Supported 00:35:24.747 Firmware Activate/Download: Not Supported 00:35:24.747 Namespace Management: Not Supported 00:35:24.747 Device Self-Test: Not Supported 00:35:24.747 Directives: Not Supported 00:35:24.747 NVMe-MI: Not Supported 00:35:24.747 Virtualization Management: Not Supported 00:35:24.747 Doorbell Buffer Config: Not Supported 00:35:24.747 Get LBA Status Capability: Not Supported 00:35:24.747 Command & Feature Lockdown Capability: Not Supported 00:35:24.747 Abort Command Limit: 4 00:35:24.747 Async Event Request Limit: 4 00:35:24.747 Number of Firmware Slots: N/A 00:35:24.747 Firmware Slot 1 Read-Only: N/A 00:35:24.747 Firmware Activation Without Reset: N/A 00:35:24.747 Multiple Update Detection Support: N/A 00:35:24.747 Firmware Update Granularity: No Information Provided 00:35:24.747 Per-Namespace SMART Log: Yes 00:35:24.747 Asymmetric Namespace Access Log Page: Supported 00:35:24.747 ANA Transition Time : 10 sec 00:35:24.747 00:35:24.747 Asymmetric Namespace Access Capabilities 00:35:24.747 ANA Optimized State : Supported 00:35:24.747 ANA Non-Optimized State : Supported 00:35:24.747 ANA Inaccessible State : Supported 00:35:24.747 ANA Persistent Loss State : Supported 00:35:24.747 ANA Change State : Supported 00:35:24.747 ANAGRPID is not changed : No 00:35:24.747 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:35:24.747 00:35:24.747 ANA Group Identifier Maximum : 128 00:35:24.747 Number of ANA Group Identifiers : 128 00:35:24.747 Max Number of Allowed Namespaces : 1024 00:35:24.747 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:35:24.747 Command Effects Log Page: Supported 00:35:24.747 Get Log Page Extended Data: Supported 00:35:24.747 Telemetry Log Pages: Not Supported 00:35:24.747 Persistent Event Log Pages: Not Supported 00:35:24.747 Supported Log Pages Log Page: May Support 00:35:24.747 Commands Supported & Effects Log Page: Not Supported 00:35:24.747 Feature Identifiers & Effects Log Page:May Support 00:35:24.747 NVMe-MI Commands & Effects Log Page: May Support 00:35:24.747 Data Area 4 for Telemetry Log: Not Supported 00:35:24.747 Error Log Page Entries Supported: 128 00:35:24.747 Keep Alive: Supported 00:35:24.747 Keep Alive Granularity: 1000 ms 00:35:24.747 00:35:24.747 NVM Command Set Attributes 00:35:24.747 ========================== 00:35:24.747 Submission Queue Entry Size 00:35:24.747 Max: 64 00:35:24.747 Min: 64 00:35:24.747 Completion Queue Entry Size 00:35:24.747 Max: 16 00:35:24.747 Min: 16 00:35:24.747 Number of Namespaces: 1024 
00:35:24.747 Compare Command: Not Supported 00:35:24.747 Write Uncorrectable Command: Not Supported 00:35:24.747 Dataset Management Command: Supported 00:35:24.747 Write Zeroes Command: Supported 00:35:24.747 Set Features Save Field: Not Supported 00:35:24.747 Reservations: Not Supported 00:35:24.747 Timestamp: Not Supported 00:35:24.747 Copy: Not Supported 00:35:24.747 Volatile Write Cache: Present 00:35:24.747 Atomic Write Unit (Normal): 1 00:35:24.747 Atomic Write Unit (PFail): 1 00:35:24.747 Atomic Compare & Write Unit: 1 00:35:24.747 Fused Compare & Write: Not Supported 00:35:24.747 Scatter-Gather List 00:35:24.747 SGL Command Set: Supported 00:35:24.747 SGL Keyed: Not Supported 00:35:24.747 SGL Bit Bucket Descriptor: Not Supported 00:35:24.747 SGL Metadata Pointer: Not Supported 00:35:24.747 Oversized SGL: Not Supported 00:35:24.747 SGL Metadata Address: Not Supported 00:35:24.747 SGL Offset: Supported 00:35:24.747 Transport SGL Data Block: Not Supported 00:35:24.747 Replay Protected Memory Block: Not Supported 00:35:24.747 00:35:24.747 Firmware Slot Information 00:35:24.747 ========================= 00:35:24.747 Active slot: 0 00:35:24.747 00:35:24.747 Asymmetric Namespace Access 00:35:24.747 =========================== 00:35:24.747 Change Count : 0 00:35:24.747 Number of ANA Group Descriptors : 1 00:35:24.747 ANA Group Descriptor : 0 00:35:24.747 ANA Group ID : 1 00:35:24.747 Number of NSID Values : 1 00:35:24.747 Change Count : 0 00:35:24.747 ANA State : 1 00:35:24.747 Namespace Identifier : 1 00:35:24.747 00:35:24.747 Commands Supported and Effects 00:35:24.747 ============================== 00:35:24.747 Admin Commands 00:35:24.747 -------------- 00:35:24.747 Get Log Page (02h): Supported 00:35:24.747 Identify (06h): Supported 00:35:24.747 Abort (08h): Supported 00:35:24.747 Set Features (09h): Supported 00:35:24.747 Get Features (0Ah): Supported 00:35:24.747 Asynchronous Event Request (0Ch): Supported 00:35:24.747 Keep Alive (18h): Supported 00:35:24.747 I/O Commands 00:35:24.747 ------------ 00:35:24.747 Flush (00h): Supported 00:35:24.747 Write (01h): Supported LBA-Change 00:35:24.747 Read (02h): Supported 00:35:24.747 Write Zeroes (08h): Supported LBA-Change 00:35:24.747 Dataset Management (09h): Supported 00:35:24.747 00:35:24.747 Error Log 00:35:24.747 ========= 00:35:24.747 Entry: 0 00:35:24.747 Error Count: 0x3 00:35:24.747 Submission Queue Id: 0x0 00:35:24.747 Command Id: 0x5 00:35:24.747 Phase Bit: 0 00:35:24.747 Status Code: 0x2 00:35:24.747 Status Code Type: 0x0 00:35:24.747 Do Not Retry: 1 00:35:24.747 Error Location: 0x28 00:35:24.747 LBA: 0x0 00:35:24.747 Namespace: 0x0 00:35:24.747 Vendor Log Page: 0x0 00:35:24.747 ----------- 00:35:24.747 Entry: 1 00:35:24.747 Error Count: 0x2 00:35:24.747 Submission Queue Id: 0x0 00:35:24.747 Command Id: 0x5 00:35:24.747 Phase Bit: 0 00:35:24.747 Status Code: 0x2 00:35:24.747 Status Code Type: 0x0 00:35:24.747 Do Not Retry: 1 00:35:24.747 Error Location: 0x28 00:35:24.747 LBA: 0x0 00:35:24.747 Namespace: 0x0 00:35:24.747 Vendor Log Page: 0x0 00:35:24.747 ----------- 00:35:24.747 Entry: 2 00:35:24.747 Error Count: 0x1 00:35:24.747 Submission Queue Id: 0x0 00:35:24.747 Command Id: 0x4 00:35:24.747 Phase Bit: 0 00:35:24.747 Status Code: 0x2 00:35:24.747 Status Code Type: 0x0 00:35:24.747 Do Not Retry: 1 00:35:24.747 Error Location: 0x28 00:35:24.747 LBA: 0x0 00:35:24.747 Namespace: 0x0 00:35:24.747 Vendor Log Page: 0x0 00:35:24.747 00:35:24.747 Number of Queues 00:35:24.747 ================ 00:35:24.747 Number of I/O Submission 
Queues: 128 00:35:24.747 Number of I/O Completion Queues: 128 00:35:24.747 00:35:24.747 ZNS Specific Controller Data 00:35:24.747 ============================ 00:35:24.747 Zone Append Size Limit: 0 00:35:24.747 00:35:24.747 00:35:24.747 Active Namespaces 00:35:24.747 ================= 00:35:24.747 get_feature(0x05) failed 00:35:24.747 Namespace ID:1 00:35:24.747 Command Set Identifier: NVM (00h) 00:35:24.747 Deallocate: Supported 00:35:24.747 Deallocated/Unwritten Error: Not Supported 00:35:24.747 Deallocated Read Value: Unknown 00:35:24.747 Deallocate in Write Zeroes: Not Supported 00:35:24.747 Deallocated Guard Field: 0xFFFF 00:35:24.747 Flush: Supported 00:35:24.747 Reservation: Not Supported 00:35:24.747 Namespace Sharing Capabilities: Multiple Controllers 00:35:24.747 Size (in LBAs): 4194304 (2GiB) 00:35:24.747 Capacity (in LBAs): 4194304 (2GiB) 00:35:24.747 Utilization (in LBAs): 4194304 (2GiB) 00:35:24.747 UUID: 036f9786-8fef-413a-b8a8-4a4e6ea8e0fa 00:35:24.747 Thin Provisioning: Not Supported 00:35:24.747 Per-NS Atomic Units: Yes 00:35:24.748 Atomic Boundary Size (Normal): 0 00:35:24.748 Atomic Boundary Size (PFail): 0 00:35:24.748 Atomic Boundary Offset: 0 00:35:24.748 NGUID/EUI64 Never Reused: No 00:35:24.748 ANA group ID: 1 00:35:24.748 Namespace Write Protected: No 00:35:24.748 Number of LBA Formats: 1 00:35:24.748 Current LBA Format: LBA Format #00 00:35:24.748 LBA Format #00: Data Size: 512 Metadata Size: 0 00:35:24.748 00:35:24.748 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:35:24.748 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:35:24.748 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:35:24.748 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:24.748 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:35:24.748 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:24.748 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:24.748 rmmod nvme_tcp 00:35:25.007 rmmod nvme_fabrics 00:35:25.007 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:25.007 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:35:25.007 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:35:25.007 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:35:25.007 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:35:25.007 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:35:25.007 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:35:25.007 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:35:25.007 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # iptables-save 00:35:25.007 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:35:25.007 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # iptables-restore 00:35:25.007 12:56:50 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:25.007 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:25.007 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:25.007 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:25.007 12:56:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:26.913 12:56:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:26.913 12:56:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:35:26.913 12:56:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:26.913 12:56:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # echo 0 00:35:26.913 12:56:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:26.913 12:56:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:26.913 12:56:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:26.913 12:56:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:26.913 12:56:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:35:26.913 12:56:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:35:27.172 12:56:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@722 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:29.709 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:35:29.968 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:29.968 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:29.968 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:29.968 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:29.968 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:29.968 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:29.968 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:29.968 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:29.968 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:29.968 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:29.968 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:30.227 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:30.227 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:30.227 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:30.227 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:30.227 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:30.796 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:35:31.055 00:35:31.055 real 0m16.877s 00:35:31.055 user 0m4.517s 00:35:31.055 sys 0m8.765s 00:35:31.055 12:56:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:31.055 
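
clean_kernel_target, traced just above, is the mirror image of the setup: disable and unlink first, then remove directories, since configfs will not rmdir a subsystem that is still exported through a port. As with setup, xtrace hides the redirect target of the 'echo 0', so the enable flag below is an assumption:

echo 0 > "$sub/namespaces/1/enable"    # assumed target of the traced 'echo 0'
rm -f "$port/subsystems/$nqn"          # unexport before tearing anything down
rmdir "$sub/namespaces/1" "$port" "$sub"
modprobe -r nvmet_tcp nvmet            # possible only once no holders remain
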
12:56:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:31.055 ************************************ 00:35:31.055 END TEST nvmf_identify_kernel_target 00:35:31.055 ************************************ 00:35:31.055 12:56:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:35:31.055 12:56:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:31.055 12:56:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:31.055 12:56:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.055 ************************************ 00:35:31.055 START TEST nvmf_auth_host 00:35:31.055 ************************************ 00:35:31.055 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:35:31.055 * Looking for test storage... 00:35:31.314 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:31.314 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:31.314 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lcov --version 00:35:31.314 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:31.314 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:31.314 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:31.314 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:31.314 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:31.314 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:35:31.314 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:35:31.314 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:35:31.314 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:35:31.314 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:35:31.314 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:35:31.314 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:35:31.314 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:31.314 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:35:31.314 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:35:31.314 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:31.314 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:31.314 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:35:31.314 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:31.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.315 --rc genhtml_branch_coverage=1 00:35:31.315 --rc genhtml_function_coverage=1 00:35:31.315 --rc genhtml_legend=1 00:35:31.315 --rc geninfo_all_blocks=1 00:35:31.315 --rc geninfo_unexecuted_blocks=1 00:35:31.315 00:35:31.315 ' 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:31.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.315 --rc genhtml_branch_coverage=1 00:35:31.315 --rc genhtml_function_coverage=1 00:35:31.315 --rc genhtml_legend=1 00:35:31.315 --rc geninfo_all_blocks=1 00:35:31.315 --rc geninfo_unexecuted_blocks=1 00:35:31.315 00:35:31.315 ' 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:31.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.315 --rc genhtml_branch_coverage=1 00:35:31.315 --rc genhtml_function_coverage=1 00:35:31.315 --rc genhtml_legend=1 00:35:31.315 --rc geninfo_all_blocks=1 00:35:31.315 --rc geninfo_unexecuted_blocks=1 00:35:31.315 00:35:31.315 ' 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:31.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.315 --rc genhtml_branch_coverage=1 00:35:31.315 --rc genhtml_function_coverage=1 00:35:31.315 --rc genhtml_legend=1 00:35:31.315 --rc geninfo_all_blocks=1 00:35:31.315 --rc geninfo_unexecuted_blocks=1 00:35:31.315 00:35:31.315 ' 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:31.315 12:56:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:31.315 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:35:31.315 12:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:35:37.887 12:57:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:37.887 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:37.887 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:37.887 12:57:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ up == up ]] 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:37.887 Found net devices under 0000:af:00.0: cvl_0_0 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ up == up ]] 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:37.887 Found net devices under 0000:af:00.1: cvl_0_1 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # is_hw=yes 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:37.887 12:57:02 
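
gather_supported_nvmf_pci_devs, traced above, whitelists NICs by PCI ID (E810 0x1592/0x159b, X722 0x37d2, and a set of Mellanox IDs) and then resolves each surviving function to its netdev through sysfs; the lookup behind the "Found net devices" lines is simply:

pci=0000:af:00.0
ls "/sys/bus/pci/devices/$pci/net/"    # -> cvl_0_0, the port's renamed netdev
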
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:37.887 12:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:37.887 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:37.887 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:37.887 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.343 ms 00:35:37.887 00:35:37.887 --- 10.0.0.2 ping statistics --- 00:35:37.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:37.887 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:35:37.887 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:37.887 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:37.887 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:35:37.887 00:35:37.888 --- 10.0.0.1 ping statistics --- 00:35:37.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:37.888 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # return 0 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # nvmfpid=556276 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # waitforlisten 556276 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 556276 ']' 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
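
nvmf_tcp_init, traced across the preceding entries, turns the two E810 ports of a single host into an initiator/target pair: the target-side port cvl_0_0 is moved into a private network namespace and the SPDK target is started inside it, so the test drives real NICs over a real TCP path without needing a second machine. Condensed from the trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator keeps the other port
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target: 0.343 ms above
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator: 0.209 ms above
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -L nvme_auth                      # -L nvme_auth enables auth debug tracing
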
00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=9a8275a993edc588ed4fb8b208e87e65 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.kQx 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 9a8275a993edc588ed4fb8b208e87e65 0 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 9a8275a993edc588ed4fb8b208e87e65 0 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=9a8275a993edc588ed4fb8b208e87e65 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.kQx 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.kQx 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.kQx 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:35:37.888 12:57:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha512 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=64 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=e2fcb2a891c3a5aabd303e770a510fea6c0080325d592e6a132b873802b9eb6b 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.JRr 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key e2fcb2a891c3a5aabd303e770a510fea6c0080325d592e6a132b873802b9eb6b 3 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 e2fcb2a891c3a5aabd303e770a510fea6c0080325d592e6a132b873802b9eb6b 3 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=e2fcb2a891c3a5aabd303e770a510fea6c0080325d592e6a132b873802b9eb6b 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=3 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.JRr 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.JRr 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.JRr 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=ed8ee67121b8cb23f257a82c8cf6d14c829c1156f293d03e 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.muX 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key ed8ee67121b8cb23f257a82c8cf6d14c829c1156f293d03e 0 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 ed8ee67121b8cb23f257a82c8cf6d14c829c1156f293d03e 0 
00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=ed8ee67121b8cb23f257a82c8cf6d14c829c1156f293d03e 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.muX 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.muX 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.muX 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha384 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=9f04878c5128dd4ff2ce45261d6763acce107f1c83110249 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.yXL 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 9f04878c5128dd4ff2ce45261d6763acce107f1c83110249 2 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 9f04878c5128dd4ff2ce45261d6763acce107f1c83110249 2 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=9f04878c5128dd4ff2ce45261d6763acce107f1c83110249 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=2 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.yXL 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.yXL 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.yXL 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:35:37.888 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:37.888 12:57:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha256 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=8d89d3c8b246d0ba3fd1d7a325792d93 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.VLB 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 8d89d3c8b246d0ba3fd1d7a325792d93 1 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 8d89d3c8b246d0ba3fd1d7a325792d93 1 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=8d89d3c8b246d0ba3fd1d7a325792d93 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=1 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.VLB 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.VLB 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.VLB 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha256 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=7b98d069f1b5a5a01721ca84e8fc3d9f 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.WMV 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 7b98d069f1b5a5a01721ca84e8fc3d9f 1 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 7b98d069f1b5a5a01721ca84e8fc3d9f 1 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # 
key=7b98d069f1b5a5a01721ca84e8fc3d9f 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=1 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.WMV 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.WMV 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.WMV 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha384 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=c2f1043d5e6e99629b6cc17ccc3e0ebb44b914ad8114403d 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.2Xq 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key c2f1043d5e6e99629b6cc17ccc3e0ebb44b914ad8114403d 2 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 c2f1043d5e6e99629b6cc17ccc3e0ebb44b914ad8114403d 2 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=c2f1043d5e6e99629b6cc17ccc3e0ebb44b914ad8114403d 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=2 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.2Xq 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.2Xq 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.2Xq 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:35:37.889 12:57:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=ceeca134d0c99797ff4539d0a8468774 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.vwA 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key ceeca134d0c99797ff4539d0a8468774 0 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 ceeca134d0c99797ff4539d0a8468774 0 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=ceeca134d0c99797ff4539d0a8468774 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.vwA 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.vwA 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.vwA 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha512 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=64 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=fa604a48618fbee952316170df174744a6567561f52f1e7df0638d36f799c47c 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.SKJ 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key fa604a48618fbee952316170df174744a6567561f52f1e7df0638d36f799c47c 3 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 fa604a48618fbee952316170df174744a6567561f52f1e7df0638d36f799c47c 3 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=fa604a48618fbee952316170df174744a6567561f52f1e7df0638d36f799c47c 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=3 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@729 -- # python - 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.SKJ 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.SKJ 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.SKJ 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 556276 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 556276 ']' 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:37.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:37.889 12:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.kQx 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.JRr ]] 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.JRr 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.muX 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.yXL ]] 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.yXL 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.VLB 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.WMV ]] 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.WMV 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.2Xq 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.vwA ]] 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.vwA 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.SKJ 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:38.148 12:57:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # local block nvme 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # [[ ! 
-e /sys/module/nvmet ]] 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@666 -- # modprobe nvmet 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:38.148 12:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:40.682 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:35:40.942 Waiting for block devices as requested 00:35:41.201 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:41.201 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:41.459 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:41.459 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:41.459 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:41.459 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:41.719 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:41.719 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:41.719 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:41.719 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:41.978 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:41.978 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:41.978 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:42.237 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:42.237 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:42.237 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:42.237 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:43.175 12:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:35:43.175 12:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:43.175 12:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:35:43.175 12:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:35:43.175 12:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:43.175 12:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:35:43.175 12:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:35:43.175 12:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:35:43.175 12:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:43.175 No valid GPT data, bailing 00:35:43.175 12:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:43.175 12:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:35:43.175 12:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:35:43.175 12:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:35:43.175 12:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:35:43.175 12:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme1n1 ]] 00:35:43.175 12:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme1n1 00:35:43.175 12:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:35:43.175 12:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:35:43.175 12:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:35:43.175 12:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme1n1 00:35:43.175 12:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:35:43.175 12:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:35:43.175 No valid GPT data, bailing 00:35:43.175 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:35:43.175 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:35:43.175 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:35:43.175 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme1n1 00:35:43.175 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:35:43.175 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme1n2 ]] 00:35:43.175 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme1n2 00:35:43.175 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:35:43.175 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:35:43.175 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ host-managed != none ]] 00:35:43.175 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # continue 00:35:43.175 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # [[ -b /dev/nvme1n1 ]] 00:35:43.175 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:43.175 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:43.175 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:43.175 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:35:43.175 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo 1 00:35:43.175 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@692 -- # echo /dev/nvme1n1 00:35:43.175 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:35:43.175 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:35:43.175 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo tcp 00:35:43.175 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 4420 00:35:43.175 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo ipv4 00:35:43.175 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:43.175 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 
-a 10.0.0.1 -t tcp -s 4420 00:35:43.175 00:35:43.175 Discovery Log Number of Records 2, Generation counter 2 00:35:43.175 =====Discovery Log Entry 0====== 00:35:43.175 trtype: tcp 00:35:43.175 adrfam: ipv4 00:35:43.175 subtype: current discovery subsystem 00:35:43.175 treq: not specified, sq flow control disable supported 00:35:43.175 portid: 1 00:35:43.175 trsvcid: 4420 00:35:43.175 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:43.175 traddr: 10.0.0.1 00:35:43.175 eflags: none 00:35:43.176 sectype: none 00:35:43.176 =====Discovery Log Entry 1====== 00:35:43.176 trtype: tcp 00:35:43.176 adrfam: ipv4 00:35:43.176 subtype: nvme subsystem 00:35:43.176 treq: not specified, sq flow control disable supported 00:35:43.176 portid: 1 00:35:43.176 trsvcid: 4420 00:35:43.176 subnqn: nqn.2024-02.io.spdk:cnode0 00:35:43.176 traddr: 10.0.0.1 00:35:43.176 eflags: none 00:35:43.176 sectype: none 00:35:43.176 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:43.176 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:35:43.176 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:35:43.176 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:43.176 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:43.176 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:43.176 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:43.176 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:43.176 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQ4ZWU2NzEyMWI4Y2IyM2YyNTdhODJjOGNmNmQxNGM4MjljMTE1NmYyOTNkMDNlp3D+nw==: 00:35:43.176 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWYwNDg3OGM1MTI4ZGQ0ZmYyY2U0NTI2MWQ2NzYzYWNjZTEwN2YxYzgzMTEwMjQ5I7YJUQ==: 00:35:43.176 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:43.176 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:43.176 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWQ4ZWU2NzEyMWI4Y2IyM2YyNTdhODJjOGNmNmQxNGM4MjljMTE1NmYyOTNkMDNlp3D+nw==: 00:35:43.176 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWYwNDg3OGM1MTI4ZGQ0ZmYyY2U0NTI2MWQ2NzYzYWNjZTEwN2YxYzgzMTEwMjQ5I7YJUQ==: ]] 00:35:43.176 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWYwNDg3OGM1MTI4ZGQ0ZmYyY2U0NTI2MWQ2NzYzYWNjZTEwN2YxYzgzMTEwMjQ5I7YJUQ==: 00:35:43.176 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:35:43.176 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:35:43.176 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:35:43.176 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:43.176 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:35:43.176 12:57:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:43.176 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:35:43.176 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:43.176 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:43.176 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:43.176 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:43.176 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.176 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.435 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.435 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:43.435 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:43.435 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:43.435 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:43.435 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:43.435 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:43.435 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:43.435 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:43.435 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:43.435 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:43.435 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:43.435 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:43.435 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.435 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.435 nvme0n1 00:35:43.435 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.435 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:43.435 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:43.435 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.435 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.435 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.435 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:43.435 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # 
rpc_cmd bdev_nvme_detach_controller nvme0 00:35:43.435 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.435 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.435 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.435 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:43.435 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:43.435 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:43.435 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:35:43.435 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:43.435 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:43.435 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:43.435 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:43.435 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE4Mjc1YTk5M2VkYzU4OGVkNGZiOGIyMDhlODdlNjVrdLbM: 00:35:43.435 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTJmY2IyYTg5MWMzYTVhYWJkMzAzZTc3MGE1MTBmZWE2YzAwODAzMjVkNTkyZTZhMTMyYjg3MzgwMmI5ZWI2Yjic3b0=: 00:35:43.436 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:43.436 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:43.436 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE4Mjc1YTk5M2VkYzU4OGVkNGZiOGIyMDhlODdlNjVrdLbM: 00:35:43.436 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTJmY2IyYTg5MWMzYTVhYWJkMzAzZTc3MGE1MTBmZWE2YzAwODAzMjVkNTkyZTZhMTMyYjg3MzgwMmI5ZWI2Yjic3b0=: ]] 00:35:43.436 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTJmY2IyYTg5MWMzYTVhYWJkMzAzZTc3MGE1MTBmZWE2YzAwODAzMjVkNTkyZTZhMTMyYjg3MzgwMmI5ZWI2Yjic3b0=: 00:35:43.436 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:35:43.436 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:43.436 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:43.436 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:43.436 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:43.436 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:43.436 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:43.436 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.436 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.436 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.436 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:43.436 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@765 -- # local ip 00:35:43.436 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:43.436 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:43.436 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:43.436 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:43.436 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:43.436 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:43.436 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:43.436 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:43.436 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:43.436 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:43.436 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.436 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.695 nvme0n1 00:35:43.695 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.695 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:43.695 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.695 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:43.695 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.695 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.695 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:43.695 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:43.695 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.695 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.695 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.695 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:43.695 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:43.695 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:43.695 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:43.695 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:43.695 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:43.695 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQ4ZWU2NzEyMWI4Y2IyM2YyNTdhODJjOGNmNmQxNGM4MjljMTE1NmYyOTNkMDNlp3D+nw==: 00:35:43.695 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:OWYwNDg3OGM1MTI4ZGQ0ZmYyY2U0NTI2MWQ2NzYzYWNjZTEwN2YxYzgzMTEwMjQ5I7YJUQ==: 00:35:43.695 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:43.695 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:43.695 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWQ4ZWU2NzEyMWI4Y2IyM2YyNTdhODJjOGNmNmQxNGM4MjljMTE1NmYyOTNkMDNlp3D+nw==: 00:35:43.695 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWYwNDg3OGM1MTI4ZGQ0ZmYyY2U0NTI2MWQ2NzYzYWNjZTEwN2YxYzgzMTEwMjQ5I7YJUQ==: ]] 00:35:43.695 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWYwNDg3OGM1MTI4ZGQ0ZmYyY2U0NTI2MWQ2NzYzYWNjZTEwN2YxYzgzMTEwMjQ5I7YJUQ==: 00:35:43.695 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:35:43.695 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:43.695 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:43.695 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:43.695 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:43.695 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:43.695 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:43.695 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.696 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.696 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.696 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:43.696 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:43.696 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:43.696 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:43.696 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:43.696 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:43.696 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:43.696 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:43.696 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:43.696 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:43.696 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:43.696 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:43.696 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.696 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
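Each connect_authenticate pass above has the same shape: provision the DH-HMAC-CHAP material on the kernel target's configfs host entry, then attach from the SPDK host side with the matching keyring entries. A sketch of the iteration just traced (keyid 1), assuming the stock nvmet dhchap_* attribute names; the DHHC-1 secrets are the ones registered with keyring_file_add_key earlier, truncated here:

    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)'      > "$host/dhchap_hash"       # digest used for DH-HMAC-CHAP
    echo ffdhe2048           > "$host/dhchap_dhgroup"    # DH group offered by the target
    echo 'DHHC-1:00:ZWQ4...' > "$host/dhchap_key"        # host secret (key1, truncated)
    echo 'DHHC-1:02:OWYw...' > "$host/dhchap_ctrl_key"   # controller secret (ckey1, truncated)
    ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

Both sides must agree on the secrets as well as on the digest and DH group; bdev_nvme_set_options restricts what the host will offer, which is how the surrounding loop sweeps the digest/dhgroup combinations one at a time.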
00:35:43.955 nvme0n1 00:35:43.955 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.955 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:43.955 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:43.955 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.955 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.955 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.955 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:43.955 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:43.955 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.955 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.955 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.955 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:43.955 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:35:43.955 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:43.955 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:43.955 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:43.955 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:43.955 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQ4OWQzYzhiMjQ2ZDBiYTNmZDFkN2EzMjU3OTJkOTMgQeOJ: 00:35:43.955 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2I5OGQwNjlmMWI1YTVhMDE3MjFjYTg0ZThmYzNkOWZ2zsnb: 00:35:43.955 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:43.955 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:43.955 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQ4OWQzYzhiMjQ2ZDBiYTNmZDFkN2EzMjU3OTJkOTMgQeOJ: 00:35:43.955 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2I5OGQwNjlmMWI1YTVhMDE3MjFjYTg0ZThmYzNkOWZ2zsnb: ]] 00:35:43.955 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2I5OGQwNjlmMWI1YTVhMDE3MjFjYTg0ZThmYzNkOWZ2zsnb: 00:35:43.955 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:35:43.955 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:43.955 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:43.955 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:43.955 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:43.955 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:43.955 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups 
ffdhe2048 00:35:43.955 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.955 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.955 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.955 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:43.955 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:43.955 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:43.955 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:43.955 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:43.955 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:43.955 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:43.955 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:43.955 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:43.955 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:43.955 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:43.955 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:43.955 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.955 12:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.214 nvme0n1 00:35:44.214 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.214 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:44.214 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:44.214 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.214 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.214 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.214 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:44.214 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:44.214 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.214 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.214 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.214 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:44.214 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:35:44.214 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:44.214 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha256 00:35:44.214 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:44.214 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:44.214 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzJmMTA0M2Q1ZTZlOTk2MjliNmNjMTdjY2MzZTBlYmI0NGI5MTRhZDgxMTQ0MDNkxeiWmA==: 00:35:44.214 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2VlY2ExMzRkMGM5OTc5N2ZmNDUzOWQwYTg0Njg3NzSlTak2: 00:35:44.214 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:44.214 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:44.214 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzJmMTA0M2Q1ZTZlOTk2MjliNmNjMTdjY2MzZTBlYmI0NGI5MTRhZDgxMTQ0MDNkxeiWmA==: 00:35:44.214 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2VlY2ExMzRkMGM5OTc5N2ZmNDUzOWQwYTg0Njg3NzSlTak2: ]] 00:35:44.214 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2VlY2ExMzRkMGM5OTc5N2ZmNDUzOWQwYTg0Njg3NzSlTak2: 00:35:44.214 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:35:44.214 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:44.214 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:44.214 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:44.214 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:44.214 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:44.214 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:44.215 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.215 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.215 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.215 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:44.215 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:44.215 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:44.215 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:44.215 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:44.215 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:44.215 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:44.215 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:44.215 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:44.215 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:44.215 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:44.215 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:44.215 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.215 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.474 nvme0n1 00:35:44.474 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.474 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:44.474 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:44.474 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.474 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.474 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.474 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:44.474 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:44.474 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.474 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.474 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.474 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:44.474 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:35:44.474 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:44.474 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:44.474 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:44.474 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:44.474 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmE2MDRhNDg2MThmYmVlOTUyMzE2MTcwZGYxNzQ3NDRhNjU2NzU2MWY1MmYxZTdkZjA2MzhkMzZmNzk5YzQ3Y2uKyBY=: 00:35:44.475 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:44.475 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:44.475 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:44.475 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmE2MDRhNDg2MThmYmVlOTUyMzE2MTcwZGYxNzQ3NDRhNjU2NzU2MWY1MmYxZTdkZjA2MzhkMzZmNzk5YzQ3Y2uKyBY=: 00:35:44.475 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:44.475 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:35:44.475 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:44.475 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:44.475 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:44.475 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:44.475 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- 
# ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:44.475 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:44.475 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.475 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.475 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.475 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:44.475 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:44.475 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:44.475 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:44.475 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:44.475 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:44.475 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:44.475 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:44.475 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:44.475 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:44.475 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:44.475 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:44.475 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.475 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.475 nvme0n1 00:35:44.475 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.475 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:44.475 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:44.475 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.475 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.475 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.734 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:44.734 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:44.734 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.734 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.734 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.734 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:44.734 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
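Here the outer loop advances from ffdhe2048 to ffdhe3072: host/auth.sh@101-104 replay the same set-key/connect/verify/detach cycle for every (dhgroup, keyid) combination, with the digest fixed at sha256 throughout this excerpt. A reconstructed skeleton of that sweep; the loop heads and both call sites appear verbatim in the xtrace, while the array contents are elided here (the actual secrets are the DHHC-1 strings visible above, and ckeys[4] is empty, so keyid 4 runs without a bidirectional key):

    for dhgroup in "${dhgroups[@]}"; do                      # host/auth.sh@101
        for keyid in "${!keys[@]}"; do                       # host/auth.sh@102
            # Program the target side with this digest/dhgroup/key triple.
            nvmet_auth_set_key sha256 "$dhgroup" "$keyid"    # host/auth.sh@103
            # Attach with the matching host key, verify, detach.
            connect_authenticate sha256 "$dhgroup" "$keyid"  # host/auth.sh@104
        done
    done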
00:35:44.734 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:35:44.734 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:44.734 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:44.734 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:44.734 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:44.734 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE4Mjc1YTk5M2VkYzU4OGVkNGZiOGIyMDhlODdlNjVrdLbM: 00:35:44.734 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTJmY2IyYTg5MWMzYTVhYWJkMzAzZTc3MGE1MTBmZWE2YzAwODAzMjVkNTkyZTZhMTMyYjg3MzgwMmI5ZWI2Yjic3b0=: 00:35:44.734 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:44.734 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:44.735 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE4Mjc1YTk5M2VkYzU4OGVkNGZiOGIyMDhlODdlNjVrdLbM: 00:35:44.735 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTJmY2IyYTg5MWMzYTVhYWJkMzAzZTc3MGE1MTBmZWE2YzAwODAzMjVkNTkyZTZhMTMyYjg3MzgwMmI5ZWI2Yjic3b0=: ]] 00:35:44.735 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTJmY2IyYTg5MWMzYTVhYWJkMzAzZTc3MGE1MTBmZWE2YzAwODAzMjVkNTkyZTZhMTMyYjg3MzgwMmI5ZWI2Yjic3b0=: 00:35:44.735 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:35:44.735 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:44.735 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:44.735 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:44.735 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:44.735 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:44.735 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:44.994 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.994 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.994 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.994 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:44.994 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:44.994 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:44.994 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:44.994 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:44.994 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:44.994 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:44.994 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 
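Every secret in this trace uses the DH-HMAC-CHAP representation DHHC-1:<t>:<base64>:, where the base64 payload is the secret followed by a 4-byte CRC-32 of it, and <t> records the transformation applied to the secret (00 = used as-is, 01/02/03 = SHA-256/384/512 variants); that is the format nvme-cli's gen-dhchap-key emits. A sketch that pulls keyid 0's host secret apart; the CRC byte order and field meanings are my reading of that representation, not something the log itself asserts:

    secret='DHHC-1:00:OWE4Mjc1YTk5M2VkYzU4OGVkNGZiOGIyMDhlODdlNjVrdLbM:'
    payload=$(cut -d: -f3 <<< "$secret")
    base64 -d <<< "$payload" | head -c -4    # the raw secret; an ASCII hex string here
    hex=$(base64 -d <<< "$payload" | xxd -p | tr -d '\n')
    echo "crc32 (little-endian): ${hex: -8}" # trailing 4 bytes guard hand-copied keys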
00:35:44.994 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:44.994 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:44.994 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:44.994 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:44.994 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.994 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.994 nvme0n1 00:35:44.994 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.994 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:44.994 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:44.994 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.994 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.994 12:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.994 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:44.994 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:44.994 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.994 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.994 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.994 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:44.994 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:35:44.994 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:44.994 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:44.994 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:44.994 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:44.994 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQ4ZWU2NzEyMWI4Y2IyM2YyNTdhODJjOGNmNmQxNGM4MjljMTE1NmYyOTNkMDNlp3D+nw==: 00:35:44.994 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWYwNDg3OGM1MTI4ZGQ0ZmYyY2U0NTI2MWQ2NzYzYWNjZTEwN2YxYzgzMTEwMjQ5I7YJUQ==: 00:35:44.994 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:44.994 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:44.994 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWQ4ZWU2NzEyMWI4Y2IyM2YyNTdhODJjOGNmNmQxNGM4MjljMTE1NmYyOTNkMDNlp3D+nw==: 00:35:44.994 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWYwNDg3OGM1MTI4ZGQ0ZmYyY2U0NTI2MWQ2NzYzYWNjZTEwN2YxYzgzMTEwMjQ5I7YJUQ==: ]] 00:35:44.994 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:OWYwNDg3OGM1MTI4ZGQ0ZmYyY2U0NTI2MWQ2NzYzYWNjZTEwN2YxYzgzMTEwMjQ5I7YJUQ==: 00:35:44.994 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:35:44.994 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:44.994 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:44.995 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:44.995 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:44.995 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:44.995 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:44.995 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.995 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.253 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.253 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:45.253 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:45.253 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:45.253 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:45.253 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:45.253 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:45.253 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:45.253 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:45.253 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:45.253 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:45.253 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:45.253 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:45.253 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.253 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.253 nvme0n1 00:35:45.253 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.253 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:45.253 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:45.253 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.253 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.253 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.253 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:45.253 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:45.253 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.253 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.253 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.253 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:45.253 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:35:45.253 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:45.253 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:45.253 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:45.253 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:45.253 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQ4OWQzYzhiMjQ2ZDBiYTNmZDFkN2EzMjU3OTJkOTMgQeOJ: 00:35:45.253 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2I5OGQwNjlmMWI1YTVhMDE3MjFjYTg0ZThmYzNkOWZ2zsnb: 00:35:45.253 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:45.253 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:45.253 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQ4OWQzYzhiMjQ2ZDBiYTNmZDFkN2EzMjU3OTJkOTMgQeOJ: 00:35:45.253 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2I5OGQwNjlmMWI1YTVhMDE3MjFjYTg0ZThmYzNkOWZ2zsnb: ]] 00:35:45.253 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2I5OGQwNjlmMWI1YTVhMDE3MjFjYTg0ZThmYzNkOWZ2zsnb: 00:35:45.253 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:35:45.253 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:45.253 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:45.253 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:45.253 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:45.253 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:45.253 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:45.253 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.253 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.253 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.253 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:45.253 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:45.253 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:45.253 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 
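The nvmf/common.sh@765-779 block that the trace keeps replaying is get_main_ns_ip resolving which address the initiator should dial: it maps the transport to the name of an environment variable and then dereferences that name. A reconstruction from the xtrace; since xtrace prints expanded values only, the transport variable name below is an assumption and error handling is elided:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT ]] && return 1              # trace shows: [[ -z tcp ]]
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}              # holds a variable *name*
        [[ -z ${!ip} ]] && return 1                       # indirect expansion; 10.0.0.1 here
        echo "${!ip}"
    }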
00:35:45.253 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:45.253 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:45.254 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:45.254 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:45.254 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:45.254 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:45.254 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:45.254 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:45.254 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.254 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.512 nvme0n1 00:35:45.512 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.512 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:45.512 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:45.512 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.512 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.512 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.512 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:45.512 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:45.512 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.512 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.512 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.512 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:45.512 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:35:45.512 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:45.512 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:45.512 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:45.512 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:45.512 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzJmMTA0M2Q1ZTZlOTk2MjliNmNjMTdjY2MzZTBlYmI0NGI5MTRhZDgxMTQ0MDNkxeiWmA==: 00:35:45.512 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2VlY2ExMzRkMGM5OTc5N2ZmNDUzOWQwYTg0Njg3NzSlTak2: 00:35:45.512 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:45.512 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo 
ffdhe3072 00:35:45.512 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzJmMTA0M2Q1ZTZlOTk2MjliNmNjMTdjY2MzZTBlYmI0NGI5MTRhZDgxMTQ0MDNkxeiWmA==: 00:35:45.512 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2VlY2ExMzRkMGM5OTc5N2ZmNDUzOWQwYTg0Njg3NzSlTak2: ]] 00:35:45.512 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2VlY2ExMzRkMGM5OTc5N2ZmNDUzOWQwYTg0Njg3NzSlTak2: 00:35:45.512 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:35:45.512 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:45.512 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:45.512 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:45.512 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:45.512 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:45.512 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:45.512 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.512 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.512 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.512 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:45.512 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:45.512 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:45.512 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:45.512 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:45.512 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:45.512 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:45.512 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:45.512 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:45.512 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:45.512 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:45.512 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:45.512 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.512 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.771 nvme0n1 00:35:45.771 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.771 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:45.771 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:35:45.771 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.771 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.771 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.771 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:45.771 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:45.771 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.771 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.771 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.771 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:45.771 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:35:45.771 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:45.771 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:45.771 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:45.771 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:45.771 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmE2MDRhNDg2MThmYmVlOTUyMzE2MTcwZGYxNzQ3NDRhNjU2NzU2MWY1MmYxZTdkZjA2MzhkMzZmNzk5YzQ3Y2uKyBY=: 00:35:45.771 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:45.771 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:45.771 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:45.771 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmE2MDRhNDg2MThmYmVlOTUyMzE2MTcwZGYxNzQ3NDRhNjU2NzU2MWY1MmYxZTdkZjA2MzhkMzZmNzk5YzQ3Y2uKyBY=: 00:35:45.771 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:45.771 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:35:45.771 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:45.771 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:45.771 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:45.771 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:45.771 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:45.771 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:45.771 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.771 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.771 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.771 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:45.771 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # 
local ip 00:35:45.771 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:45.771 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:45.771 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:45.771 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:45.771 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:45.771 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:45.771 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:45.771 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:45.771 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:45.771 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:45.771 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.771 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.030 nvme0n1 00:35:46.030 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.030 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:46.030 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:46.030 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.030 12:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.030 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.030 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:46.030 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:46.030 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.030 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.030 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.030 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:46.030 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:46.030 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:35:46.030 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:46.030 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:46.030 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:46.030 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:46.030 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE4Mjc1YTk5M2VkYzU4OGVkNGZiOGIyMDhlODdlNjVrdLbM: 00:35:46.030 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:ZTJmY2IyYTg5MWMzYTVhYWJkMzAzZTc3MGE1MTBmZWE2YzAwODAzMjVkNTkyZTZhMTMyYjg3MzgwMmI5ZWI2Yjic3b0=: 00:35:46.030 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:46.030 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:46.598 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE4Mjc1YTk5M2VkYzU4OGVkNGZiOGIyMDhlODdlNjVrdLbM: 00:35:46.598 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTJmY2IyYTg5MWMzYTVhYWJkMzAzZTc3MGE1MTBmZWE2YzAwODAzMjVkNTkyZTZhMTMyYjg3MzgwMmI5ZWI2Yjic3b0=: ]] 00:35:46.598 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTJmY2IyYTg5MWMzYTVhYWJkMzAzZTc3MGE1MTBmZWE2YzAwODAzMjVkNTkyZTZhMTMyYjg3MzgwMmI5ZWI2Yjic3b0=: 00:35:46.598 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:35:46.598 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:46.598 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:46.598 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:46.598 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:46.598 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:46.598 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:46.598 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.598 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.598 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.598 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:46.598 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:46.598 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:46.598 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:46.598 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:46.598 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:46.598 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:46.598 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:46.598 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:46.598 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:46.598 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:46.599 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:46.599 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.599 12:57:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.858 nvme0n1 00:35:46.859 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.859 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:46.859 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.859 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:46.859 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.859 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.859 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:46.859 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:46.859 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.859 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.859 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.859 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:46.859 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:35:46.859 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:46.859 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:46.859 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:46.859 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:46.859 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQ4ZWU2NzEyMWI4Y2IyM2YyNTdhODJjOGNmNmQxNGM4MjljMTE1NmYyOTNkMDNlp3D+nw==: 00:35:46.859 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWYwNDg3OGM1MTI4ZGQ0ZmYyY2U0NTI2MWQ2NzYzYWNjZTEwN2YxYzgzMTEwMjQ5I7YJUQ==: 00:35:46.859 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:46.859 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:46.859 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWQ4ZWU2NzEyMWI4Y2IyM2YyNTdhODJjOGNmNmQxNGM4MjljMTE1NmYyOTNkMDNlp3D+nw==: 00:35:46.859 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWYwNDg3OGM1MTI4ZGQ0ZmYyY2U0NTI2MWQ2NzYzYWNjZTEwN2YxYzgzMTEwMjQ5I7YJUQ==: ]] 00:35:46.859 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWYwNDg3OGM1MTI4ZGQ0ZmYyY2U0NTI2MWQ2NzYzYWNjZTEwN2YxYzgzMTEwMjQ5I7YJUQ==: 00:35:46.859 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:35:46.859 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:46.859 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:46.859 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:46.859 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:46.859 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:46.859 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:46.859 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.859 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.859 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.859 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:46.859 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:46.859 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:46.859 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:46.859 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:46.859 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:46.859 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:46.859 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:46.859 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:46.859 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:46.859 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:46.859 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:46.859 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.859 12:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.118 nvme0n1 00:35:47.118 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.118 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:47.118 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.118 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:47.118 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.118 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.118 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:47.118 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:47.118 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.118 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.118 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.118 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:47.118 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 2 00:35:47.118 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:47.118 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:47.118 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:47.118 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:47.118 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQ4OWQzYzhiMjQ2ZDBiYTNmZDFkN2EzMjU3OTJkOTMgQeOJ: 00:35:47.118 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2I5OGQwNjlmMWI1YTVhMDE3MjFjYTg0ZThmYzNkOWZ2zsnb: 00:35:47.118 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:47.118 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:47.118 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQ4OWQzYzhiMjQ2ZDBiYTNmZDFkN2EzMjU3OTJkOTMgQeOJ: 00:35:47.118 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2I5OGQwNjlmMWI1YTVhMDE3MjFjYTg0ZThmYzNkOWZ2zsnb: ]] 00:35:47.118 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2I5OGQwNjlmMWI1YTVhMDE3MjFjYTg0ZThmYzNkOWZ2zsnb: 00:35:47.118 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:35:47.118 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:47.118 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:47.118 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:47.118 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:47.118 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:47.118 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:47.118 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.118 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.118 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.118 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:47.118 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:47.118 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:47.118 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:47.118 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:47.118 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:47.118 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:47.118 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:47.118 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:47.118 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:47.118 
12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:47.118 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:47.118 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.118 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.377 nvme0n1 00:35:47.377 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.377 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:47.377 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:47.377 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.377 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.377 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.637 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:47.637 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:47.637 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.637 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.637 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.637 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:47.637 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:35:47.637 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:47.637 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:47.637 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:47.637 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:47.637 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzJmMTA0M2Q1ZTZlOTk2MjliNmNjMTdjY2MzZTBlYmI0NGI5MTRhZDgxMTQ0MDNkxeiWmA==: 00:35:47.637 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2VlY2ExMzRkMGM5OTc5N2ZmNDUzOWQwYTg0Njg3NzSlTak2: 00:35:47.637 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:47.637 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:47.637 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzJmMTA0M2Q1ZTZlOTk2MjliNmNjMTdjY2MzZTBlYmI0NGI5MTRhZDgxMTQ0MDNkxeiWmA==: 00:35:47.637 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2VlY2ExMzRkMGM5OTc5N2ZmNDUzOWQwYTg0Njg3NzSlTak2: ]] 00:35:47.637 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2VlY2ExMzRkMGM5OTc5N2ZmNDUzOWQwYTg0Njg3NzSlTak2: 00:35:47.637 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:35:47.637 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local 
digest dhgroup keyid ckey 00:35:47.637 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:47.637 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:47.637 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:47.637 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:47.637 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:47.637 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.637 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.637 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.637 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:47.637 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:47.637 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:47.637 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:47.637 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:47.637 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:47.637 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:47.637 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:47.637 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:47.637 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:47.637 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:47.637 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:47.637 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.637 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.896 nvme0n1 00:35:47.897 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.897 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:47.897 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:47.897 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.897 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.897 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.897 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:47.897 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:47.897 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.897 
12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.897 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.897 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:47.897 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:35:47.897 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:47.897 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:47.897 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:47.897 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:47.897 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmE2MDRhNDg2MThmYmVlOTUyMzE2MTcwZGYxNzQ3NDRhNjU2NzU2MWY1MmYxZTdkZjA2MzhkMzZmNzk5YzQ3Y2uKyBY=: 00:35:47.897 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:47.897 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:47.897 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:47.897 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmE2MDRhNDg2MThmYmVlOTUyMzE2MTcwZGYxNzQ3NDRhNjU2NzU2MWY1MmYxZTdkZjA2MzhkMzZmNzk5YzQ3Y2uKyBY=: 00:35:47.897 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:47.897 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:35:47.897 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:47.897 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:47.897 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:47.897 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:47.897 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:47.897 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:47.897 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.897 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.897 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.897 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:47.897 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:47.897 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:47.897 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:47.897 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:47.897 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:47.897 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:47.897 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 
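
The nvmf/common.sh trace above is the host-IP lookup that feeds every attach call: an associative array maps the transport name ("rdma" or "tcp") to the name of the environment variable holding the address, and that name is then dereferenced to produce 10.0.0.1. A minimal sketch of the helper, assuming the transport is carried in TEST_TRANSPORT and resolved with bash indirect expansion (xtrace prints only expanded values, so both of those details are assumptions):

# a minimal sketch of get_main_ns_ip as implied by the nvmf/common.sh@765-779
# trace; TEST_TRANSPORT and the ${!ip} indirection are assumptions
get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP
        ["tcp"]=NVMF_INITIATOR_IP
    )
    # bail out if the transport is unset or has no mapped variable (@771)
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]} # NVMF_INITIATOR_IP for tcp (@772)
    ip=${!ip}                            # dereferences to 10.0.0.1 in this run
    [[ -z $ip ]] && return 1             # @774
    echo "$ip"                           # @779
}
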
00:35:47.897 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:47.897 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:47.897 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:47.897 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:47.897 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.897 12:57:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.156 nvme0n1 00:35:48.156 12:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.156 12:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:48.156 12:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:48.156 12:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.156 12:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.156 12:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.156 12:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:48.156 12:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:48.156 12:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.156 12:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.156 12:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.156 12:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:48.156 12:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:48.156 12:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:35:48.156 12:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:48.156 12:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:48.156 12:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:48.156 12:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:48.156 12:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE4Mjc1YTk5M2VkYzU4OGVkNGZiOGIyMDhlODdlNjVrdLbM: 00:35:48.156 12:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTJmY2IyYTg5MWMzYTVhYWJkMzAzZTc3MGE1MTBmZWE2YzAwODAzMjVkNTkyZTZhMTMyYjg3MzgwMmI5ZWI2Yjic3b0=: 00:35:48.156 12:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:48.156 12:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:49.533 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE4Mjc1YTk5M2VkYzU4OGVkNGZiOGIyMDhlODdlNjVrdLbM: 00:35:49.533 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTJmY2IyYTg5MWMzYTVhYWJkMzAzZTc3MGE1MTBmZWE2YzAwODAzMjVkNTkyZTZhMTMyYjg3MzgwMmI5ZWI2Yjic3b0=: ]] 
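
On the target side, nvmet_auth_set_key (host/auth.sh@42-51, traced here and in the iterations above) selects the digest, DH group, and key pair for one iteration and writes them out; set -x omits redirections, so the write destinations are not visible in this log. A sketch assuming they are the standard Linux nvmet per-host DH-HMAC-CHAP configfs attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) under an assumed hosts/ path:

# nvmet_auth_set_key as reconstructed from the host/auth.sh@42-51 trace; the
# configfs targets below are assumptions, not shown by xtrace
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]}
    # assumed path; the host NQN matches the -q argument used on the initiator
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo "hmac($digest)" > "$host/dhchap_hash"
    echo "$dhgroup" > "$host/dhchap_dhgroup"
    echo "$key" > "$host/dhchap_key"
    # keyid 4 carries no controller key, so the [[ -z ]] guard skips this write
    [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
}

The key0...key4 and ckey0...ckey4 handles passed later to --dhchap-key/--dhchap-ctrlr-key are the initiator-side names for the same DHHC-1 secrets, registered earlier in the test run (outside this excerpt).
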
00:35:49.533 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTJmY2IyYTg5MWMzYTVhYWJkMzAzZTc3MGE1MTBmZWE2YzAwODAzMjVkNTkyZTZhMTMyYjg3MzgwMmI5ZWI2Yjic3b0=: 00:35:49.533 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:35:49.533 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:49.533 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:49.533 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:49.534 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:49.534 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:49.534 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:49.534 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.534 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.534 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.534 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:49.534 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:49.534 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:49.534 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:49.534 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:49.534 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:49.534 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:49.534 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:49.534 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:49.534 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:49.534 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:49.534 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:49.534 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.534 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.792 nvme0n1 00:35:49.792 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.792 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:49.792 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.792 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:49.792 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.792 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.792 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:49.792 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:49.792 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.792 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.051 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.051 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:50.051 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:35:50.051 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:50.051 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:50.051 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:50.051 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:50.051 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQ4ZWU2NzEyMWI4Y2IyM2YyNTdhODJjOGNmNmQxNGM4MjljMTE1NmYyOTNkMDNlp3D+nw==: 00:35:50.051 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWYwNDg3OGM1MTI4ZGQ0ZmYyY2U0NTI2MWQ2NzYzYWNjZTEwN2YxYzgzMTEwMjQ5I7YJUQ==: 00:35:50.051 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:50.051 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:50.051 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWQ4ZWU2NzEyMWI4Y2IyM2YyNTdhODJjOGNmNmQxNGM4MjljMTE1NmYyOTNkMDNlp3D+nw==: 00:35:50.051 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWYwNDg3OGM1MTI4ZGQ0ZmYyY2U0NTI2MWQ2NzYzYWNjZTEwN2YxYzgzMTEwMjQ5I7YJUQ==: ]] 00:35:50.051 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWYwNDg3OGM1MTI4ZGQ0ZmYyY2U0NTI2MWQ2NzYzYWNjZTEwN2YxYzgzMTEwMjQ5I7YJUQ==: 00:35:50.051 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:35:50.051 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:50.051 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:50.051 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:50.051 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:50.051 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:50.051 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:50.051 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.051 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.051 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.051 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:50.051 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@765 -- # local ip 00:35:50.051 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:50.051 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:50.051 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:50.051 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:50.051 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:50.051 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:50.051 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:50.051 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:50.051 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:50.051 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:50.051 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.051 12:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.310 nvme0n1 00:35:50.310 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.310 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:50.310 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.311 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:50.311 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.311 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.311 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:50.311 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:50.311 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.311 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.311 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.311 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:50.311 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:35:50.311 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:50.311 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:50.311 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:50.311 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:50.311 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQ4OWQzYzhiMjQ2ZDBiYTNmZDFkN2EzMjU3OTJkOTMgQeOJ: 00:35:50.311 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:N2I5OGQwNjlmMWI1YTVhMDE3MjFjYTg0ZThmYzNkOWZ2zsnb: 00:35:50.311 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:50.311 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:50.311 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQ4OWQzYzhiMjQ2ZDBiYTNmZDFkN2EzMjU3OTJkOTMgQeOJ: 00:35:50.311 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2I5OGQwNjlmMWI1YTVhMDE3MjFjYTg0ZThmYzNkOWZ2zsnb: ]] 00:35:50.311 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2I5OGQwNjlmMWI1YTVhMDE3MjFjYTg0ZThmYzNkOWZ2zsnb: 00:35:50.311 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:35:50.311 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:50.311 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:50.311 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:50.311 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:50.311 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:50.311 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:50.311 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.311 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.311 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.311 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:50.311 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:50.311 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:50.311 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:50.311 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:50.311 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:50.311 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:50.311 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:50.311 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:50.311 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:50.311 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:50.311 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:50.311 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.311 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.879 nvme0n1 00:35:50.879 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.879 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:50.879 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:50.879 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.879 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.879 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.879 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:50.879 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:50.879 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.879 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.879 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.879 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:50.879 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:35:50.879 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:50.879 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:50.879 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:50.879 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:50.879 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzJmMTA0M2Q1ZTZlOTk2MjliNmNjMTdjY2MzZTBlYmI0NGI5MTRhZDgxMTQ0MDNkxeiWmA==: 00:35:50.879 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2VlY2ExMzRkMGM5OTc5N2ZmNDUzOWQwYTg0Njg3NzSlTak2: 00:35:50.879 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:50.879 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:50.879 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzJmMTA0M2Q1ZTZlOTk2MjliNmNjMTdjY2MzZTBlYmI0NGI5MTRhZDgxMTQ0MDNkxeiWmA==: 00:35:50.879 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2VlY2ExMzRkMGM5OTc5N2ZmNDUzOWQwYTg0Njg3NzSlTak2: ]] 00:35:50.879 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2VlY2ExMzRkMGM5OTc5N2ZmNDUzOWQwYTg0Njg3NzSlTak2: 00:35:50.879 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:35:50.879 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:50.879 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:50.879 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:50.879 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:50.879 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:50.879 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:50.879 12:57:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.879 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.879 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.879 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:50.879 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:50.879 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:50.879 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:50.879 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:50.879 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:50.879 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:50.879 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:50.879 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:50.879 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:50.879 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:50.879 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:50.879 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.879 12:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.138 nvme0n1 00:35:51.138 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.139 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:51.139 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:51.139 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.139 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.139 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.398 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:51.398 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:51.398 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.398 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.398 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.398 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:51.398 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:35:51.398 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:51.398 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 
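
Each connect_authenticate round (host/auth.sh@55-65) restricts the SPDK initiator to the single digest/DH-group pair under test, attaches the controller (triggering the DH-HMAC-CHAP handshake and producing the nvme0n1 bdev seen above), confirms the controller name via jq, and detaches before the next key id. The same round issued directly through scripts/rpc.py, assuming rpc_cmd forwards there as in SPDK's autotest harness, with the ffdhe6144/key3 arguments copied from the trace:

# one connect/verify/disconnect round, sketched against scripts/rpc.py
./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key3 --dhchap-ctrlr-key ckey3
[[ $(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
./scripts/rpc.py bdev_nvme_detach_controller nvme0

The loops at host/auth.sh@100-103 repeat this for every digest in "${digests[@]}", every group in "${dhgroups[@]}", and every key id, which is why the same sequence recurs below for ffdhe8192 and then, once the digest loop advances, for sha384 with ffdhe2048.
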
00:35:51.398 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:51.398 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:51.398 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmE2MDRhNDg2MThmYmVlOTUyMzE2MTcwZGYxNzQ3NDRhNjU2NzU2MWY1MmYxZTdkZjA2MzhkMzZmNzk5YzQ3Y2uKyBY=: 00:35:51.398 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:51.398 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:51.398 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:51.398 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmE2MDRhNDg2MThmYmVlOTUyMzE2MTcwZGYxNzQ3NDRhNjU2NzU2MWY1MmYxZTdkZjA2MzhkMzZmNzk5YzQ3Y2uKyBY=: 00:35:51.398 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:51.398 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:35:51.398 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:51.398 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:51.398 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:51.398 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:51.398 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:51.398 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:51.398 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.398 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.398 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.398 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:51.398 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:51.398 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:51.398 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:51.398 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:51.398 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:51.398 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:51.398 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:51.398 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:51.398 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:51.398 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:51.398 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:51.398 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:35:51.398 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.657 nvme0n1 00:35:51.657 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.657 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:51.657 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:51.657 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.657 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.657 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.657 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:51.657 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:51.658 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.658 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.658 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.658 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:51.658 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:51.658 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:35:51.658 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:51.658 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:51.658 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:51.658 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:51.658 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE4Mjc1YTk5M2VkYzU4OGVkNGZiOGIyMDhlODdlNjVrdLbM: 00:35:51.658 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTJmY2IyYTg5MWMzYTVhYWJkMzAzZTc3MGE1MTBmZWE2YzAwODAzMjVkNTkyZTZhMTMyYjg3MzgwMmI5ZWI2Yjic3b0=: 00:35:51.658 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:51.658 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:51.658 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE4Mjc1YTk5M2VkYzU4OGVkNGZiOGIyMDhlODdlNjVrdLbM: 00:35:51.658 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTJmY2IyYTg5MWMzYTVhYWJkMzAzZTc3MGE1MTBmZWE2YzAwODAzMjVkNTkyZTZhMTMyYjg3MzgwMmI5ZWI2Yjic3b0=: ]] 00:35:51.658 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTJmY2IyYTg5MWMzYTVhYWJkMzAzZTc3MGE1MTBmZWE2YzAwODAzMjVkNTkyZTZhMTMyYjg3MzgwMmI5ZWI2Yjic3b0=: 00:35:51.658 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:35:51.658 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:51.658 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:51.658 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:51.658 12:57:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:51.658 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:51.658 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:51.658 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.658 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.658 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.658 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:51.658 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:51.658 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:51.658 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:51.658 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:51.658 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:51.658 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:51.658 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:51.658 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:51.658 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:51.658 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:51.658 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:51.658 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.658 12:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.226 nvme0n1 00:35:52.226 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:52.226 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:52.226 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:52.226 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:52.226 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.226 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:52.485 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:52.485 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:52.485 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:52.485 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.485 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:52.485 12:57:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:52.485 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:35:52.485 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:52.485 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:52.485 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:52.485 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:52.485 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQ4ZWU2NzEyMWI4Y2IyM2YyNTdhODJjOGNmNmQxNGM4MjljMTE1NmYyOTNkMDNlp3D+nw==: 00:35:52.485 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWYwNDg3OGM1MTI4ZGQ0ZmYyY2U0NTI2MWQ2NzYzYWNjZTEwN2YxYzgzMTEwMjQ5I7YJUQ==: 00:35:52.485 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:52.485 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:52.485 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWQ4ZWU2NzEyMWI4Y2IyM2YyNTdhODJjOGNmNmQxNGM4MjljMTE1NmYyOTNkMDNlp3D+nw==: 00:35:52.485 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWYwNDg3OGM1MTI4ZGQ0ZmYyY2U0NTI2MWQ2NzYzYWNjZTEwN2YxYzgzMTEwMjQ5I7YJUQ==: ]] 00:35:52.485 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWYwNDg3OGM1MTI4ZGQ0ZmYyY2U0NTI2MWQ2NzYzYWNjZTEwN2YxYzgzMTEwMjQ5I7YJUQ==: 00:35:52.485 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:35:52.485 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:52.485 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:52.485 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:52.485 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:52.485 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:52.485 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:52.485 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:52.485 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.485 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:52.485 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:52.485 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:52.485 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:52.485 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:52.485 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:52.485 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:52.485 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:52.485 12:57:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:52.485 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:52.485 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:52.485 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:52.485 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:52.485 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:52.485 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.053 nvme0n1 00:35:53.053 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:53.053 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:53.053 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:53.053 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:53.053 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.053 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:53.053 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:53.053 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:53.053 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:53.053 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.053 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:53.053 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:53.053 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:35:53.053 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:53.053 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:53.053 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:53.053 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:53.053 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQ4OWQzYzhiMjQ2ZDBiYTNmZDFkN2EzMjU3OTJkOTMgQeOJ: 00:35:53.053 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2I5OGQwNjlmMWI1YTVhMDE3MjFjYTg0ZThmYzNkOWZ2zsnb: 00:35:53.053 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:53.053 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:53.053 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQ4OWQzYzhiMjQ2ZDBiYTNmZDFkN2EzMjU3OTJkOTMgQeOJ: 00:35:53.053 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2I5OGQwNjlmMWI1YTVhMDE3MjFjYTg0ZThmYzNkOWZ2zsnb: ]] 00:35:53.053 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:N2I5OGQwNjlmMWI1YTVhMDE3MjFjYTg0ZThmYzNkOWZ2zsnb: 00:35:53.053 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:35:53.053 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:53.053 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:53.053 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:53.053 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:53.053 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:53.053 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:53.053 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:53.053 12:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.053 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:53.053 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:53.053 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:53.053 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:53.053 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:53.053 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:53.053 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:53.053 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:53.053 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:53.053 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:53.053 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:53.054 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:53.054 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:53.054 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:53.054 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.622 nvme0n1 00:35:53.622 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:53.622 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:53.622 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:53.622 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:53.622 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.622 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:53.622 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:35:53.622 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:53.622 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:53.622 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.622 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:53.622 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:53.622 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:35:53.622 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:53.622 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:53.622 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:53.622 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:53.622 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzJmMTA0M2Q1ZTZlOTk2MjliNmNjMTdjY2MzZTBlYmI0NGI5MTRhZDgxMTQ0MDNkxeiWmA==: 00:35:53.622 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2VlY2ExMzRkMGM5OTc5N2ZmNDUzOWQwYTg0Njg3NzSlTak2: 00:35:53.622 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:53.622 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:53.622 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzJmMTA0M2Q1ZTZlOTk2MjliNmNjMTdjY2MzZTBlYmI0NGI5MTRhZDgxMTQ0MDNkxeiWmA==: 00:35:53.622 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2VlY2ExMzRkMGM5OTc5N2ZmNDUzOWQwYTg0Njg3NzSlTak2: ]] 00:35:53.622 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2VlY2ExMzRkMGM5OTc5N2ZmNDUzOWQwYTg0Njg3NzSlTak2: 00:35:53.622 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:35:53.622 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:53.622 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:53.622 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:53.622 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:53.622 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:53.622 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:53.622 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:53.622 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.622 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:53.622 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:53.622 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:53.622 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:53.622 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local 
-A ip_candidates 00:35:53.622 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:53.622 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:53.622 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:53.622 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:53.622 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:53.622 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:53.622 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:53.622 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:53.622 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:53.622 12:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.190 nvme0n1 00:35:54.190 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.190 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:54.190 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:54.190 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.190 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.450 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.450 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:54.450 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:54.450 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.450 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.450 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.450 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:54.450 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:35:54.450 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:54.450 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:54.450 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:54.450 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:54.450 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmE2MDRhNDg2MThmYmVlOTUyMzE2MTcwZGYxNzQ3NDRhNjU2NzU2MWY1MmYxZTdkZjA2MzhkMzZmNzk5YzQ3Y2uKyBY=: 00:35:54.450 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:54.450 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:54.450 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:54.450 
12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmE2MDRhNDg2MThmYmVlOTUyMzE2MTcwZGYxNzQ3NDRhNjU2NzU2MWY1MmYxZTdkZjA2MzhkMzZmNzk5YzQ3Y2uKyBY=: 00:35:54.450 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:54.450 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:35:54.450 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:54.450 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:54.450 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:54.450 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:54.450 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:54.450 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:54.450 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.450 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.450 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.450 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:54.450 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:54.450 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:54.450 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:54.450 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:54.450 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:54.450 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:54.450 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:54.450 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:54.450 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:54.450 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:54.450 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:54.450 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.450 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.018 nvme0n1 00:35:55.018 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.018 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:55.018 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.018 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:55.018 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.018 
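The host side of each iteration is the two-RPC pair that follows in the trace: bdev_nvme_set_options restricts the initiator to the digest/DH group under test, then bdev_nvme_attach_controller connects using the keyring entries for that key ID. The flags below are copied from the trace; rpc_cmd is the suite's wrapper around SPDK's scripts/rpc.py.

    # Key ID 3 (bidirectional: host key plus controller key):
    rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3
    # Key ID 4 defines no controller key, so the ${ckeys[keyid]:+...} expansion
    # at host/auth.sh@58 drops --dhchap-ctrlr-key and authentication is one-way:
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4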
12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.018 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:55.018 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:55.018 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.018 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.018 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.018 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:55.018 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:55.018 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:55.018 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:35:55.018 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:55.018 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:55.018 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:55.018 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:55.018 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE4Mjc1YTk5M2VkYzU4OGVkNGZiOGIyMDhlODdlNjVrdLbM: 00:35:55.018 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTJmY2IyYTg5MWMzYTVhYWJkMzAzZTc3MGE1MTBmZWE2YzAwODAzMjVkNTkyZTZhMTMyYjg3MzgwMmI5ZWI2Yjic3b0=: 00:35:55.018 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:55.018 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:55.018 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE4Mjc1YTk5M2VkYzU4OGVkNGZiOGIyMDhlODdlNjVrdLbM: 00:35:55.018 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTJmY2IyYTg5MWMzYTVhYWJkMzAzZTc3MGE1MTBmZWE2YzAwODAzMjVkNTkyZTZhMTMyYjg3MzgwMmI5ZWI2Yjic3b0=: ]] 00:35:55.018 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTJmY2IyYTg5MWMzYTVhYWJkMzAzZTc3MGE1MTBmZWE2YzAwODAzMjVkNTkyZTZhMTMyYjg3MzgwMmI5ZWI2Yjic3b0=: 00:35:55.018 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:35:55.019 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:55.019 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:55.019 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:55.019 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:55.019 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:55.019 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:55.019 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.019 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:35:55.019 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.019 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:55.019 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:55.019 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:55.019 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:55.019 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:55.019 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:55.019 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:55.019 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:55.019 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:55.019 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:55.019 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:55.019 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:55.019 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.019 12:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.278 nvme0n1 00:35:55.278 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.278 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:55.278 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:55.278 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.278 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.278 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.278 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:55.278 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:55.278 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.278 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.278 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.278 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:55.278 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:35:55.278 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:55.278 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:55.278 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:55.278 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
keyid=1 00:35:55.278 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQ4ZWU2NzEyMWI4Y2IyM2YyNTdhODJjOGNmNmQxNGM4MjljMTE1NmYyOTNkMDNlp3D+nw==: 00:35:55.278 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWYwNDg3OGM1MTI4ZGQ0ZmYyY2U0NTI2MWQ2NzYzYWNjZTEwN2YxYzgzMTEwMjQ5I7YJUQ==: 00:35:55.278 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:55.278 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:55.278 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWQ4ZWU2NzEyMWI4Y2IyM2YyNTdhODJjOGNmNmQxNGM4MjljMTE1NmYyOTNkMDNlp3D+nw==: 00:35:55.278 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWYwNDg3OGM1MTI4ZGQ0ZmYyY2U0NTI2MWQ2NzYzYWNjZTEwN2YxYzgzMTEwMjQ5I7YJUQ==: ]] 00:35:55.278 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWYwNDg3OGM1MTI4ZGQ0ZmYyY2U0NTI2MWQ2NzYzYWNjZTEwN2YxYzgzMTEwMjQ5I7YJUQ==: 00:35:55.278 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:35:55.278 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:55.278 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:55.278 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:55.278 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:55.278 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:55.278 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:55.278 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.278 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.278 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.278 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:55.278 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:55.278 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:55.278 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:55.278 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:55.278 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:55.278 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:55.278 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:55.278 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:55.278 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:55.278 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:55.279 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:55.279 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.279 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.538 nvme0n1 00:35:55.538 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.538 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:55.538 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:55.538 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.538 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.538 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.538 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:55.538 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:55.538 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.538 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.538 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.538 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:55.538 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:35:55.538 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:55.538 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:55.538 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:55.538 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:55.538 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQ4OWQzYzhiMjQ2ZDBiYTNmZDFkN2EzMjU3OTJkOTMgQeOJ: 00:35:55.538 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2I5OGQwNjlmMWI1YTVhMDE3MjFjYTg0ZThmYzNkOWZ2zsnb: 00:35:55.538 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:55.538 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:55.538 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQ4OWQzYzhiMjQ2ZDBiYTNmZDFkN2EzMjU3OTJkOTMgQeOJ: 00:35:55.538 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2I5OGQwNjlmMWI1YTVhMDE3MjFjYTg0ZThmYzNkOWZ2zsnb: ]] 00:35:55.538 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2I5OGQwNjlmMWI1YTVhMDE3MjFjYTg0ZThmYzNkOWZ2zsnb: 00:35:55.538 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:35:55.538 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:55.538 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:55.538 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:55.538 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:55.538 12:57:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:55.538 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:55.538 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.538 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.538 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.538 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:55.538 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:55.538 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:55.538 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:55.538 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:55.538 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:55.538 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:55.538 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:55.538 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:55.538 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:55.538 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:55.538 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:55.538 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.538 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.538 nvme0n1 00:35:55.538 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.538 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:55.538 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:55.538 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.538 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.538 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.798 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:55.798 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:55.798 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.798 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.798 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.798 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:55.798 12:57:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:35:55.798 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:55.798 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:55.798 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:55.798 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:55.798 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzJmMTA0M2Q1ZTZlOTk2MjliNmNjMTdjY2MzZTBlYmI0NGI5MTRhZDgxMTQ0MDNkxeiWmA==: 00:35:55.798 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2VlY2ExMzRkMGM5OTc5N2ZmNDUzOWQwYTg0Njg3NzSlTak2: 00:35:55.798 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:55.798 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:55.798 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzJmMTA0M2Q1ZTZlOTk2MjliNmNjMTdjY2MzZTBlYmI0NGI5MTRhZDgxMTQ0MDNkxeiWmA==: 00:35:55.798 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2VlY2ExMzRkMGM5OTc5N2ZmNDUzOWQwYTg0Njg3NzSlTak2: ]] 00:35:55.798 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2VlY2ExMzRkMGM5OTc5N2ZmNDUzOWQwYTg0Njg3NzSlTak2: 00:35:55.798 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:35:55.798 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:55.798 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:55.798 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:55.798 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:55.798 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:55.798 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:55.798 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.798 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.798 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.798 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:55.798 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:55.798 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:55.798 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:55.798 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:55.798 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:55.798 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:55.798 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:55.798 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 
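get_main_ns_ip (the nvmf/common.sh@765-@779 entries that recur before every attach) resolves which address the initiator should dial: it maps the transport to the environment variable holding the target IP and dereferences it. Reassembled from the trace, with the TEST_TRANSPORT variable name and the exact error paths as assumptions:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z ${TEST_TRANSPORT} ]] && return 1             # trace shows 'tcp'
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}               # -> NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1                        # indirect expansion
        echo "${!ip}"                                      # 10.0.0.1 in this run
    }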
00:35:55.798 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:55.798 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:55.798 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:55.798 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.798 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.798 nvme0n1 00:35:55.798 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.798 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:55.798 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:55.798 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.798 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.798 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.798 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:55.798 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:55.798 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.798 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.798 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.798 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:55.798 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:35:55.799 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:55.799 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:55.799 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:55.799 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:55.799 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmE2MDRhNDg2MThmYmVlOTUyMzE2MTcwZGYxNzQ3NDRhNjU2NzU2MWY1MmYxZTdkZjA2MzhkMzZmNzk5YzQ3Y2uKyBY=: 00:35:55.799 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:55.799 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:55.799 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:55.799 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmE2MDRhNDg2MThmYmVlOTUyMzE2MTcwZGYxNzQ3NDRhNjU2NzU2MWY1MmYxZTdkZjA2MzhkMzZmNzk5YzQ3Y2uKyBY=: 00:35:55.799 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:55.799 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:35:55.799 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:55.799 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
digest=sha384 00:35:55.799 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:55.799 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:55.799 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:55.799 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:55.799 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.799 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.058 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.058 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:56.058 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:56.058 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:56.058 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:56.058 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:56.058 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:56.058 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:56.058 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:56.058 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:56.058 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:56.058 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:56.058 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:56.058 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.058 12:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.058 nvme0n1 00:35:56.058 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.058 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:56.058 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:56.058 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.058 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.058 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.058 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:56.058 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:56.058 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.058 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.058 12:57:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.058 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:56.058 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:56.058 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:35:56.059 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:56.059 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:56.059 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:56.059 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:56.059 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE4Mjc1YTk5M2VkYzU4OGVkNGZiOGIyMDhlODdlNjVrdLbM: 00:35:56.059 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTJmY2IyYTg5MWMzYTVhYWJkMzAzZTc3MGE1MTBmZWE2YzAwODAzMjVkNTkyZTZhMTMyYjg3MzgwMmI5ZWI2Yjic3b0=: 00:35:56.059 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:56.059 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:56.059 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE4Mjc1YTk5M2VkYzU4OGVkNGZiOGIyMDhlODdlNjVrdLbM: 00:35:56.059 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTJmY2IyYTg5MWMzYTVhYWJkMzAzZTc3MGE1MTBmZWE2YzAwODAzMjVkNTkyZTZhMTMyYjg3MzgwMmI5ZWI2Yjic3b0=: ]] 00:35:56.059 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTJmY2IyYTg5MWMzYTVhYWJkMzAzZTc3MGE1MTBmZWE2YzAwODAzMjVkNTkyZTZhMTMyYjg3MzgwMmI5ZWI2Yjic3b0=: 00:35:56.059 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:35:56.059 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:56.059 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:56.059 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:56.059 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:56.059 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:56.059 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:56.059 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.059 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.059 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.059 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:56.059 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:56.059 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:56.059 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:56.059 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:56.059 12:57:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:56.059 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:56.059 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:56.059 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:56.059 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:56.059 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:56.059 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:56.059 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.059 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.318 nvme0n1 00:35:56.318 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.318 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:56.318 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:56.318 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.318 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.318 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.318 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:56.318 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:56.318 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.318 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.318 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.318 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:56.318 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:35:56.318 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:56.318 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:56.318 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:56.318 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:56.318 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQ4ZWU2NzEyMWI4Y2IyM2YyNTdhODJjOGNmNmQxNGM4MjljMTE1NmYyOTNkMDNlp3D+nw==: 00:35:56.318 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWYwNDg3OGM1MTI4ZGQ0ZmYyY2U0NTI2MWQ2NzYzYWNjZTEwN2YxYzgzMTEwMjQ5I7YJUQ==: 00:35:56.318 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:56.318 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:56.318 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZWQ4ZWU2NzEyMWI4Y2IyM2YyNTdhODJjOGNmNmQxNGM4MjljMTE1NmYyOTNkMDNlp3D+nw==: 00:35:56.318 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWYwNDg3OGM1MTI4ZGQ0ZmYyY2U0NTI2MWQ2NzYzYWNjZTEwN2YxYzgzMTEwMjQ5I7YJUQ==: ]] 00:35:56.318 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWYwNDg3OGM1MTI4ZGQ0ZmYyY2U0NTI2MWQ2NzYzYWNjZTEwN2YxYzgzMTEwMjQ5I7YJUQ==: 00:35:56.318 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:35:56.318 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:56.318 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:56.318 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:56.318 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:56.318 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:56.318 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:56.318 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.318 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.318 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.318 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:56.318 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:56.318 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:56.318 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:56.318 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:56.318 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:56.318 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:56.318 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:56.319 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:56.319 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:56.319 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:56.319 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:56.319 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.319 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.578 nvme0n1 00:35:56.578 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.578 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:56.578 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:56.578 12:57:22 
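Note that the key1/ckey1-style names passed to --dhchap-key are not the secrets themselves: earlier in auth.sh (outside this excerpt) each DHHC-1 secret is written to a file and registered with SPDK's keyring, after which RPCs refer to it by name. A hedged sketch of that registration step, with the file path and secret as placeholders (the real secrets appear in full in the trace):

    echo -n 'DHHC-1:02:<base64-secret>:' > /tmp/spdk.key1
    rpc.py keyring_file_add_key ckey1 /tmp/spdk.key1

Per the NVMe-oF secret representation, the two digits after DHHC-1: encode how the secret was transformed (00 = cleartext, 01/02/03 = SHA-256/384/512), which is why the key lengths differ across key IDs in this trace.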
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.578 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.578 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.578 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:56.578 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:56.578 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.578 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.578 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.578 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:56.578 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:35:56.578 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:56.578 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:56.578 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:56.578 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:56.578 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQ4OWQzYzhiMjQ2ZDBiYTNmZDFkN2EzMjU3OTJkOTMgQeOJ: 00:35:56.578 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2I5OGQwNjlmMWI1YTVhMDE3MjFjYTg0ZThmYzNkOWZ2zsnb: 00:35:56.578 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:56.578 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:56.578 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQ4OWQzYzhiMjQ2ZDBiYTNmZDFkN2EzMjU3OTJkOTMgQeOJ: 00:35:56.578 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2I5OGQwNjlmMWI1YTVhMDE3MjFjYTg0ZThmYzNkOWZ2zsnb: ]] 00:35:56.578 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2I5OGQwNjlmMWI1YTVhMDE3MjFjYTg0ZThmYzNkOWZ2zsnb: 00:35:56.578 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:35:56.578 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:56.578 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:56.578 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:56.578 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:56.578 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:56.578 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:56.578 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.578 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.578 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.578 12:57:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:56.578 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:56.578 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:56.578 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:56.578 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:56.578 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:56.578 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:56.578 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:56.578 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:56.578 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:56.578 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:56.578 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:56.578 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.578 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.838 nvme0n1 00:35:56.838 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.838 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:56.838 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:56.838 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.838 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.838 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.838 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:56.838 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:56.838 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.838 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.838 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.838 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:56.838 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:35:56.838 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:56.838 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:56.838 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:56.838 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:56.838 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YzJmMTA0M2Q1ZTZlOTk2MjliNmNjMTdjY2MzZTBlYmI0NGI5MTRhZDgxMTQ0MDNkxeiWmA==: 00:35:56.838 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2VlY2ExMzRkMGM5OTc5N2ZmNDUzOWQwYTg0Njg3NzSlTak2: 00:35:56.838 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:56.838 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:56.838 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzJmMTA0M2Q1ZTZlOTk2MjliNmNjMTdjY2MzZTBlYmI0NGI5MTRhZDgxMTQ0MDNkxeiWmA==: 00:35:56.838 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2VlY2ExMzRkMGM5OTc5N2ZmNDUzOWQwYTg0Njg3NzSlTak2: ]] 00:35:56.838 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2VlY2ExMzRkMGM5OTc5N2ZmNDUzOWQwYTg0Njg3NzSlTak2: 00:35:56.838 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:35:56.838 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:56.838 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:56.838 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:56.838 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:56.838 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:56.838 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:56.838 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.838 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.838 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.838 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:56.838 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:56.838 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:56.838 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:56.838 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:56.838 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:56.838 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:56.838 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:56.838 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:56.838 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:56.838 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:56.838 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:56.838 12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.838 
12:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.097 nvme0n1 00:35:57.097 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.097 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:57.097 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:57.097 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.097 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.097 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.097 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:57.097 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:57.097 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.097 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.097 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.097 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:57.097 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:35:57.097 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:57.097 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:57.097 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:57.097 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:57.097 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmE2MDRhNDg2MThmYmVlOTUyMzE2MTcwZGYxNzQ3NDRhNjU2NzU2MWY1MmYxZTdkZjA2MzhkMzZmNzk5YzQ3Y2uKyBY=: 00:35:57.097 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:57.097 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:57.097 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:57.097 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmE2MDRhNDg2MThmYmVlOTUyMzE2MTcwZGYxNzQ3NDRhNjU2NzU2MWY1MmYxZTdkZjA2MzhkMzZmNzk5YzQ3Y2uKyBY=: 00:35:57.097 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:57.097 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:35:57.097 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:57.097 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:57.097 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:57.097 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:57.097 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:57.097 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:57.097 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.097 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.097 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.097 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:57.097 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:57.097 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:57.098 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:57.098 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:57.098 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:57.098 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:57.098 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:57.098 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:57.098 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:57.098 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:57.098 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:57.098 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.098 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.357 nvme0n1 00:35:57.357 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.357 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:57.357 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:57.357 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.357 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.357 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.357 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:57.357 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:57.357 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.357 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.357 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.357 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:57.357 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:57.357 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:35:57.357 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:57.357 12:57:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:57.357 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:57.357 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:57.357 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE4Mjc1YTk5M2VkYzU4OGVkNGZiOGIyMDhlODdlNjVrdLbM: 00:35:57.357 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTJmY2IyYTg5MWMzYTVhYWJkMzAzZTc3MGE1MTBmZWE2YzAwODAzMjVkNTkyZTZhMTMyYjg3MzgwMmI5ZWI2Yjic3b0=: 00:35:57.357 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:57.357 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:57.357 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE4Mjc1YTk5M2VkYzU4OGVkNGZiOGIyMDhlODdlNjVrdLbM: 00:35:57.357 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTJmY2IyYTg5MWMzYTVhYWJkMzAzZTc3MGE1MTBmZWE2YzAwODAzMjVkNTkyZTZhMTMyYjg3MzgwMmI5ZWI2Yjic3b0=: ]] 00:35:57.357 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTJmY2IyYTg5MWMzYTVhYWJkMzAzZTc3MGE1MTBmZWE2YzAwODAzMjVkNTkyZTZhMTMyYjg3MzgwMmI5ZWI2Yjic3b0=: 00:35:57.357 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:35:57.357 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:57.357 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:57.357 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:57.357 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:57.357 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:57.357 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:57.357 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.357 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.357 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.357 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:57.357 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:57.357 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:57.357 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:57.357 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:57.357 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:57.357 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:57.357 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:57.357 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:57.357 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:57.357 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:57.357 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:57.357 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.357 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.616 nvme0n1 00:35:57.616 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.616 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:57.616 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:57.616 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.616 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.616 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.616 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:57.616 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:57.616 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.616 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.875 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.875 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:57.875 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:35:57.875 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:57.875 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:57.875 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:57.875 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:57.875 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQ4ZWU2NzEyMWI4Y2IyM2YyNTdhODJjOGNmNmQxNGM4MjljMTE1NmYyOTNkMDNlp3D+nw==: 00:35:57.875 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWYwNDg3OGM1MTI4ZGQ0ZmYyY2U0NTI2MWQ2NzYzYWNjZTEwN2YxYzgzMTEwMjQ5I7YJUQ==: 00:35:57.875 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:57.875 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:57.875 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWQ4ZWU2NzEyMWI4Y2IyM2YyNTdhODJjOGNmNmQxNGM4MjljMTE1NmYyOTNkMDNlp3D+nw==: 00:35:57.875 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWYwNDg3OGM1MTI4ZGQ0ZmYyY2U0NTI2MWQ2NzYzYWNjZTEwN2YxYzgzMTEwMjQ5I7YJUQ==: ]] 00:35:57.875 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWYwNDg3OGM1MTI4ZGQ0ZmYyY2U0NTI2MWQ2NzYzYWNjZTEwN2YxYzgzMTEwMjQ5I7YJUQ==: 00:35:57.875 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:35:57.875 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:57.875 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:57.875 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:57.875 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:57.875 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:57.875 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:57.875 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.875 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.875 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.875 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:57.875 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:57.875 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:57.875 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:57.875 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:57.875 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:57.875 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:57.875 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:57.875 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:57.875 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:57.875 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:57.875 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:57.875 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.875 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.135 nvme0n1 00:35:58.135 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.135 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:58.135 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:58.135 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.135 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.135 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.135 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:58.135 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:58.135 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:35:58.135 12:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.135 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.135 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:58.135 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:35:58.135 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:58.135 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:58.135 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:58.135 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:58.135 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQ4OWQzYzhiMjQ2ZDBiYTNmZDFkN2EzMjU3OTJkOTMgQeOJ: 00:35:58.135 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2I5OGQwNjlmMWI1YTVhMDE3MjFjYTg0ZThmYzNkOWZ2zsnb: 00:35:58.135 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:58.135 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:58.135 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQ4OWQzYzhiMjQ2ZDBiYTNmZDFkN2EzMjU3OTJkOTMgQeOJ: 00:35:58.135 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2I5OGQwNjlmMWI1YTVhMDE3MjFjYTg0ZThmYzNkOWZ2zsnb: ]] 00:35:58.135 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2I5OGQwNjlmMWI1YTVhMDE3MjFjYTg0ZThmYzNkOWZ2zsnb: 00:35:58.135 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:35:58.135 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:58.135 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:58.135 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:58.135 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:58.135 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:58.135 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:58.135 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.135 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.135 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.135 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:58.135 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:58.135 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:58.135 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:58.135 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:58.135 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:58.135 
12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:58.135 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:58.135 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:58.135 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:58.135 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:58.135 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:58.135 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.135 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.394 nvme0n1 00:35:58.394 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.394 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:58.394 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:58.394 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.394 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.394 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.395 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:58.395 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:58.395 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.395 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.395 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.395 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:58.395 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:35:58.395 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:58.395 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:58.395 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:58.395 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:58.395 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzJmMTA0M2Q1ZTZlOTk2MjliNmNjMTdjY2MzZTBlYmI0NGI5MTRhZDgxMTQ0MDNkxeiWmA==: 00:35:58.395 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2VlY2ExMzRkMGM5OTc5N2ZmNDUzOWQwYTg0Njg3NzSlTak2: 00:35:58.395 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:58.395 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:58.395 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzJmMTA0M2Q1ZTZlOTk2MjliNmNjMTdjY2MzZTBlYmI0NGI5MTRhZDgxMTQ0MDNkxeiWmA==: 00:35:58.395 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:Y2VlY2ExMzRkMGM5OTc5N2ZmNDUzOWQwYTg0Njg3NzSlTak2: ]] 00:35:58.395 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2VlY2ExMzRkMGM5OTc5N2ZmNDUzOWQwYTg0Njg3NzSlTak2: 00:35:58.395 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:35:58.395 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:58.395 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:58.395 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:58.395 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:58.395 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:58.395 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:58.395 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.395 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.395 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.395 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:58.395 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:58.395 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:58.395 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:58.395 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:58.395 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:58.395 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:58.395 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:58.395 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:58.395 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:58.395 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:58.395 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:58.395 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.395 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.654 nvme0n1 00:35:58.654 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.654 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:58.654 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:58.654 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.654 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.654 12:57:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.654 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:58.654 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:58.654 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.654 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.654 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.654 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:58.654 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:35:58.654 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:58.654 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:58.654 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:58.654 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:58.654 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmE2MDRhNDg2MThmYmVlOTUyMzE2MTcwZGYxNzQ3NDRhNjU2NzU2MWY1MmYxZTdkZjA2MzhkMzZmNzk5YzQ3Y2uKyBY=: 00:35:58.654 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:58.654 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:58.654 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:58.654 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmE2MDRhNDg2MThmYmVlOTUyMzE2MTcwZGYxNzQ3NDRhNjU2NzU2MWY1MmYxZTdkZjA2MzhkMzZmNzk5YzQ3Y2uKyBY=: 00:35:58.654 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:58.654 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:35:58.654 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:58.654 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:58.655 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:58.655 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:58.655 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:58.655 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:58.655 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.655 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.655 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.655 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:58.655 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:58.655 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:58.655 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:58.655 12:57:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:58.655 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:58.655 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:58.655 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:58.655 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:58.655 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:58.655 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:58.655 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:58.655 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.655 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.914 nvme0n1 00:35:58.914 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.914 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:58.914 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:58.914 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.914 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.914 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.914 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:58.914 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:58.914 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.914 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.914 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.914 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:58.914 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:58.914 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:35:58.914 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:58.914 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:58.914 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:58.914 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:58.914 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE4Mjc1YTk5M2VkYzU4OGVkNGZiOGIyMDhlODdlNjVrdLbM: 00:35:58.914 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTJmY2IyYTg5MWMzYTVhYWJkMzAzZTc3MGE1MTBmZWE2YzAwODAzMjVkNTkyZTZhMTMyYjg3MzgwMmI5ZWI2Yjic3b0=: 00:35:58.914 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 
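The `host/auth.sh@48`–`@51` echo traces on either side of this point are the target-side half of each loop iteration: `nvmet_auth_set_key` writes the digest, the FFDHE group, and the DHHC-1 secrets for the current keyid into the kernel nvmet configfs entry for the host NQN (keyid 4 carries no controller key, which is why its `[[ -z '' ]]` check short-circuits). A minimal standalone sketch of that sequence, assuming the standard Linux nvmet configfs attribute names; the host path and placeholder secrets are illustrative, not values from this run:

```bash
#!/usr/bin/env bash
# Target-side provisioning mirroring one nvmet_auth_set_key call
# (here: sha384 digest, ffdhe6144 group, as in the surrounding traces).
# Assumes nvmet is loaded and this host NQN already exists under configfs.
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

echo 'hmac(sha384)'  > "$host/dhchap_hash"    # DH-HMAC-CHAP digest
echo 'ffdhe6144'     > "$host/dhchap_dhgroup" # FFDHE group to negotiate
echo 'DHHC-1:00:...' > "$host/dhchap_key"     # host secret (placeholder)
# Only set for bidirectional authentication; skipped when ckey is empty,
# as for keyid 4 above.
echo 'DHHC-1:02:...' > "$host/dhchap_ctrl_key"
```

The `DHHC-1:<nn>:` prefix on each secret records which HMAC, if any, was used to transform it (00 for none up through 03 for SHA-512), so cycling keyids 0–4 exercises every key variant against each digest/dhgroup pair.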
00:35:58.914 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:58.914 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE4Mjc1YTk5M2VkYzU4OGVkNGZiOGIyMDhlODdlNjVrdLbM: 00:35:58.914 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTJmY2IyYTg5MWMzYTVhYWJkMzAzZTc3MGE1MTBmZWE2YzAwODAzMjVkNTkyZTZhMTMyYjg3MzgwMmI5ZWI2Yjic3b0=: ]] 00:35:58.914 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTJmY2IyYTg5MWMzYTVhYWJkMzAzZTc3MGE1MTBmZWE2YzAwODAzMjVkNTkyZTZhMTMyYjg3MzgwMmI5ZWI2Yjic3b0=: 00:35:58.914 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:35:58.914 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:58.914 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:58.914 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:58.914 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:58.914 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:58.914 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:58.914 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.914 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.914 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.914 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:58.914 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:58.914 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:58.914 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:58.914 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:58.914 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:58.914 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:58.914 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:58.914 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:58.914 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:58.914 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:58.914 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:58.915 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.915 12:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.483 nvme0n1 00:35:59.483 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.483 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:59.483 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.483 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:59.483 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.483 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.483 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:59.483 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:59.483 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.483 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.483 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.483 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:59.483 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:35:59.483 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:59.483 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:59.483 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:59.483 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:59.483 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQ4ZWU2NzEyMWI4Y2IyM2YyNTdhODJjOGNmNmQxNGM4MjljMTE1NmYyOTNkMDNlp3D+nw==: 00:35:59.483 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWYwNDg3OGM1MTI4ZGQ0ZmYyY2U0NTI2MWQ2NzYzYWNjZTEwN2YxYzgzMTEwMjQ5I7YJUQ==: 00:35:59.483 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:59.483 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:59.483 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWQ4ZWU2NzEyMWI4Y2IyM2YyNTdhODJjOGNmNmQxNGM4MjljMTE1NmYyOTNkMDNlp3D+nw==: 00:35:59.483 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWYwNDg3OGM1MTI4ZGQ0ZmYyY2U0NTI2MWQ2NzYzYWNjZTEwN2YxYzgzMTEwMjQ5I7YJUQ==: ]] 00:35:59.483 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWYwNDg3OGM1MTI4ZGQ0ZmYyY2U0NTI2MWQ2NzYzYWNjZTEwN2YxYzgzMTEwMjQ5I7YJUQ==: 00:35:59.483 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:35:59.483 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:59.483 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:59.483 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:59.483 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:59.483 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:59.483 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:59.483 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.483 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.483 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.483 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:59.483 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:59.483 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:59.483 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:59.483 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:59.483 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:59.483 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:59.483 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:59.483 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:59.483 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:59.483 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:59.483 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:59.483 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.483 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.051 nvme0n1 00:36:00.051 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.051 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:00.051 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:00.051 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.051 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.051 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.051 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:00.051 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:00.051 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.051 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.051 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.051 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:00.051 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:36:00.051 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:00.051 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:00.051 12:57:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:00.051 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:00.051 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQ4OWQzYzhiMjQ2ZDBiYTNmZDFkN2EzMjU3OTJkOTMgQeOJ: 00:36:00.051 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2I5OGQwNjlmMWI1YTVhMDE3MjFjYTg0ZThmYzNkOWZ2zsnb: 00:36:00.051 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:00.051 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:00.051 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQ4OWQzYzhiMjQ2ZDBiYTNmZDFkN2EzMjU3OTJkOTMgQeOJ: 00:36:00.051 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2I5OGQwNjlmMWI1YTVhMDE3MjFjYTg0ZThmYzNkOWZ2zsnb: ]] 00:36:00.051 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2I5OGQwNjlmMWI1YTVhMDE3MjFjYTg0ZThmYzNkOWZ2zsnb: 00:36:00.051 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:36:00.051 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:00.051 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:00.051 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:00.051 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:00.051 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:00.051 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:00.051 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.051 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.051 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.051 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:00.051 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:00.051 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:00.051 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:00.051 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:00.051 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:00.051 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:00.051 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:00.051 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:00.051 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:00.051 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:00.051 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 
-n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:00.051 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.051 12:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.311 nvme0n1 00:36:00.311 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.311 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:00.311 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:00.311 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.311 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.311 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.311 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:00.311 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:00.311 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.311 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.311 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.311 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:00.311 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:36:00.311 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:00.311 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:00.311 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:00.311 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:00.311 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzJmMTA0M2Q1ZTZlOTk2MjliNmNjMTdjY2MzZTBlYmI0NGI5MTRhZDgxMTQ0MDNkxeiWmA==: 00:36:00.311 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2VlY2ExMzRkMGM5OTc5N2ZmNDUzOWQwYTg0Njg3NzSlTak2: 00:36:00.311 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:00.311 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:00.311 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzJmMTA0M2Q1ZTZlOTk2MjliNmNjMTdjY2MzZTBlYmI0NGI5MTRhZDgxMTQ0MDNkxeiWmA==: 00:36:00.311 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2VlY2ExMzRkMGM5OTc5N2ZmNDUzOWQwYTg0Njg3NzSlTak2: ]] 00:36:00.311 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2VlY2ExMzRkMGM5OTc5N2ZmNDUzOWQwYTg0Njg3NzSlTak2: 00:36:00.311 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:36:00.311 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:00.311 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:00.311 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:00.311 12:57:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:00.311 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:00.311 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:00.311 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.311 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.311 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.311 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:00.311 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:00.311 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:00.311 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:00.311 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:00.311 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:00.311 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:00.311 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:00.311 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:00.311 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:00.311 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:00.311 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:00.311 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.311 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.880 nvme0n1 00:36:00.880 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.880 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:00.880 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:00.880 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.880 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.880 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.880 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:00.880 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:00.880 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.880 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.880 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.880 12:57:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:00.880 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:36:00.880 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:00.880 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:00.880 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:00.880 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:00.880 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmE2MDRhNDg2MThmYmVlOTUyMzE2MTcwZGYxNzQ3NDRhNjU2NzU2MWY1MmYxZTdkZjA2MzhkMzZmNzk5YzQ3Y2uKyBY=: 00:36:00.880 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:00.880 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:00.880 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:00.880 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmE2MDRhNDg2MThmYmVlOTUyMzE2MTcwZGYxNzQ3NDRhNjU2NzU2MWY1MmYxZTdkZjA2MzhkMzZmNzk5YzQ3Y2uKyBY=: 00:36:00.880 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:00.880 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:36:00.880 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:00.880 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:00.880 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:00.880 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:00.880 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:00.880 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:00.880 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.880 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.880 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.880 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:00.880 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:00.880 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:00.880 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:00.880 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:00.881 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:00.881 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:00.881 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:00.881 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:00.881 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:00.881 12:57:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:00.881 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:00.881 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.881 12:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.139 nvme0n1 00:36:01.139 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.139 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:01.139 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:01.139 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.139 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.139 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.399 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:01.399 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:01.399 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.399 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.399 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.399 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:01.399 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:01.399 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:36:01.399 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:01.399 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:01.399 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:01.399 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:01.399 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE4Mjc1YTk5M2VkYzU4OGVkNGZiOGIyMDhlODdlNjVrdLbM: 00:36:01.399 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTJmY2IyYTg5MWMzYTVhYWJkMzAzZTc3MGE1MTBmZWE2YzAwODAzMjVkNTkyZTZhMTMyYjg3MzgwMmI5ZWI2Yjic3b0=: 00:36:01.399 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:01.399 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:01.399 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE4Mjc1YTk5M2VkYzU4OGVkNGZiOGIyMDhlODdlNjVrdLbM: 00:36:01.399 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTJmY2IyYTg5MWMzYTVhYWJkMzAzZTc3MGE1MTBmZWE2YzAwODAzMjVkNTkyZTZhMTMyYjg3MzgwMmI5ZWI2Yjic3b0=: ]] 00:36:01.399 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTJmY2IyYTg5MWMzYTVhYWJkMzAzZTc3MGE1MTBmZWE2YzAwODAzMjVkNTkyZTZhMTMyYjg3MzgwMmI5ZWI2Yjic3b0=: 00:36:01.399 12:57:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:36:01.399 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:01.399 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:01.399 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:01.399 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:01.399 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:01.399 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:01.399 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.399 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.399 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.399 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:01.399 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:01.399 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:01.399 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:01.399 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:01.399 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:01.399 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:01.399 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:01.399 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:01.399 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:01.399 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:01.399 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:01.399 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.399 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.968 nvme0n1 00:36:01.968 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.968 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:01.968 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:01.968 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.968 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.968 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.968 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:01.968 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:01.968 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.968 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.968 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.968 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:01.968 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:36:01.968 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:01.968 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:01.968 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:01.968 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:01.968 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQ4ZWU2NzEyMWI4Y2IyM2YyNTdhODJjOGNmNmQxNGM4MjljMTE1NmYyOTNkMDNlp3D+nw==: 00:36:01.968 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWYwNDg3OGM1MTI4ZGQ0ZmYyY2U0NTI2MWQ2NzYzYWNjZTEwN2YxYzgzMTEwMjQ5I7YJUQ==: 00:36:01.968 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:01.968 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:01.968 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWQ4ZWU2NzEyMWI4Y2IyM2YyNTdhODJjOGNmNmQxNGM4MjljMTE1NmYyOTNkMDNlp3D+nw==: 00:36:01.968 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWYwNDg3OGM1MTI4ZGQ0ZmYyY2U0NTI2MWQ2NzYzYWNjZTEwN2YxYzgzMTEwMjQ5I7YJUQ==: ]] 00:36:01.968 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWYwNDg3OGM1MTI4ZGQ0ZmYyY2U0NTI2MWQ2NzYzYWNjZTEwN2YxYzgzMTEwMjQ5I7YJUQ==: 00:36:01.968 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:36:01.968 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:01.968 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:01.968 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:01.968 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:01.968 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:01.968 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:01.968 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.969 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.969 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.969 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:01.969 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:01.969 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:01.969 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A 
ip_candidates 00:36:01.969 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:01.969 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:01.969 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:01.969 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:01.969 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:01.969 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:01.969 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:01.969 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:01.969 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.969 12:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.537 nvme0n1 00:36:02.537 12:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.537 12:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:02.537 12:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:02.537 12:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.537 12:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.537 12:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.537 12:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:02.537 12:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:02.537 12:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.537 12:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.537 12:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.537 12:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:02.537 12:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:36:02.537 12:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:02.537 12:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:02.537 12:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:02.537 12:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:02.537 12:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQ4OWQzYzhiMjQ2ZDBiYTNmZDFkN2EzMjU3OTJkOTMgQeOJ: 00:36:02.537 12:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2I5OGQwNjlmMWI1YTVhMDE3MjFjYTg0ZThmYzNkOWZ2zsnb: 00:36:02.537 12:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:02.537 12:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 
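
Each nvmet_auth_set_key call (the @42-@51 lines above) boils down to four writes on the target side: the HMAC name, the DH group, the host secret, and, when one is defined, the controller secret. xtrace does not show where the echoes at @48-@51 are redirected, so the configfs paths in this reconstruction are assumptions based on the Linux nvmet host attribute names, not something the log itself confirms:

    # Hypothetical body of nvmet_auth_set_key; paths and hostnqn are assumed.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "hmac(${digest})" > "$host/dhchap_hash"      # @48
        echo "$dhgroup"        > "$host/dhchap_dhgroup"   # @49
        echo "${keys[keyid]}"  > "$host/dhchap_key"       # @50
        # @51: skipped when ckeys[keyid] is empty (keyid 4 in this suite)
        [[ -z ${ckeys[keyid]} ]] || echo "${ckeys[keyid]}" > "$host/dhchap_ctrl_key"
    }
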
00:36:02.537 12:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQ4OWQzYzhiMjQ2ZDBiYTNmZDFkN2EzMjU3OTJkOTMgQeOJ: 00:36:02.537 12:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2I5OGQwNjlmMWI1YTVhMDE3MjFjYTg0ZThmYzNkOWZ2zsnb: ]] 00:36:02.537 12:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2I5OGQwNjlmMWI1YTVhMDE3MjFjYTg0ZThmYzNkOWZ2zsnb: 00:36:02.537 12:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:36:02.537 12:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:02.537 12:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:02.537 12:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:02.537 12:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:02.537 12:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:02.537 12:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:02.537 12:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.537 12:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.537 12:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.537 12:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:02.537 12:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:02.537 12:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:02.537 12:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:02.537 12:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:02.537 12:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:02.537 12:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:02.537 12:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:02.537 12:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:02.537 12:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:02.537 12:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:02.537 12:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:02.537 12:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.537 12:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.105 nvme0n1 00:36:03.105 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.105 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:03.105 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:03.105 12:57:29 
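
The pass/fail check after each attach is deliberately thin: list the controllers, require that exactly the expected name came back, and detach so the next combination starts clean (the @64/@65 lines). The bare nvme0n1 lines scattered through the log are the attach RPC printing the bdev it created. The check in isolation, with SPDK's scripts/rpc.py standing in for the suite's rpc_cmd wrapper:

    # Verify the authenticated attach produced the expected controller (@64)...
    name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]] || exit 1
    # ...then tear down before the next digest/dhgroup/key combination (@65).
    scripts/rpc.py bdev_nvme_detach_controller nvme0
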
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.105 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.105 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.105 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:03.105 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:03.105 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.105 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.364 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.364 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:03.364 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:36:03.364 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:03.364 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:03.364 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:03.364 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:03.364 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzJmMTA0M2Q1ZTZlOTk2MjliNmNjMTdjY2MzZTBlYmI0NGI5MTRhZDgxMTQ0MDNkxeiWmA==: 00:36:03.364 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2VlY2ExMzRkMGM5OTc5N2ZmNDUzOWQwYTg0Njg3NzSlTak2: 00:36:03.364 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:03.364 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:03.365 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzJmMTA0M2Q1ZTZlOTk2MjliNmNjMTdjY2MzZTBlYmI0NGI5MTRhZDgxMTQ0MDNkxeiWmA==: 00:36:03.365 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2VlY2ExMzRkMGM5OTc5N2ZmNDUzOWQwYTg0Njg3NzSlTak2: ]] 00:36:03.365 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2VlY2ExMzRkMGM5OTc5N2ZmNDUzOWQwYTg0Njg3NzSlTak2: 00:36:03.365 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:36:03.365 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:03.365 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:03.365 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:03.365 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:03.365 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:03.365 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:03.365 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.365 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.365 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:36:03.365 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:03.365 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:03.365 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:03.365 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:03.365 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:03.365 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:03.365 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:03.365 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:03.365 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:03.365 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:03.365 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:03.365 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:03.365 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.365 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.933 nvme0n1 00:36:03.933 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.933 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:03.933 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:03.933 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.933 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.933 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.933 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:03.933 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:03.933 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.933 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.933 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.933 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:03.933 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:36:03.933 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:03.933 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:03.933 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:03.933 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:03.933 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZmE2MDRhNDg2MThmYmVlOTUyMzE2MTcwZGYxNzQ3NDRhNjU2NzU2MWY1MmYxZTdkZjA2MzhkMzZmNzk5YzQ3Y2uKyBY=: 00:36:03.933 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:03.933 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:03.933 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:03.933 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmE2MDRhNDg2MThmYmVlOTUyMzE2MTcwZGYxNzQ3NDRhNjU2NzU2MWY1MmYxZTdkZjA2MzhkMzZmNzk5YzQ3Y2uKyBY=: 00:36:03.933 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:03.933 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:36:03.933 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:03.933 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:03.933 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:03.933 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:03.933 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:03.933 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:03.933 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.933 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.933 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.933 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:03.933 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:03.933 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:03.933 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:03.933 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:03.933 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:03.933 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:03.933 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:03.934 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:03.934 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:03.934 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:03.934 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:03.934 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.934 12:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.502 nvme0n1 00:36:04.502 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:04.502 12:57:30 
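
Key index 4 is the one slot without a controller secret: @46 sets ckey to the empty string, the [[ -z '' ]] test at @51 therefore skips the ctrl-key write, and on the initiator side the array expansion at @58 drops the --dhchap-ctrlr-key flag entirely, which is why the attach above carries only --dhchap-key key4. The ${var:+...} expansion does the work, expanding to nothing when the variable is empty or unset:

    # From host/auth.sh@58: build the optional flag pair as an array so an
    # empty ckeys[keyid] contributes zero arguments rather than an empty word.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    # keyid 0..3 -> ckey=(--dhchap-ctrlr-key ckey0), and so on
    # keyid 4    -> ckey=()
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey[@]}"
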
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:04.502 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:04.502 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:04.502 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.502 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:04.502 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:04.502 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:04.502 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:04.502 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.502 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:04.502 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:04.502 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:04.502 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:04.502 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:36:04.502 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:04.502 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:04.502 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:04.502 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:04.502 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE4Mjc1YTk5M2VkYzU4OGVkNGZiOGIyMDhlODdlNjVrdLbM: 00:36:04.502 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTJmY2IyYTg5MWMzYTVhYWJkMzAzZTc3MGE1MTBmZWE2YzAwODAzMjVkNTkyZTZhMTMyYjg3MzgwMmI5ZWI2Yjic3b0=: 00:36:04.502 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:04.502 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:04.502 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE4Mjc1YTk5M2VkYzU4OGVkNGZiOGIyMDhlODdlNjVrdLbM: 00:36:04.502 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTJmY2IyYTg5MWMzYTVhYWJkMzAzZTc3MGE1MTBmZWE2YzAwODAzMjVkNTkyZTZhMTMyYjg3MzgwMmI5ZWI2Yjic3b0=: ]] 00:36:04.502 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTJmY2IyYTg5MWMzYTVhYWJkMzAzZTc3MGE1MTBmZWE2YzAwODAzMjVkNTkyZTZhMTMyYjg3MzgwMmI5ZWI2Yjic3b0=: 00:36:04.502 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:36:04.502 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:04.502 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:04.502 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:04.502 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:04.502 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:04.502 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:04.502 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:04.502 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.502 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:04.502 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:04.502 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:04.502 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:04.502 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:04.502 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:04.502 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:04.502 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:04.502 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:04.502 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:04.502 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:04.502 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:04.502 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:04.502 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:04.502 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.762 nvme0n1 00:36:04.762 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:04.762 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:04.762 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:04.762 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:04.762 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.762 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:04.762 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:04.762 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:04.762 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:04.762 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.762 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:04.762 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:04.762 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe2048 1 00:36:04.762 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:04.762 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:04.762 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:04.762 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:04.762 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQ4ZWU2NzEyMWI4Y2IyM2YyNTdhODJjOGNmNmQxNGM4MjljMTE1NmYyOTNkMDNlp3D+nw==: 00:36:04.762 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWYwNDg3OGM1MTI4ZGQ0ZmYyY2U0NTI2MWQ2NzYzYWNjZTEwN2YxYzgzMTEwMjQ5I7YJUQ==: 00:36:04.762 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:04.762 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:04.762 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWQ4ZWU2NzEyMWI4Y2IyM2YyNTdhODJjOGNmNmQxNGM4MjljMTE1NmYyOTNkMDNlp3D+nw==: 00:36:04.762 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWYwNDg3OGM1MTI4ZGQ0ZmYyY2U0NTI2MWQ2NzYzYWNjZTEwN2YxYzgzMTEwMjQ5I7YJUQ==: ]] 00:36:04.762 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWYwNDg3OGM1MTI4ZGQ0ZmYyY2U0NTI2MWQ2NzYzYWNjZTEwN2YxYzgzMTEwMjQ5I7YJUQ==: 00:36:04.762 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:36:04.762 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:04.762 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:04.762 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:04.762 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:04.762 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:04.762 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:04.762 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:04.762 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.762 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:04.762 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:04.762 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:04.762 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:04.762 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:04.762 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:04.762 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:04.762 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:04.762 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:04.762 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip=NVMF_INITIATOR_IP 00:36:04.762 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:04.762 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:04.762 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:04.762 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:04.762 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.022 nvme0n1 00:36:05.022 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.022 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:05.022 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.022 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:05.022 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.022 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.022 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:05.022 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:05.022 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.022 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.022 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.022 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:05.022 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:36:05.022 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:05.022 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:05.022 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:05.022 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:05.022 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQ4OWQzYzhiMjQ2ZDBiYTNmZDFkN2EzMjU3OTJkOTMgQeOJ: 00:36:05.022 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2I5OGQwNjlmMWI1YTVhMDE3MjFjYTg0ZThmYzNkOWZ2zsnb: 00:36:05.022 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:05.022 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:05.022 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQ4OWQzYzhiMjQ2ZDBiYTNmZDFkN2EzMjU3OTJkOTMgQeOJ: 00:36:05.022 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2I5OGQwNjlmMWI1YTVhMDE3MjFjYTg0ZThmYzNkOWZ2zsnb: ]] 00:36:05.022 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2I5OGQwNjlmMWI1YTVhMDE3MjFjYTg0ZThmYzNkOWZ2zsnb: 00:36:05.022 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 
2 00:36:05.022 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:05.022 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:05.022 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:05.022 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:05.022 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:05.022 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:05.022 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.022 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.022 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.022 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:05.022 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:05.022 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:05.022 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:05.022 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:05.022 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:05.022 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:05.022 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:05.022 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:05.022 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:05.022 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:05.022 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:05.022 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.022 12:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.022 nvme0n1 00:36:05.022 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.022 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:05.022 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:05.022 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.022 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.281 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.281 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:05.281 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:05.281 12:57:31 
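
On the initiator side each connect_authenticate pass is just two RPCs, both visible verbatim at @60 and @61: first narrow the nvme bdev layer to the single digest/DH-group combination under test, then attach with the matching keyring entries. Isolated here with SPDK's scripts/rpc.py in place of the suite's rpc_cmd wrapper:

    # @60: allow only the combination under test for this connect attempt.
    scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
    # @61: attach over TCP, authenticating with host key 2 / controller key 2.
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
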
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.281 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.281 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.281 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:05.281 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:36:05.281 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:05.281 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:05.281 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:05.281 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:05.281 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzJmMTA0M2Q1ZTZlOTk2MjliNmNjMTdjY2MzZTBlYmI0NGI5MTRhZDgxMTQ0MDNkxeiWmA==: 00:36:05.281 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2VlY2ExMzRkMGM5OTc5N2ZmNDUzOWQwYTg0Njg3NzSlTak2: 00:36:05.281 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:05.281 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:05.281 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzJmMTA0M2Q1ZTZlOTk2MjliNmNjMTdjY2MzZTBlYmI0NGI5MTRhZDgxMTQ0MDNkxeiWmA==: 00:36:05.281 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2VlY2ExMzRkMGM5OTc5N2ZmNDUzOWQwYTg0Njg3NzSlTak2: ]] 00:36:05.281 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2VlY2ExMzRkMGM5OTc5N2ZmNDUzOWQwYTg0Njg3NzSlTak2: 00:36:05.282 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:36:05.282 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:05.282 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:05.282 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:05.282 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:05.282 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:05.282 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:05.282 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.282 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.282 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.282 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:05.282 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:05.282 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:05.282 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:05.282 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:05.282 
12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:05.282 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:05.282 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:05.282 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:05.282 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:05.282 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:05.282 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:05.282 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.282 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.282 nvme0n1 00:36:05.282 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.282 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:05.282 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:05.282 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.282 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.282 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.541 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:05.541 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:05.541 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.541 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.541 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.541 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:05.541 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:36:05.541 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:05.541 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:05.541 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:05.541 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:05.541 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmE2MDRhNDg2MThmYmVlOTUyMzE2MTcwZGYxNzQ3NDRhNjU2NzU2MWY1MmYxZTdkZjA2MzhkMzZmNzk5YzQ3Y2uKyBY=: 00:36:05.541 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:05.541 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:05.541 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:05.541 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZmE2MDRhNDg2MThmYmVlOTUyMzE2MTcwZGYxNzQ3NDRhNjU2NzU2MWY1MmYxZTdkZjA2MzhkMzZmNzk5YzQ3Y2uKyBY=: 00:36:05.541 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:05.541 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:36:05.541 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:05.541 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:05.541 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:05.541 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:05.541 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:05.541 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:05.541 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.541 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.541 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.541 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:05.541 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:05.541 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:05.541 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:05.541 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:05.541 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:05.541 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:05.541 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:05.541 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:05.541 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:05.541 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:05.541 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:05.541 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.542 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.542 nvme0n1 00:36:05.542 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.542 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:05.542 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:05.542 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.542 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.542 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
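
Every secret in this run uses the DHHC-1 representation from the NVMe DH-HMAC-CHAP spec: DHHC-1:xx:<base64 payload>:, where the xx field records how the raw secret was transformed (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512; that field mapping comes from the spec, not from anything in this log). Pulling one of the log's keys apart at the string level:

    # Split a DHHC-1 secret into its transform id and base64 payload.
    secret='DHHC-1:00:OWE4Mjc1YTk5M2VkYzU4OGVkNGZiOGIyMDhlODdlNjVrdLbM:'
    IFS=: read -r magic xform b64 _ <<< "$secret"
    echo "transform=$xform payload=$b64"    # transform=00 payload=OWE4...
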
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.542 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:05.542 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:05.542 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.542 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.542 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.542 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:05.542 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:05.542 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:36:05.542 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:05.542 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:05.542 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:05.542 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:05.542 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE4Mjc1YTk5M2VkYzU4OGVkNGZiOGIyMDhlODdlNjVrdLbM: 00:36:05.542 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTJmY2IyYTg5MWMzYTVhYWJkMzAzZTc3MGE1MTBmZWE2YzAwODAzMjVkNTkyZTZhMTMyYjg3MzgwMmI5ZWI2Yjic3b0=: 00:36:05.542 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:05.542 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:05.542 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE4Mjc1YTk5M2VkYzU4OGVkNGZiOGIyMDhlODdlNjVrdLbM: 00:36:05.542 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTJmY2IyYTg5MWMzYTVhYWJkMzAzZTc3MGE1MTBmZWE2YzAwODAzMjVkNTkyZTZhMTMyYjg3MzgwMmI5ZWI2Yjic3b0=: ]] 00:36:05.542 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTJmY2IyYTg5MWMzYTVhYWJkMzAzZTc3MGE1MTBmZWE2YzAwODAzMjVkNTkyZTZhMTMyYjg3MzgwMmI5ZWI2Yjic3b0=: 00:36:05.542 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:36:05.542 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:05.542 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:05.542 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:05.542 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:05.542 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:05.542 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:05.542 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.542 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.801 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.802 12:57:31 
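
The get_main_ns_ip run that follows (the nvmf/common.sh@765-@779 lines) is the address lookup connect_authenticate performs before every attach: a small transport-to-environment-variable map, then an indirect expansion that turns the variable name into the address. Reconstructed from the trace; the name of the variable carrying the transport ("tcp" at @771) is not visible, so TEST_TRANSPORT below is an assumption:

    # Reconstruction of get_main_ns_ip from the nvmf/common.sh@765-@779 trace.
    get_main_ns_ip() {
        local ip                                          # @765
        local -A ip_candidates                            # @766
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP        # @768
        ip_candidates["tcp"]=NVMF_INITIATOR_IP            # @769
        [[ -z $TEST_TRANSPORT ]] && return 1              # @771, assumed var
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}              # @772
        [[ -z ${!ip} ]] && return 1                       # @774: 10.0.0.1 here
        echo "${!ip}"                                     # @779
    }
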
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:05.802 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:05.802 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:05.802 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:05.802 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:05.802 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:05.802 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:05.802 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:05.802 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:05.802 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:05.802 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:05.802 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:05.802 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.802 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.802 nvme0n1 00:36:05.802 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.802 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:05.802 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.802 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:05.802 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.802 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.802 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:05.802 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:05.802 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.802 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.802 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.802 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:05.802 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:36:05.802 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:05.802 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:05.802 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:05.802 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:05.802 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZWQ4ZWU2NzEyMWI4Y2IyM2YyNTdhODJjOGNmNmQxNGM4MjljMTE1NmYyOTNkMDNlp3D+nw==: 00:36:05.802 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWYwNDg3OGM1MTI4ZGQ0ZmYyY2U0NTI2MWQ2NzYzYWNjZTEwN2YxYzgzMTEwMjQ5I7YJUQ==: 00:36:05.802 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:05.802 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:05.802 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWQ4ZWU2NzEyMWI4Y2IyM2YyNTdhODJjOGNmNmQxNGM4MjljMTE1NmYyOTNkMDNlp3D+nw==: 00:36:05.802 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWYwNDg3OGM1MTI4ZGQ0ZmYyY2U0NTI2MWQ2NzYzYWNjZTEwN2YxYzgzMTEwMjQ5I7YJUQ==: ]] 00:36:05.802 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWYwNDg3OGM1MTI4ZGQ0ZmYyY2U0NTI2MWQ2NzYzYWNjZTEwN2YxYzgzMTEwMjQ5I7YJUQ==: 00:36:05.802 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:36:05.802 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:05.802 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:05.802 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:05.802 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:05.802 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:05.802 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:05.802 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.802 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.061 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.061 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:06.061 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:06.061 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:06.061 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:06.061 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:06.061 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:06.061 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:06.061 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:06.061 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:06.061 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:06.061 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:06.061 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:06.061 12:57:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.061 12:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.062 nvme0n1 00:36:06.062 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.062 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:06.062 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:06.062 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.062 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.062 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.062 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:06.062 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:06.062 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.062 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.062 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.062 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:06.062 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:36:06.062 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:06.062 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:06.062 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:06.062 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:06.062 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQ4OWQzYzhiMjQ2ZDBiYTNmZDFkN2EzMjU3OTJkOTMgQeOJ: 00:36:06.062 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2I5OGQwNjlmMWI1YTVhMDE3MjFjYTg0ZThmYzNkOWZ2zsnb: 00:36:06.062 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:06.062 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:06.062 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQ4OWQzYzhiMjQ2ZDBiYTNmZDFkN2EzMjU3OTJkOTMgQeOJ: 00:36:06.062 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2I5OGQwNjlmMWI1YTVhMDE3MjFjYTg0ZThmYzNkOWZ2zsnb: ]] 00:36:06.062 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2I5OGQwNjlmMWI1YTVhMDE3MjFjYTg0ZThmYzNkOWZ2zsnb: 00:36:06.062 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:36:06.062 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:06.062 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:06.062 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:06.062 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:06.062 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:06.062 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:06.062 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.062 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.321 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.321 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:06.321 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:06.321 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:06.321 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:06.321 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:06.321 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:06.321 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:06.321 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:06.321 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:06.321 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:06.321 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:06.321 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:06.321 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.321 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.321 nvme0n1 00:36:06.322 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.322 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:06.322 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:06.322 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.322 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.322 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.322 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:06.322 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:06.322 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.322 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.322 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.322 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:06.322 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe3072 3 00:36:06.322 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:06.322 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:06.322 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:06.322 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:06.322 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzJmMTA0M2Q1ZTZlOTk2MjliNmNjMTdjY2MzZTBlYmI0NGI5MTRhZDgxMTQ0MDNkxeiWmA==: 00:36:06.322 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2VlY2ExMzRkMGM5OTc5N2ZmNDUzOWQwYTg0Njg3NzSlTak2: 00:36:06.322 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:06.322 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:06.322 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzJmMTA0M2Q1ZTZlOTk2MjliNmNjMTdjY2MzZTBlYmI0NGI5MTRhZDgxMTQ0MDNkxeiWmA==: 00:36:06.322 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2VlY2ExMzRkMGM5OTc5N2ZmNDUzOWQwYTg0Njg3NzSlTak2: ]] 00:36:06.322 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2VlY2ExMzRkMGM5OTc5N2ZmNDUzOWQwYTg0Njg3NzSlTak2: 00:36:06.322 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:36:06.322 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:06.322 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:06.322 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:06.322 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:06.322 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:06.322 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:06.322 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.322 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.322 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.322 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:06.322 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:06.322 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:06.322 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:06.322 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:06.322 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:06.581 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:06.581 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:06.581 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:06.581 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:06.581 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:06.582 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:06.582 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.582 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.582 nvme0n1 00:36:06.582 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.582 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:06.582 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:06.582 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.582 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.582 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.582 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:06.582 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:06.582 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.582 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.582 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.582 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:06.582 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:36:06.582 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:06.582 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:06.582 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:06.582 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:06.582 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmE2MDRhNDg2MThmYmVlOTUyMzE2MTcwZGYxNzQ3NDRhNjU2NzU2MWY1MmYxZTdkZjA2MzhkMzZmNzk5YzQ3Y2uKyBY=: 00:36:06.582 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:06.582 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:06.582 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:06.582 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmE2MDRhNDg2MThmYmVlOTUyMzE2MTcwZGYxNzQ3NDRhNjU2NzU2MWY1MmYxZTdkZjA2MzhkMzZmNzk5YzQ3Y2uKyBY=: 00:36:06.582 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:06.582 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:36:06.582 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:06.582 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:06.582 12:57:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:06.582 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:06.582 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:06.582 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:06.582 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.582 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.582 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.841 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:06.841 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:06.841 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:06.841 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:06.841 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:06.841 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:06.841 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:06.841 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:06.841 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:06.841 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:06.841 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:06.841 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:06.841 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.841 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.841 nvme0n1 00:36:06.841 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.841 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:06.841 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.841 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.841 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:06.841 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.841 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:06.841 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:06.841 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.841 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.841 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.841 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:06.841 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:06.841 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:36:06.841 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:06.841 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:06.841 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:06.841 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:06.841 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE4Mjc1YTk5M2VkYzU4OGVkNGZiOGIyMDhlODdlNjVrdLbM: 00:36:06.841 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTJmY2IyYTg5MWMzYTVhYWJkMzAzZTc3MGE1MTBmZWE2YzAwODAzMjVkNTkyZTZhMTMyYjg3MzgwMmI5ZWI2Yjic3b0=: 00:36:06.841 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:06.841 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:06.841 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE4Mjc1YTk5M2VkYzU4OGVkNGZiOGIyMDhlODdlNjVrdLbM: 00:36:06.841 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTJmY2IyYTg5MWMzYTVhYWJkMzAzZTc3MGE1MTBmZWE2YzAwODAzMjVkNTkyZTZhMTMyYjg3MzgwMmI5ZWI2Yjic3b0=: ]] 00:36:06.841 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTJmY2IyYTg5MWMzYTVhYWJkMzAzZTc3MGE1MTBmZWE2YzAwODAzMjVkNTkyZTZhMTMyYjg3MzgwMmI5ZWI2Yjic3b0=: 00:36:06.841 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:36:06.841 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:06.841 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:06.841 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:06.841 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:06.841 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:06.841 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:06.842 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.842 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.842 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.842 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:06.842 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:06.842 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:06.842 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:06.842 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:06.842 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:06.842 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:06.842 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:06.842 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:06.842 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:06.842 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:06.842 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:06.842 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.842 12:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.101 nvme0n1 00:36:07.101 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.101 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:07.101 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:07.101 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.101 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.101 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.360 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:07.360 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:07.360 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.360 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.360 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.360 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:07.360 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:36:07.360 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:07.360 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:07.360 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:07.360 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:07.360 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQ4ZWU2NzEyMWI4Y2IyM2YyNTdhODJjOGNmNmQxNGM4MjljMTE1NmYyOTNkMDNlp3D+nw==: 00:36:07.360 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWYwNDg3OGM1MTI4ZGQ0ZmYyY2U0NTI2MWQ2NzYzYWNjZTEwN2YxYzgzMTEwMjQ5I7YJUQ==: 00:36:07.360 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:07.360 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:07.360 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZWQ4ZWU2NzEyMWI4Y2IyM2YyNTdhODJjOGNmNmQxNGM4MjljMTE1NmYyOTNkMDNlp3D+nw==: 00:36:07.360 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWYwNDg3OGM1MTI4ZGQ0ZmYyY2U0NTI2MWQ2NzYzYWNjZTEwN2YxYzgzMTEwMjQ5I7YJUQ==: ]] 00:36:07.360 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWYwNDg3OGM1MTI4ZGQ0ZmYyY2U0NTI2MWQ2NzYzYWNjZTEwN2YxYzgzMTEwMjQ5I7YJUQ==: 00:36:07.360 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:36:07.360 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:07.360 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:07.360 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:07.360 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:07.360 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:07.360 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:07.360 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.360 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.360 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.360 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:07.360 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:07.360 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:07.360 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:07.360 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:07.360 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:07.360 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:07.360 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:07.360 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:07.360 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:07.360 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:07.360 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:07.360 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.360 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.620 nvme0n1 00:36:07.620 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.620 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:07.620 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.620 12:57:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:07.620 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.620 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.620 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:07.620 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:07.620 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.620 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.620 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.620 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:07.620 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:36:07.620 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:07.620 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:07.620 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:07.620 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:07.620 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQ4OWQzYzhiMjQ2ZDBiYTNmZDFkN2EzMjU3OTJkOTMgQeOJ: 00:36:07.620 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2I5OGQwNjlmMWI1YTVhMDE3MjFjYTg0ZThmYzNkOWZ2zsnb: 00:36:07.620 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:07.620 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:07.620 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQ4OWQzYzhiMjQ2ZDBiYTNmZDFkN2EzMjU3OTJkOTMgQeOJ: 00:36:07.620 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2I5OGQwNjlmMWI1YTVhMDE3MjFjYTg0ZThmYzNkOWZ2zsnb: ]] 00:36:07.620 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2I5OGQwNjlmMWI1YTVhMDE3MjFjYTg0ZThmYzNkOWZ2zsnb: 00:36:07.620 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:36:07.620 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:07.620 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:07.620 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:07.620 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:07.620 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:07.620 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:07.620 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.620 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.620 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.620 12:57:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:07.620 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:07.620 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:07.620 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:07.620 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:07.620 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:07.620 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:07.620 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:07.620 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:07.620 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:07.620 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:07.620 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:07.620 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.620 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.880 nvme0n1 00:36:07.880 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.880 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:07.880 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:07.880 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.880 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.880 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.880 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:07.880 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:07.880 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.880 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.880 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.880 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:07.880 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:36:07.880 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:07.880 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:07.880 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:07.880 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:07.880 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YzJmMTA0M2Q1ZTZlOTk2MjliNmNjMTdjY2MzZTBlYmI0NGI5MTRhZDgxMTQ0MDNkxeiWmA==: 00:36:07.880 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2VlY2ExMzRkMGM5OTc5N2ZmNDUzOWQwYTg0Njg3NzSlTak2: 00:36:07.880 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:07.880 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:07.880 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzJmMTA0M2Q1ZTZlOTk2MjliNmNjMTdjY2MzZTBlYmI0NGI5MTRhZDgxMTQ0MDNkxeiWmA==: 00:36:07.880 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2VlY2ExMzRkMGM5OTc5N2ZmNDUzOWQwYTg0Njg3NzSlTak2: ]] 00:36:07.880 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2VlY2ExMzRkMGM5OTc5N2ZmNDUzOWQwYTg0Njg3NzSlTak2: 00:36:07.880 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:36:07.880 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:07.880 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:07.880 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:07.880 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:07.880 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:07.880 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:07.880 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.880 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.880 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.880 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:07.880 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:07.880 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:07.880 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:07.880 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:07.880 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:07.880 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:07.880 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:07.880 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:07.880 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:07.880 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:07.880 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:07.880 12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.880 
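
At this point the sweep has moved to sha512 with the ffdhe4096 group. For every key index, nvmet_auth_set_key first provisions the kernel target: the 'hmac(sha512)' and ffdhe4096 echoes at host/auth.sh@48-49 select the digest and DH group, and the DHHC-1 echoes at @50-51 install the host secret and, when a ckey is defined, the controller secret for bidirectional authentication. A minimal sketch of that target-side provisioning, assuming the standard Linux nvmet configfs layout (the configfs root, the attribute names, and the truncated DHHC-1 placeholders are assumptions, not taken from this trace):

    # Target-side DH-HMAC-CHAP provisioning (sketch; configfs paths assumed).
    # nvmet_auth_set_key in the trace performs the equivalent writes.
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo 'hmac(sha512)'  > "$host/dhchap_hash"      # digest (auth.sh@48)
    echo 'ffdhe4096'     > "$host/dhchap_dhgroup"   # DH group (auth.sh@49)
    echo 'DHHC-1:02:...' > "$host/dhchap_key"       # host secret (auth.sh@50)
    echo 'DHHC-1:00:...' > "$host/dhchap_ctrl_key"  # controller secret, only
                                                    # written when a ckey
                                                    # exists (auth.sh@51)

The [[ -z '' ]] branch visible in the keyid=4 rounds shows the other path: key 4 has no controller key, so the last write is skipped and only unidirectional authentication is configured for it.
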
12:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.140 nvme0n1 00:36:08.140 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.140 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:08.140 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.140 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:08.140 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.140 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.140 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:08.140 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:08.140 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.140 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.140 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.140 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:08.140 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:36:08.140 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:08.140 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:08.140 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:08.140 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:08.140 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmE2MDRhNDg2MThmYmVlOTUyMzE2MTcwZGYxNzQ3NDRhNjU2NzU2MWY1MmYxZTdkZjA2MzhkMzZmNzk5YzQ3Y2uKyBY=: 00:36:08.140 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:08.140 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:08.140 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:08.140 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmE2MDRhNDg2MThmYmVlOTUyMzE2MTcwZGYxNzQ3NDRhNjU2NzU2MWY1MmYxZTdkZjA2MzhkMzZmNzk5YzQ3Y2uKyBY=: 00:36:08.140 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:08.140 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:36:08.140 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:08.140 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:08.140 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:08.140 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:08.140 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:08.140 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:08.140 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.140 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.140 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.140 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:08.140 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:08.140 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:08.140 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:08.140 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:08.140 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:08.140 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:08.140 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:08.140 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:08.140 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:08.140 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:08.140 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:08.140 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.140 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.399 nvme0n1 00:36:08.399 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.399 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:08.399 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:08.399 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.399 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.399 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.659 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:08.659 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:08.659 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.659 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.659 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.659 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:08.659 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:08.659 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:36:08.659 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:08.659 12:57:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:08.659 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:08.659 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:08.659 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE4Mjc1YTk5M2VkYzU4OGVkNGZiOGIyMDhlODdlNjVrdLbM: 00:36:08.659 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTJmY2IyYTg5MWMzYTVhYWJkMzAzZTc3MGE1MTBmZWE2YzAwODAzMjVkNTkyZTZhMTMyYjg3MzgwMmI5ZWI2Yjic3b0=: 00:36:08.659 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:08.659 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:08.659 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE4Mjc1YTk5M2VkYzU4OGVkNGZiOGIyMDhlODdlNjVrdLbM: 00:36:08.659 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTJmY2IyYTg5MWMzYTVhYWJkMzAzZTc3MGE1MTBmZWE2YzAwODAzMjVkNTkyZTZhMTMyYjg3MzgwMmI5ZWI2Yjic3b0=: ]] 00:36:08.659 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTJmY2IyYTg5MWMzYTVhYWJkMzAzZTc3MGE1MTBmZWE2YzAwODAzMjVkNTkyZTZhMTMyYjg3MzgwMmI5ZWI2Yjic3b0=: 00:36:08.659 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:36:08.659 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:08.659 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:08.659 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:08.659 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:08.659 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:08.659 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:08.659 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.659 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.659 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.659 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:08.659 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:08.659 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:08.659 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:08.659 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:08.660 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:08.660 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:08.660 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:08.660 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:08.660 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:08.660 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:08.660 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:08.660 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.660 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.919 nvme0n1 00:36:08.919 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.919 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:08.919 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:08.919 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.919 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.919 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.919 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:08.919 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:08.919 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.919 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.919 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.919 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:08.919 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:36:08.919 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:08.919 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:08.919 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:08.919 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:08.919 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQ4ZWU2NzEyMWI4Y2IyM2YyNTdhODJjOGNmNmQxNGM4MjljMTE1NmYyOTNkMDNlp3D+nw==: 00:36:08.919 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWYwNDg3OGM1MTI4ZGQ0ZmYyY2U0NTI2MWQ2NzYzYWNjZTEwN2YxYzgzMTEwMjQ5I7YJUQ==: 00:36:08.919 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:08.920 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:08.920 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWQ4ZWU2NzEyMWI4Y2IyM2YyNTdhODJjOGNmNmQxNGM4MjljMTE1NmYyOTNkMDNlp3D+nw==: 00:36:08.920 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWYwNDg3OGM1MTI4ZGQ0ZmYyY2U0NTI2MWQ2NzYzYWNjZTEwN2YxYzgzMTEwMjQ5I7YJUQ==: ]] 00:36:08.920 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWYwNDg3OGM1MTI4ZGQ0ZmYyY2U0NTI2MWQ2NzYzYWNjZTEwN2YxYzgzMTEwMjQ5I7YJUQ==: 00:36:08.920 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:36:08.920 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:08.920 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:08.920 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:08.920 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:08.920 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:08.920 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:08.920 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.920 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.920 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.920 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:08.920 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:08.920 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:08.920 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:08.920 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:08.920 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:08.920 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:08.920 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:08.920 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:08.920 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:08.920 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:08.920 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:08.920 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.920 12:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.492 nvme0n1 00:36:09.492 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.492 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:09.492 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:09.492 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:09.492 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.492 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.492 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:09.492 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:09.492 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:36:09.492 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.492 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.492 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:09.492 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:36:09.492 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:09.492 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:09.492 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:09.492 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:09.492 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQ4OWQzYzhiMjQ2ZDBiYTNmZDFkN2EzMjU3OTJkOTMgQeOJ: 00:36:09.492 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2I5OGQwNjlmMWI1YTVhMDE3MjFjYTg0ZThmYzNkOWZ2zsnb: 00:36:09.492 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:09.492 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:09.492 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQ4OWQzYzhiMjQ2ZDBiYTNmZDFkN2EzMjU3OTJkOTMgQeOJ: 00:36:09.492 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2I5OGQwNjlmMWI1YTVhMDE3MjFjYTg0ZThmYzNkOWZ2zsnb: ]] 00:36:09.492 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2I5OGQwNjlmMWI1YTVhMDE3MjFjYTg0ZThmYzNkOWZ2zsnb: 00:36:09.492 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:36:09.492 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:09.492 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:09.492 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:09.492 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:09.492 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:09.492 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:09.492 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:09.492 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.492 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.492 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:09.492 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:09.492 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:09.492 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:09.492 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:09.492 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:09.492 
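
On the host side, each connect_authenticate round pins the SPDK NVMe driver to exactly one digest/DH-group pair before attaching, so a successful connect demonstrates that specific combination end to end. Reconstructed as standalone rpc.py calls, the round running here (sha512 + ffdhe6144, key index 2) would look roughly like the sketch below; the rpc.py path and the earlier keyring registration are assumptions, since this excerpt only uses the key2/ckey2 names and never shows them being created:

    # Host-side round for sha512 + ffdhe6144, key index 2 (sketch).
    rpc=scripts/rpc.py   # assumed location of SPDK's rpc.py

    # key2/ckey2 are keyring names; presumably registered earlier, e.g. via
    # keyring_file_add_key (not shown in this part of the log).

    $rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Verify the controller authenticated and came up, then tear down:
    $rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expects nvme0
    $rpc bdev_nvme_detach_controller nvme0

The interleaved nvme0n1 lines in the trace are the namespace appearing as each authenticated controller attaches; the [[ nvme0 == nvme0 ]] comparison at host/auth.sh@64 is what asserts success before each detach.
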
12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:09.492 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:09.492 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:09.492 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:09.492 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:09.492 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:09.492 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:09.492 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.751 nvme0n1 00:36:09.751 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.751 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:09.751 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:09.751 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:09.751 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.751 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:10.010 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:10.010 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:10.010 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:10.010 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.010 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:10.010 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:10.010 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:36:10.010 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:10.010 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:10.010 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:10.010 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:10.010 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzJmMTA0M2Q1ZTZlOTk2MjliNmNjMTdjY2MzZTBlYmI0NGI5MTRhZDgxMTQ0MDNkxeiWmA==: 00:36:10.010 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2VlY2ExMzRkMGM5OTc5N2ZmNDUzOWQwYTg0Njg3NzSlTak2: 00:36:10.010 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:10.010 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:10.010 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzJmMTA0M2Q1ZTZlOTk2MjliNmNjMTdjY2MzZTBlYmI0NGI5MTRhZDgxMTQ0MDNkxeiWmA==: 00:36:10.010 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:Y2VlY2ExMzRkMGM5OTc5N2ZmNDUzOWQwYTg0Njg3NzSlTak2: ]] 00:36:10.010 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2VlY2ExMzRkMGM5OTc5N2ZmNDUzOWQwYTg0Njg3NzSlTak2: 00:36:10.010 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:36:10.010 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:10.010 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:10.010 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:10.010 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:10.010 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:10.010 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:10.010 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:10.010 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.010 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:10.010 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:10.010 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:10.010 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:10.010 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:10.010 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:10.010 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:10.010 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:10.010 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:10.010 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:10.011 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:10.011 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:10.011 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:10.011 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:10.011 12:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.270 nvme0n1 00:36:10.270 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:10.270 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:10.270 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:10.270 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:10.270 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.270 12:57:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:10.270 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:10.270 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:10.270 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:10.270 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.270 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:10.270 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:10.270 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:36:10.270 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:10.270 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:10.270 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:10.270 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:10.270 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmE2MDRhNDg2MThmYmVlOTUyMzE2MTcwZGYxNzQ3NDRhNjU2NzU2MWY1MmYxZTdkZjA2MzhkMzZmNzk5YzQ3Y2uKyBY=: 00:36:10.270 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:10.270 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:10.270 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:10.270 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmE2MDRhNDg2MThmYmVlOTUyMzE2MTcwZGYxNzQ3NDRhNjU2NzU2MWY1MmYxZTdkZjA2MzhkMzZmNzk5YzQ3Y2uKyBY=: 00:36:10.270 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:10.270 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:36:10.270 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:10.270 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:10.270 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:10.270 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:10.270 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:10.270 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:10.270 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:10.270 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.270 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:10.270 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:10.270 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:10.270 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:10.270 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:10.270 12:57:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:10.270 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:10.270 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:10.270 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:10.270 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:10.270 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:10.270 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:10.270 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:10.270 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:10.270 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.839 nvme0n1 00:36:10.839 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:10.839 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:10.839 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:10.839 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:10.839 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.839 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:10.839 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:10.839 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:10.839 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:10.839 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.839 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:10.839 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:10.839 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:10.839 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:36:10.839 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:10.839 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:10.839 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:10.839 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:10.839 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE4Mjc1YTk5M2VkYzU4OGVkNGZiOGIyMDhlODdlNjVrdLbM: 00:36:10.839 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTJmY2IyYTg5MWMzYTVhYWJkMzAzZTc3MGE1MTBmZWE2YzAwODAzMjVkNTkyZTZhMTMyYjg3MzgwMmI5ZWI2Yjic3b0=: 00:36:10.839 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
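[editor's note] The entries around this point are one iteration of the test's digest/dhgroup/keyid matrix, here sha512 with ffdhe8192 and keyid 0: nvmet_auth_set_key programs the kernel nvmet target side, and connect_authenticate then configures and connects the SPDK host side. A condensed sketch of the iteration, for orientation only: rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py; the configfs path is an assumption inferred from the nvmet_auth_set_key helper's name (xtrace does not show redirect targets); and --dhchap-key key0 names a keyring key registered earlier in the test, outside this excerpt. The digest, dhgroup, key material, and every RPC invocation below are taken verbatim from the trace.

    # One iteration of the matrix (sha512/ffdhe8192, keyid 0), condensed.
    digest=sha512 dhgroup=ffdhe8192 keyid=0
    key='DHHC-1:00:OWE4Mjc1YTk5M2VkYzU4OGVkNGZiOGIyMDhlODdlNjVrdLbM:'
    ckey='DHHC-1:03:ZTJmY2IyYTg5MWMzYTVhYWJkMzAzZTc3MGE1MTBmZWE2YzAwODAzMjVkNTkyZTZhMTMyYjg3MzgwMmI5ZWI2Yjic3b0=:'
    host_cfs=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path

    # Target side: program the kernel nvmet host entry with hash, DH group and keys.
    echo "hmac($digest)" > "$host_cfs/dhchap_hash"
    echo "$dhgroup"      > "$host_cfs/dhchap_dhgroup"
    echo "$key"          > "$host_cfs/dhchap_key"
    [[ -n $ckey ]] && echo "$ckey" > "$host_cfs/dhchap_ctrl_key"   # bidirectional auth only when a ctrlr key exists

    # Host side: pin the initiator to the same digest/dhgroup, then authenticate.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" ${ckey:+--dhchap-ctrlr-key "ckey$keyid"}

    # Verify the authenticated controller came up, then detach before the next keyid.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0

Attaching, verifying via bdev_nvme_get_controllers, and detaching inside every iteration keeps a controller left over from one keyid from masking an authentication failure in the next. [end note]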
00:36:10.839 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:10.839 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE4Mjc1YTk5M2VkYzU4OGVkNGZiOGIyMDhlODdlNjVrdLbM: 00:36:10.839 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTJmY2IyYTg5MWMzYTVhYWJkMzAzZTc3MGE1MTBmZWE2YzAwODAzMjVkNTkyZTZhMTMyYjg3MzgwMmI5ZWI2Yjic3b0=: ]] 00:36:10.839 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTJmY2IyYTg5MWMzYTVhYWJkMzAzZTc3MGE1MTBmZWE2YzAwODAzMjVkNTkyZTZhMTMyYjg3MzgwMmI5ZWI2Yjic3b0=: 00:36:10.839 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:36:10.839 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:10.839 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:10.839 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:10.839 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:10.839 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:10.839 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:10.839 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:10.839 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.839 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:10.839 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:10.839 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:10.839 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:10.839 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:10.839 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:10.839 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:10.839 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:10.839 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:10.839 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:10.839 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:10.839 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:10.839 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:10.839 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:10.839 12:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.407 nvme0n1 00:36:11.407 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:11.407 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@64 -- # jq -r '.[].name' 00:36:11.407 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:11.407 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.407 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.407 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:11.408 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:11.408 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:11.408 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.408 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.408 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:11.408 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:11.408 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:36:11.408 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:11.408 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:11.408 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:11.408 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:11.408 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQ4ZWU2NzEyMWI4Y2IyM2YyNTdhODJjOGNmNmQxNGM4MjljMTE1NmYyOTNkMDNlp3D+nw==: 00:36:11.408 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWYwNDg3OGM1MTI4ZGQ0ZmYyY2U0NTI2MWQ2NzYzYWNjZTEwN2YxYzgzMTEwMjQ5I7YJUQ==: 00:36:11.408 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:11.408 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:11.408 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWQ4ZWU2NzEyMWI4Y2IyM2YyNTdhODJjOGNmNmQxNGM4MjljMTE1NmYyOTNkMDNlp3D+nw==: 00:36:11.408 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWYwNDg3OGM1MTI4ZGQ0ZmYyY2U0NTI2MWQ2NzYzYWNjZTEwN2YxYzgzMTEwMjQ5I7YJUQ==: ]] 00:36:11.408 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWYwNDg3OGM1MTI4ZGQ0ZmYyY2U0NTI2MWQ2NzYzYWNjZTEwN2YxYzgzMTEwMjQ5I7YJUQ==: 00:36:11.408 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:36:11.408 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:11.408 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:11.408 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:11.408 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:11.408 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:11.408 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:11.408 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.408 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.408 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:11.408 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:11.408 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:11.408 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:11.408 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:11.408 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:11.408 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:11.408 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:11.408 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:11.408 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:11.408 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:11.408 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:11.408 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:11.408 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.408 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.975 nvme0n1 00:36:11.975 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:11.975 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:11.975 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.975 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:11.975 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.975 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:11.975 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:11.975 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:11.975 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.975 12:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.975 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:11.975 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:11.975 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:36:11.975 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:11.975 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:11.975 12:57:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:11.975 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:11.975 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQ4OWQzYzhiMjQ2ZDBiYTNmZDFkN2EzMjU3OTJkOTMgQeOJ: 00:36:11.975 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2I5OGQwNjlmMWI1YTVhMDE3MjFjYTg0ZThmYzNkOWZ2zsnb: 00:36:11.975 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:11.975 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:11.975 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQ4OWQzYzhiMjQ2ZDBiYTNmZDFkN2EzMjU3OTJkOTMgQeOJ: 00:36:11.975 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2I5OGQwNjlmMWI1YTVhMDE3MjFjYTg0ZThmYzNkOWZ2zsnb: ]] 00:36:11.975 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2I5OGQwNjlmMWI1YTVhMDE3MjFjYTg0ZThmYzNkOWZ2zsnb: 00:36:11.975 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:36:11.975 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:11.975 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:11.975 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:11.975 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:11.975 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:11.975 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:11.975 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.975 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.975 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:11.975 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:11.975 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:11.975 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:11.975 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:11.975 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:11.975 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:11.975 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:11.975 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:11.975 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:11.975 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:11.975 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:11.975 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 
-n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:11.975 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.975 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.542 nvme0n1 00:36:12.542 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:12.542 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:12.542 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:12.542 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:12.542 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.542 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:12.802 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:12.802 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:12.802 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:12.802 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.802 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:12.802 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:12.802 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:36:12.802 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:12.802 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:12.802 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:12.802 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:12.802 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzJmMTA0M2Q1ZTZlOTk2MjliNmNjMTdjY2MzZTBlYmI0NGI5MTRhZDgxMTQ0MDNkxeiWmA==: 00:36:12.802 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2VlY2ExMzRkMGM5OTc5N2ZmNDUzOWQwYTg0Njg3NzSlTak2: 00:36:12.802 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:12.802 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:12.802 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzJmMTA0M2Q1ZTZlOTk2MjliNmNjMTdjY2MzZTBlYmI0NGI5MTRhZDgxMTQ0MDNkxeiWmA==: 00:36:12.802 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2VlY2ExMzRkMGM5OTc5N2ZmNDUzOWQwYTg0Njg3NzSlTak2: ]] 00:36:12.802 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2VlY2ExMzRkMGM5OTc5N2ZmNDUzOWQwYTg0Njg3NzSlTak2: 00:36:12.802 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:36:12.802 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:12.802 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:12.802 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:12.802 12:57:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:12.802 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:12.802 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:12.802 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:12.802 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.802 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:12.802 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:12.802 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:12.802 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:12.802 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:12.802 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:12.802 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:12.802 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:12.802 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:12.802 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:12.802 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:12.802 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:12.802 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:12.802 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:12.802 12:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.371 nvme0n1 00:36:13.371 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.371 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:13.371 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:13.371 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.371 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.371 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.371 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:13.371 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:13.371 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.371 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.371 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.371 12:57:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:13.371 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:36:13.371 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:13.371 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:13.371 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:13.371 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:13.371 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmE2MDRhNDg2MThmYmVlOTUyMzE2MTcwZGYxNzQ3NDRhNjU2NzU2MWY1MmYxZTdkZjA2MzhkMzZmNzk5YzQ3Y2uKyBY=: 00:36:13.371 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:13.371 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:13.371 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:13.371 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmE2MDRhNDg2MThmYmVlOTUyMzE2MTcwZGYxNzQ3NDRhNjU2NzU2MWY1MmYxZTdkZjA2MzhkMzZmNzk5YzQ3Y2uKyBY=: 00:36:13.371 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:13.371 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:36:13.371 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:13.371 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:13.371 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:13.371 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:13.371 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:13.371 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:13.371 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.371 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.371 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.371 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:13.371 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:13.371 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:13.371 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:13.371 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:13.371 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:13.372 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:13.372 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:13.372 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:13.372 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:13.372 12:57:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:13.372 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:13.372 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.372 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.940 nvme0n1 00:36:13.940 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.940 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:13.940 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:13.940 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.940 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.940 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.940 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:13.940 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:13.940 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.940 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.940 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.940 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:13.940 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:13.940 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:13.940 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:13.940 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:13.940 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQ4ZWU2NzEyMWI4Y2IyM2YyNTdhODJjOGNmNmQxNGM4MjljMTE1NmYyOTNkMDNlp3D+nw==: 00:36:13.940 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWYwNDg3OGM1MTI4ZGQ0ZmYyY2U0NTI2MWQ2NzYzYWNjZTEwN2YxYzgzMTEwMjQ5I7YJUQ==: 00:36:13.940 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:13.940 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:13.940 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWQ4ZWU2NzEyMWI4Y2IyM2YyNTdhODJjOGNmNmQxNGM4MjljMTE1NmYyOTNkMDNlp3D+nw==: 00:36:13.940 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWYwNDg3OGM1MTI4ZGQ0ZmYyY2U0NTI2MWQ2NzYzYWNjZTEwN2YxYzgzMTEwMjQ5I7YJUQ==: ]] 00:36:13.940 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWYwNDg3OGM1MTI4ZGQ0ZmYyY2U0NTI2MWQ2NzYzYWNjZTEwN2YxYzgzMTEwMjQ5I7YJUQ==: 00:36:13.940 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:13.940 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
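[editor's note] From here the test moves to failure paths: the target has just been re-keyed to sha256/ffdhe2048 (keyid 1) and the host's allowed digests/dhgroups narrowed to match. The next entries attach without any DH-HMAC-CHAP key, then with the wrong key (key2), then with a mismatched controller key (key1 with ckey2); each attempt is wrapped in the NOT helper from common/autotest_common.sh, whose xtrace is interleaved with the RPC dumps below. A minimal sketch of the idiom, simplified from that helper (the real one, as its trace shows, also screens exit statuses above 128 and an expected-error string before deciding):

    # Expected-failure idiom, simplified from NOT() in common/autotest_common.sh.
    NOT() {
        local es=0
        "$@" || es=$?    # run the wrapped command and capture its exit status
        (( es != 0 ))    # succeed only when the wrapped command failed
    }

    # Attaching with no --dhchap-key against the re-keyed target must fail; the
    # RPC layer reports JSON-RPC error -5 ("Input/output error"), exactly as the
    # request/response dumps that follow show. rpc_cmd is the autotest wrapper
    # around SPDK's scripts/rpc.py.
    NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
    # The same check is then repeated with --dhchap-key key2 (wrong key) and with
    # --dhchap-key key1 --dhchap-ctrlr-key ckey2 (mismatched controller key).

Once the rejections are confirmed, a good attach with key1/ckey1 succeeds, deliberately passing --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 so the live re-key that follows reconnects quickly: bdev_nvme_set_keys to key2/ckey2 is accepted, the mismatched key1/ckey2 is refused with JSON-RPC error -13 ("Permission denied"), and the test then polls bdev_nvme_get_controllers with jq length, sleeping 1s between attempts, until the controller count settles. [end note]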
00:36:13.940 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.940 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.940 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:36:13.940 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:13.940 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:13.940 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:13.940 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:13.940 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:13.940 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:13.940 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:13.940 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:13.940 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:13.940 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:13.940 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:13.940 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:36:13.940 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:13.940 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:36:13.941 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:13.941 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:36:13.941 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:13.941 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:13.941 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.941 12:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.200 request: 00:36:14.200 { 00:36:14.200 "name": "nvme0", 00:36:14.200 "trtype": "tcp", 00:36:14.200 "traddr": "10.0.0.1", 00:36:14.200 "adrfam": "ipv4", 00:36:14.200 "trsvcid": "4420", 00:36:14.200 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:14.200 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:14.200 "prchk_reftag": false, 00:36:14.200 "prchk_guard": false, 00:36:14.200 "hdgst": false, 00:36:14.200 "ddgst": false, 00:36:14.200 "allow_unrecognized_csi": false, 00:36:14.200 "method": "bdev_nvme_attach_controller", 00:36:14.200 "req_id": 1 00:36:14.200 } 00:36:14.200 Got JSON-RPC error response 00:36:14.200 response: 00:36:14.200 { 00:36:14.200 "code": -5, 00:36:14.200 "message": "Input/output 
error" 00:36:14.200 } 00:36:14.200 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:14.200 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:36:14.200 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:14.200 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:14.200 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:14.200 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:36:14.200 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:36:14.200 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.200 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.200 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.200 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:36:14.200 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:36:14.200 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:14.200 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:14.200 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:14.200 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:14.200 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:14.200 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:14.200 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:14.200 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:14.200 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:14.200 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:14.200 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:14.200 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:36:14.200 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:14.200 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:36:14.200 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:14.200 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:36:14.200 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:14.200 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:14.200 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.200 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.200 request: 00:36:14.200 { 00:36:14.200 "name": "nvme0", 00:36:14.200 "trtype": "tcp", 00:36:14.200 "traddr": "10.0.0.1", 00:36:14.200 "adrfam": "ipv4", 00:36:14.200 "trsvcid": "4420", 00:36:14.200 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:14.200 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:14.200 "prchk_reftag": false, 00:36:14.200 "prchk_guard": false, 00:36:14.200 "hdgst": false, 00:36:14.200 "ddgst": false, 00:36:14.200 "dhchap_key": "key2", 00:36:14.200 "allow_unrecognized_csi": false, 00:36:14.200 "method": "bdev_nvme_attach_controller", 00:36:14.200 "req_id": 1 00:36:14.200 } 00:36:14.200 Got JSON-RPC error response 00:36:14.200 response: 00:36:14.200 { 00:36:14.200 "code": -5, 00:36:14.200 "message": "Input/output error" 00:36:14.200 } 00:36:14.200 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:14.200 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:36:14.200 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:14.200 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:14.200 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:14.200 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:36:14.200 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:36:14.200 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.200 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.200 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.200 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:36:14.200 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:36:14.200 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:14.200 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:14.200 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:14.200 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:14.200 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:14.200 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:14.200 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:14.200 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:14.200 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:14.200 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:14.201 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:14.201 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:36:14.201 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:14.201 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:36:14.201 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:14.201 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:36:14.201 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:14.201 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:14.201 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.201 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.201 request: 00:36:14.201 { 00:36:14.201 "name": "nvme0", 00:36:14.201 "trtype": "tcp", 00:36:14.201 "traddr": "10.0.0.1", 00:36:14.201 "adrfam": "ipv4", 00:36:14.201 "trsvcid": "4420", 00:36:14.201 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:14.201 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:14.201 "prchk_reftag": false, 00:36:14.201 "prchk_guard": false, 00:36:14.201 "hdgst": false, 00:36:14.201 "ddgst": false, 00:36:14.201 "dhchap_key": "key1", 00:36:14.201 "dhchap_ctrlr_key": "ckey2", 00:36:14.201 "allow_unrecognized_csi": false, 00:36:14.201 "method": "bdev_nvme_attach_controller", 00:36:14.201 "req_id": 1 00:36:14.201 } 00:36:14.201 Got JSON-RPC error response 00:36:14.201 response: 00:36:14.201 { 00:36:14.201 "code": -5, 00:36:14.201 "message": "Input/output error" 00:36:14.201 } 00:36:14.201 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:14.201 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:36:14.201 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:14.201 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:14.201 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:14.201 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:36:14.201 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:14.201 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:14.201 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:14.201 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:14.201 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:14.201 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:14.460 12:57:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:14.460 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:14.460 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:14.460 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:14.460 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:36:14.460 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.460 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.460 nvme0n1 00:36:14.460 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.460 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:14.460 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:14.460 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:14.460 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:14.460 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:14.460 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQ4OWQzYzhiMjQ2ZDBiYTNmZDFkN2EzMjU3OTJkOTMgQeOJ: 00:36:14.460 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2I5OGQwNjlmMWI1YTVhMDE3MjFjYTg0ZThmYzNkOWZ2zsnb: 00:36:14.460 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:14.460 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:14.460 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQ4OWQzYzhiMjQ2ZDBiYTNmZDFkN2EzMjU3OTJkOTMgQeOJ: 00:36:14.460 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2I5OGQwNjlmMWI1YTVhMDE3MjFjYTg0ZThmYzNkOWZ2zsnb: ]] 00:36:14.460 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2I5OGQwNjlmMWI1YTVhMDE3MjFjYTg0ZThmYzNkOWZ2zsnb: 00:36:14.460 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:14.460 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.460 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.460 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.460 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:36:14.460 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:36:14.460 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.460 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.460 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.460 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:14.460 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:14.460 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:36:14.460 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:14.460 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:36:14.460 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:14.460 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:36:14.460 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:14.461 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:14.461 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.461 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.720 request: 00:36:14.720 { 00:36:14.720 "name": "nvme0", 00:36:14.720 "dhchap_key": "key1", 00:36:14.720 "dhchap_ctrlr_key": "ckey2", 00:36:14.720 "method": "bdev_nvme_set_keys", 00:36:14.720 "req_id": 1 00:36:14.720 } 00:36:14.720 Got JSON-RPC error response 00:36:14.720 response: 00:36:14.720 { 00:36:14.720 "code": -13, 00:36:14.720 "message": "Permission denied" 00:36:14.720 } 00:36:14.720 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:14.720 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:36:14.720 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:14.720 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:14.720 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:14.720 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:14.720 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:14.720 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.720 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.720 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.720 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:36:14.720 12:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:36:15.655 12:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:15.655 12:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:15.655 12:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.655 12:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.655 12:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.655 12:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@137 -- # (( 1 != 0 )) 00:36:15.655 12:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:36:17.033 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:17.033 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:17.033 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.033 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.033 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.033 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:36:17.033 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:17.033 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:17.033 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:17.033 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:17.033 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:17.033 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQ4ZWU2NzEyMWI4Y2IyM2YyNTdhODJjOGNmNmQxNGM4MjljMTE1NmYyOTNkMDNlp3D+nw==: 00:36:17.033 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWYwNDg3OGM1MTI4ZGQ0ZmYyY2U0NTI2MWQ2NzYzYWNjZTEwN2YxYzgzMTEwMjQ5I7YJUQ==: 00:36:17.033 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:17.033 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:17.033 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWQ4ZWU2NzEyMWI4Y2IyM2YyNTdhODJjOGNmNmQxNGM4MjljMTE1NmYyOTNkMDNlp3D+nw==: 00:36:17.034 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWYwNDg3OGM1MTI4ZGQ0ZmYyY2U0NTI2MWQ2NzYzYWNjZTEwN2YxYzgzMTEwMjQ5I7YJUQ==: ]] 00:36:17.034 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWYwNDg3OGM1MTI4ZGQ0ZmYyY2U0NTI2MWQ2NzYzYWNjZTEwN2YxYzgzMTEwMjQ5I7YJUQ==: 00:36:17.034 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:36:17.034 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:17.034 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:17.034 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:17.034 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:17.034 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:17.034 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:17.034 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:17.034 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:17.034 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:17.034 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:17.034 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:36:17.034 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.034 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.034 nvme0n1 00:36:17.034 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.034 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:17.034 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:17.034 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:17.034 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:17.034 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:17.034 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQ4OWQzYzhiMjQ2ZDBiYTNmZDFkN2EzMjU3OTJkOTMgQeOJ: 00:36:17.034 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2I5OGQwNjlmMWI1YTVhMDE3MjFjYTg0ZThmYzNkOWZ2zsnb: 00:36:17.034 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:17.034 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:17.034 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQ4OWQzYzhiMjQ2ZDBiYTNmZDFkN2EzMjU3OTJkOTMgQeOJ: 00:36:17.034 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2I5OGQwNjlmMWI1YTVhMDE3MjFjYTg0ZThmYzNkOWZ2zsnb: ]] 00:36:17.034 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2I5OGQwNjlmMWI1YTVhMDE3MjFjYTg0ZThmYzNkOWZ2zsnb: 00:36:17.034 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:17.034 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:36:17.034 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:17.034 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:36:17.034 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:17.034 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:36:17.034 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:17.034 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:17.034 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.034 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.034 request: 00:36:17.034 { 00:36:17.034 "name": "nvme0", 00:36:17.034 "dhchap_key": "key2", 00:36:17.034 "dhchap_ctrlr_key": "ckey1", 00:36:17.034 "method": "bdev_nvme_set_keys", 00:36:17.034 "req_id": 1 00:36:17.034 } 00:36:17.034 Got JSON-RPC 
error response 00:36:17.034 response: 00:36:17.034 { 00:36:17.034 "code": -13, 00:36:17.034 "message": "Permission denied" 00:36:17.034 } 00:36:17.034 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:17.034 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:36:17.034 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:17.034 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:17.034 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:17.034 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:36:17.034 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:36:17.034 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.034 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.034 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.034 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:36:17.034 12:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:36:17.971 12:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:36:17.971 12:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:36:17.971 12:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.971 12:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.971 12:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:18.230 12:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:36:18.230 12:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:36:18.230 12:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:36:18.230 12:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:36:18.230 12:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # nvmfcleanup 00:36:18.230 12:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:36:18.230 12:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:18.230 12:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:36:18.230 12:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:18.230 12:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:18.230 rmmod nvme_tcp 00:36:18.230 rmmod nvme_fabrics 00:36:18.230 12:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:18.230 12:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:36:18.230 12:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:36:18.230 12:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@513 -- # '[' -n 556276 ']' 00:36:18.230 12:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # killprocess 556276 00:36:18.230 12:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 556276 ']' 00:36:18.230 12:57:44 
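
After the bdev_nvme_set_keys attempt is rejected with -13 ("Permission denied"), the test polls until bdev_nvme_get_controllers reports an empty list. A plausible simplification of the loop traced at host/auth.sh@137-138 and @148-149 above (the real script interleaves its xtrace helpers, as seen in the capture):

    while (( $(scripts/rpc.py bdev_nvme_get_controllers | jq length) != 0 )); do
        sleep 1s
    done
    # the controller was attached with --ctrlr-loss-timeout-sec 1 and
    # --reconnect-delay-sec 1, so it drops out within a few iterations
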
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 556276 00:36:18.230 12:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:36:18.230 12:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:18.230 12:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 556276 00:36:18.230 12:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:18.230 12:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:18.230 12:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 556276' 00:36:18.230 killing process with pid 556276 00:36:18.230 12:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 556276 00:36:18.230 12:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 556276 00:36:18.230 12:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:36:18.230 12:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:36:18.230 12:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:36:18.230 12:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:36:18.489 12:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # iptables-save 00:36:18.489 12:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:36:18.489 12:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # iptables-restore 00:36:18.489 12:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:18.489 12:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:18.489 12:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:18.490 12:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:18.490 12:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:20.395 12:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:20.395 12:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:36:20.395 12:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:20.395 12:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:36:20.395 12:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:36:20.395 12:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # echo 0 00:36:20.395 12:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:20.395 12:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:20.395 12:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rmdir 
/sys/kernel/config/nvmet/ports/1 00:36:20.395 12:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:20.395 12:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:36:20.395 12:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:36:20.395 12:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@722 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:22.931 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:36:23.499 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:23.499 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:23.499 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:23.499 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:23.499 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:23.499 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:23.499 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:23.499 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:23.499 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:23.499 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:23.499 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:23.499 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:23.499 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:23.499 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:23.499 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:23.757 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:24.324 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:36:24.583 12:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.kQx /tmp/spdk.key-null.muX /tmp/spdk.key-sha256.VLB /tmp/spdk.key-sha384.2Xq /tmp/spdk.key-sha512.SKJ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:36:24.583 12:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:27.117 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:36:27.394 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:36:27.394 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:36:27.394 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:36:27.394 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:36:27.394 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:36:27.394 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:36:27.394 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:36:27.394 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:36:27.394 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:36:27.394 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:36:27.394 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:36:27.394 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:36:27.394 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:36:27.394 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:36:27.394 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:36:27.394 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:36:27.394 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:36:27.653 00:36:27.654 real 0m56.499s 00:36:27.654 user 0m51.408s 00:36:27.654 sys 0m12.760s 00:36:27.654 12:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:36:27.654 12:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.654 ************************************ 00:36:27.654 END TEST nvmf_auth_host 00:36:27.654 ************************************ 00:36:27.654 12:57:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:36:27.654 12:57:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:36:27.654 12:57:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:36:27.654 12:57:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:27.654 12:57:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.654 ************************************ 00:36:27.654 START TEST nvmf_digest 00:36:27.654 ************************************ 00:36:27.654 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:36:27.654 * Looking for test storage... 00:36:27.654 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:27.654 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:27.654 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lcov --version 00:36:27.654 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:27.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:27.914 --rc genhtml_branch_coverage=1 00:36:27.914 --rc genhtml_function_coverage=1 00:36:27.914 --rc genhtml_legend=1 00:36:27.914 --rc geninfo_all_blocks=1 00:36:27.914 --rc geninfo_unexecuted_blocks=1 00:36:27.914 00:36:27.914 ' 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:27.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:27.914 --rc genhtml_branch_coverage=1 00:36:27.914 --rc genhtml_function_coverage=1 00:36:27.914 --rc genhtml_legend=1 00:36:27.914 --rc geninfo_all_blocks=1 00:36:27.914 --rc geninfo_unexecuted_blocks=1 00:36:27.914 00:36:27.914 ' 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:27.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:27.914 --rc genhtml_branch_coverage=1 00:36:27.914 --rc genhtml_function_coverage=1 00:36:27.914 --rc genhtml_legend=1 00:36:27.914 --rc geninfo_all_blocks=1 00:36:27.914 --rc geninfo_unexecuted_blocks=1 00:36:27.914 00:36:27.914 ' 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:27.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:27.914 --rc genhtml_branch_coverage=1 00:36:27.914 --rc genhtml_function_coverage=1 00:36:27.914 --rc genhtml_legend=1 00:36:27.914 --rc geninfo_all_blocks=1 00:36:27.914 --rc geninfo_unexecuted_blocks=1 00:36:27.914 00:36:27.914 ' 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:27.914 
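
The scripts/common.sh trace above is a field-wise version comparison: lt 1.15 2 splits both strings on "." and "-" and compares numerically until a field differs. A self-contained re-implementation under the same assumptions (function name illustrative; numeric fields assumed, as in the lcov version being tested):

    version_lt() {
        local -a v1 v2; local i
        IFS=.- read -ra v1 <<< "$1"
        IFS=.- read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first differing field decides: 1 < 2
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not less-than
    }
    version_lt 1.15 2 && echo "lcov < 2: keep the legacy --rc lcov_*_coverage options"
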
12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:27.914 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # prepare_net_devs 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@434 -- # local -g is_hw=no 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # remove_spdk_ns 00:36:27.914 12:57:53 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:27.914 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:36:27.915 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:36:27.915 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:36:27.915 12:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:33.331 
12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:33.331 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:33.331 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ up == up ]] 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:33.331 Found net devices under 0000:af:00.0: cvl_0_0 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:36:33.331 
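
The scan above matches each PCI function against the e810 device-ID table (0x1592/0x159b) and then resolves it to a kernel net device through sysfs. Roughly, for this box's two ports (the operstate test is an assumption about what the [[ up == up ]] guard at nvmf/common.sh@414 compares):

    for pci in 0000:af:00.0 0000:af:00.1; do
        for path in /sys/bus/pci/devices/$pci/net/*; do
            dev=${path##*/}                          # e.g. cvl_0_0, cvl_0_1
            [[ $(cat "$path/operstate") == up ]] && \
                echo "Found net devices under $pci: $dev"
        done
    done
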
12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ up == up ]] 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:33.331 Found net devices under 0000:af:00.1: cvl_0_1 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # is_hw=yes 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:33.331 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:33.332 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:33.332 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:33.332 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:33.332 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:33.332 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:33.332 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:33.332 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:33.332 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:33.332 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:33.332 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:33.332 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:33.332 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:33.617 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:33.617 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:33.617 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:33.617 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:36:33.617 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:33.617 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:33.617 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:33.617 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:33.617 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:33.617 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:33.617 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.362 ms 00:36:33.617 00:36:33.617 --- 10.0.0.2 ping statistics --- 00:36:33.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:33.617 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:36:33.618 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:33.618 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:33.618 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:36:33.618 00:36:33.618 --- 10.0.0.1 ping statistics --- 00:36:33.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:33.618 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:36:33.618 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:33.618 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # return 0 00:36:33.618 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:36:33.618 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:33.618 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:36:33.618 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:36:33.618 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:33.618 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:36:33.618 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:36:33.898 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:36:33.898 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:36:33.898 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:36:33.898 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:33.898 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:33.898 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:33.898 ************************************ 00:36:33.898 START TEST nvmf_digest_clean 00:36:33.898 ************************************ 00:36:33.898 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:36:33.898 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:36:33.898 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:36:33.898 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:36:33.898 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:36:33.898 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:36:33.898 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:36:33.898 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:33.898 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:33.898 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # nvmfpid=570701 00:36:33.898 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # waitforlisten 570701 00:36:33.898 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:36:33.898 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 570701 ']' 00:36:33.898 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:33.898 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:33.898 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:33.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:33.898 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:33.898 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:33.898 [2024-12-16 12:57:59.766741] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:36:33.898 [2024-12-16 12:57:59.766786] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:33.898 [2024-12-16 12:57:59.838541] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:33.898 [2024-12-16 12:57:59.876278] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:33.898 [2024-12-16 12:57:59.876318] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:33.898 [2024-12-16 12:57:59.876325] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:33.898 [2024-12-16 12:57:59.876331] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:33.898 [2024-12-16 12:57:59.876337] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
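
nvmfappstart, condensed: launch the target inside the test namespace with initialization gated on RPC, record its pid, and block until the RPC socket answers. The readiness probe below is an illustrative stand-in for the script's waitforlisten helper (rpc_get_methods is a standard SPDK RPC):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
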
00:36:33.898 [2024-12-16 12:57:59.876357] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:36:33.898 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:33.898 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:36:33.898 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:36:33.898 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:33.898 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:34.167 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:34.167 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:36:34.167 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:36:34.167 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:36:34.167 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.167 12:57:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:34.167 null0 00:36:34.167 [2024-12-16 12:58:00.053950] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:34.167 [2024-12-16 12:58:00.078165] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:34.167 12:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.167 12:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:36:34.167 12:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:34.167 12:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:34.167 12:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:36:34.168 12:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:36:34.168 12:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:36:34.168 12:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:34.168 12:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=570898 00:36:34.168 12:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 570898 /var/tmp/bperf.sock 00:36:34.168 12:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:36:34.168 12:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 570898 ']' 00:36:34.168 12:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:34.168 12:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:36:34.168 12:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:34.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:34.168 12:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:34.168 12:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:34.168 [2024-12-16 12:58:00.130725] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:36:34.168 [2024-12-16 12:58:00.130769] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid570898 ] 00:36:34.168 [2024-12-16 12:58:00.197878] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:34.426 [2024-12-16 12:58:00.237902] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:36:34.426 12:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:34.426 12:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:36:34.427 12:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:34.427 12:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:34.427 12:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:34.685 12:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:34.685 12:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:34.944 nvme0n1 00:36:34.944 12:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:34.944 12:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:34.944 Running I/O for 2 seconds... 
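
The randread/4096-byte/qd128 pass just launched reduces to the following sequence: start bdevperf against its own RPC socket, finish framework init, attach the TCP controller with data digest enabled, and drive I/O through the bdevperf.py helper. Sockets, flags, and NQNs exactly as in the trace; paths relative to the spdk checkout:

    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
    # -o 4096 = I/O size, -q 128 = queue depth, -t 2 = 2-second run,
    # -z = stay idle until the perform_tests RPC arrives
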
00:36:36.817 25479.00 IOPS, 99.53 MiB/s [2024-12-16T11:58:02.884Z] 25504.00 IOPS, 99.62 MiB/s 00:36:36.817 Latency(us) 00:36:36.817 [2024-12-16T11:58:02.884Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:36.817 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:36.817 nvme0n1 : 2.00 25522.09 99.70 0.00 0.00 5010.17 2559.02 11546.82 00:36:36.817 [2024-12-16T11:58:02.884Z] =================================================================================================================== 00:36:36.817 [2024-12-16T11:58:02.884Z] Total : 25522.09 99.70 0.00 0.00 5010.17 2559.02 11546.82 00:36:37.076 { 00:36:37.076 "results": [ 00:36:37.076 { 00:36:37.076 "job": "nvme0n1", 00:36:37.076 "core_mask": "0x2", 00:36:37.076 "workload": "randread", 00:36:37.076 "status": "finished", 00:36:37.076 "queue_depth": 128, 00:36:37.076 "io_size": 4096, 00:36:37.076 "runtime": 2.003598, 00:36:37.076 "iops": 25522.085767703902, 00:36:37.076 "mibps": 99.69564753009337, 00:36:37.076 "io_failed": 0, 00:36:37.076 "io_timeout": 0, 00:36:37.076 "avg_latency_us": 5010.165335240479, 00:36:37.076 "min_latency_us": 2559.024761904762, 00:36:37.076 "max_latency_us": 11546.819047619048 00:36:37.076 } 00:36:37.076 ], 00:36:37.076 "core_count": 1 00:36:37.076 } 00:36:37.076 12:58:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:37.076 12:58:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:37.076 12:58:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:37.076 12:58:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:37.076 | select(.opcode=="crc32c") 00:36:37.076 | "\(.module_name) \(.executed)"' 00:36:37.076 12:58:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:37.076 12:58:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:37.076 12:58:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:37.076 12:58:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:37.076 12:58:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:37.076 12:58:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 570898 00:36:37.076 12:58:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 570898 ']' 00:36:37.076 12:58:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 570898 00:36:37.076 12:58:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:36:37.076 12:58:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:37.076 12:58:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 570898 00:36:37.337 12:58:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:37.338 12:58:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:36:37.338 12:58:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 570898' 00:36:37.338 killing process with pid 570898 00:36:37.338 12:58:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 570898 00:36:37.338 Received shutdown signal, test time was about 2.000000 seconds 00:36:37.338 00:36:37.338 Latency(us) 00:36:37.338 [2024-12-16T11:58:03.405Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:37.338 [2024-12-16T11:58:03.405Z] =================================================================================================================== 00:36:37.338 [2024-12-16T11:58:03.405Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:37.338 12:58:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 570898 00:36:37.338 12:58:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:36:37.338 12:58:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:37.338 12:58:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:37.338 12:58:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:36:37.338 12:58:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:36:37.338 12:58:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:36:37.338 12:58:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:37.338 12:58:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=571358 00:36:37.338 12:58:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 571358 /var/tmp/bperf.sock 00:36:37.338 12:58:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:36:37.338 12:58:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 571358 ']' 00:36:37.338 12:58:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:37.338 12:58:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:37.338 12:58:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:37.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:37.338 12:58:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:37.338 12:58:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:37.338 [2024-12-16 12:58:03.369789] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:36:37.338 [2024-12-16 12:58:03.369838] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid571358 ] 00:36:37.338 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:37.338 Zero copy mechanism will not be used. 00:36:37.597 [2024-12-16 12:58:03.438065] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:37.597 [2024-12-16 12:58:03.477557] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:36:37.597 12:58:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:37.597 12:58:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:36:37.597 12:58:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:37.597 12:58:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:37.597 12:58:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:37.856 12:58:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:37.856 12:58:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:38.115 nvme0n1 00:36:38.115 12:58:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:38.115 12:58:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:38.374 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:38.374 Zero copy mechanism will not be used. 00:36:38.374 Running I/O for 2 seconds... 
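Note: the "zero copy threshold (65536)" lines above are informational: this 128 KiB run exceeds bdevperf's zero-copy send threshold, so the copy path is used and the throughput below includes that cost. In the result object that follows, "mibps" is just iops * io_size / 2^20, which can be re-checked from a captured copy of the JSON (result.json is a stand-in filename, not produced by this job):

    # recompute MiB/s from the first result entry; should match the reported "mibps"
    jq '.results[0] | .iops * .io_size / 1048576' result.json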
00:36:40.246 5818.00 IOPS, 727.25 MiB/s [2024-12-16T11:58:06.313Z] 5786.50 IOPS, 723.31 MiB/s 00:36:40.246 Latency(us) 00:36:40.246 [2024-12-16T11:58:06.313Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:40.246 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:36:40.246 nvme0n1 : 2.00 5787.05 723.38 0.00 0.00 2762.15 635.86 4962.01 00:36:40.246 [2024-12-16T11:58:06.313Z] =================================================================================================================== 00:36:40.246 [2024-12-16T11:58:06.313Z] Total : 5787.05 723.38 0.00 0.00 2762.15 635.86 4962.01 00:36:40.246 { 00:36:40.246 "results": [ 00:36:40.246 { 00:36:40.246 "job": "nvme0n1", 00:36:40.246 "core_mask": "0x2", 00:36:40.246 "workload": "randread", 00:36:40.246 "status": "finished", 00:36:40.246 "queue_depth": 16, 00:36:40.246 "io_size": 131072, 00:36:40.246 "runtime": 2.002575, 00:36:40.246 "iops": 5787.049174188232, 00:36:40.246 "mibps": 723.381146773529, 00:36:40.246 "io_failed": 0, 00:36:40.246 "io_timeout": 0, 00:36:40.246 "avg_latency_us": 2762.151145955319, 00:36:40.246 "min_latency_us": 635.8552380952381, 00:36:40.246 "max_latency_us": 4962.011428571429 00:36:40.246 } 00:36:40.246 ], 00:36:40.246 "core_count": 1 00:36:40.246 } 00:36:40.246 12:58:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:40.246 12:58:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:40.246 12:58:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:40.246 12:58:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:40.246 12:58:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:40.246 | select(.opcode=="crc32c") 00:36:40.246 | "\(.module_name) \(.executed)"' 00:36:40.505 12:58:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:40.505 12:58:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:40.505 12:58:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:40.505 12:58:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:40.505 12:58:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 571358 00:36:40.505 12:58:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 571358 ']' 00:36:40.505 12:58:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 571358 00:36:40.505 12:58:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:36:40.505 12:58:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:40.505 12:58:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 571358 00:36:40.505 12:58:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:40.505 12:58:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 
= sudo ']' 00:36:40.505 12:58:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 571358' 00:36:40.505 killing process with pid 571358 00:36:40.505 12:58:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 571358 00:36:40.505 Received shutdown signal, test time was about 2.000000 seconds 00:36:40.505 00:36:40.505 Latency(us) 00:36:40.505 [2024-12-16T11:58:06.572Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:40.505 [2024-12-16T11:58:06.572Z] =================================================================================================================== 00:36:40.505 [2024-12-16T11:58:06.572Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:40.505 12:58:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 571358 00:36:40.764 12:58:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:36:40.764 12:58:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:40.764 12:58:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:40.764 12:58:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:36:40.764 12:58:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:36:40.764 12:58:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:36:40.764 12:58:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:40.764 12:58:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=571820 00:36:40.764 12:58:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 571820 /var/tmp/bperf.sock 00:36:40.764 12:58:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:36:40.764 12:58:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 571820 ']' 00:36:40.764 12:58:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:40.764 12:58:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:40.764 12:58:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:40.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:40.764 12:58:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:40.764 12:58:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:40.764 [2024-12-16 12:58:06.754345] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:36:40.764 [2024-12-16 12:58:06.754397] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid571820 ] 00:36:40.764 [2024-12-16 12:58:06.822297] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:41.023 [2024-12-16 12:58:06.860345] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:36:41.023 12:58:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:41.023 12:58:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:36:41.023 12:58:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:41.023 12:58:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:41.023 12:58:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:41.282 12:58:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:41.282 12:58:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:41.542 nvme0n1 00:36:41.542 12:58:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:41.542 12:58:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:41.800 Running I/O for 2 seconds... 
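Note: the get_accel_stats / jq pipeline traced after each run is how the test decides which module actually computed the crc32c digests; with scan_dsa=false the expected module is the software fallback. A sketch of that check, assuming the same socket and field names seen in this trace:

    # pull crc32c counters from the accel framework and verify who ran them
    read -r acc_module acc_executed < <(
        $SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    [[ $acc_module == software ]] && (( acc_executed > 0 )) &&
        echo "digests ran in software: $acc_executed ops"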
00:36:43.671 27429.00 IOPS, 107.14 MiB/s [2024-12-16T11:58:09.738Z] 27534.50 IOPS, 107.56 MiB/s 00:36:43.671 Latency(us) 00:36:43.671 [2024-12-16T11:58:09.738Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:43.671 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:43.671 nvme0n1 : 2.01 27536.70 107.57 0.00 0.00 4639.68 3464.05 10485.76 00:36:43.671 [2024-12-16T11:58:09.738Z] =================================================================================================================== 00:36:43.671 [2024-12-16T11:58:09.738Z] Total : 27536.70 107.57 0.00 0.00 4639.68 3464.05 10485.76 00:36:43.671 { 00:36:43.671 "results": [ 00:36:43.671 { 00:36:43.671 "job": "nvme0n1", 00:36:43.671 "core_mask": "0x2", 00:36:43.671 "workload": "randwrite", 00:36:43.671 "status": "finished", 00:36:43.671 "queue_depth": 128, 00:36:43.671 "io_size": 4096, 00:36:43.671 "runtime": 2.005651, 00:36:43.671 "iops": 27536.695068085126, 00:36:43.671 "mibps": 107.56521510970752, 00:36:43.671 "io_failed": 0, 00:36:43.671 "io_timeout": 0, 00:36:43.671 "avg_latency_us": 4639.677447579731, 00:36:43.671 "min_latency_us": 3464.0457142857144, 00:36:43.671 "max_latency_us": 10485.76 00:36:43.671 } 00:36:43.671 ], 00:36:43.671 "core_count": 1 00:36:43.671 } 00:36:43.671 12:58:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:43.671 12:58:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:43.671 12:58:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:43.671 12:58:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:43.671 | select(.opcode=="crc32c") 00:36:43.671 | "\(.module_name) \(.executed)"' 00:36:43.671 12:58:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:43.929 12:58:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:43.929 12:58:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:43.930 12:58:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:43.930 12:58:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:43.930 12:58:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 571820 00:36:43.930 12:58:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 571820 ']' 00:36:43.930 12:58:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 571820 00:36:43.930 12:58:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:36:43.930 12:58:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:43.930 12:58:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 571820 00:36:43.930 12:58:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:43.930 12:58:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:36:43.930 12:58:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 571820' 00:36:43.930 killing process with pid 571820 00:36:43.930 12:58:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 571820 00:36:43.930 Received shutdown signal, test time was about 2.000000 seconds 00:36:43.930 00:36:43.930 Latency(us) 00:36:43.930 [2024-12-16T11:58:09.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:43.930 [2024-12-16T11:58:09.997Z] =================================================================================================================== 00:36:43.930 [2024-12-16T11:58:09.997Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:43.930 12:58:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 571820 00:36:44.189 12:58:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:36:44.189 12:58:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:44.189 12:58:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:44.189 12:58:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:36:44.189 12:58:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:36:44.189 12:58:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:36:44.189 12:58:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:44.189 12:58:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=572489 00:36:44.189 12:58:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 572489 /var/tmp/bperf.sock 00:36:44.189 12:58:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:36:44.189 12:58:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 572489 ']' 00:36:44.189 12:58:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:44.189 12:58:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:44.189 12:58:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:44.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:44.189 12:58:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:44.189 12:58:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:44.189 [2024-12-16 12:58:10.186906] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:36:44.189 [2024-12-16 12:58:10.186956] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid572489 ] 00:36:44.189 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:44.189 Zero copy mechanism will not be used. 00:36:44.448 [2024-12-16 12:58:10.256290] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:44.448 [2024-12-16 12:58:10.296112] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:36:44.448 12:58:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:44.448 12:58:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:36:44.448 12:58:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:44.448 12:58:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:44.448 12:58:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:44.707 12:58:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:44.707 12:58:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:44.966 nvme0n1 00:36:44.966 12:58:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:44.966 12:58:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:44.966 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:44.966 Zero copy mechanism will not be used. 00:36:44.966 Running I/O for 2 seconds... 
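Note: each run ends with the killprocess teardown traced above; the all-zero latency table printed after "Received shutdown signal" is the summary emitted at shutdown, after the 2-second run has already reported its numbers. The teardown pattern, reduced to the commands visible in the trace:

    # refuse to kill an unexpected process, then terminate and reap bdevperf
    kill -0 "$bperfpid"                                        # still alive?
    [[ $(ps --no-headers -o comm= "$bperfpid") != sudo ]] || exit 1
    kill "$bperfpid"       # logged as "killing process with pid ..."
    wait "$bperfpid"       # propagates bdevperf's exit status to the test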
00:36:47.278 6362.00 IOPS, 795.25 MiB/s [2024-12-16T11:58:13.345Z] 6473.50 IOPS, 809.19 MiB/s 00:36:47.278 Latency(us) 00:36:47.278 [2024-12-16T11:58:13.345Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:47.278 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:36:47.278 nvme0n1 : 2.00 6471.83 808.98 0.00 0.00 2468.09 1786.64 9611.95 00:36:47.278 [2024-12-16T11:58:13.345Z] =================================================================================================================== 00:36:47.278 [2024-12-16T11:58:13.345Z] Total : 6471.83 808.98 0.00 0.00 2468.09 1786.64 9611.95 00:36:47.278 { 00:36:47.278 "results": [ 00:36:47.278 { 00:36:47.278 "job": "nvme0n1", 00:36:47.278 "core_mask": "0x2", 00:36:47.278 "workload": "randwrite", 00:36:47.278 "status": "finished", 00:36:47.278 "queue_depth": 16, 00:36:47.278 "io_size": 131072, 00:36:47.278 "runtime": 2.003605, 00:36:47.278 "iops": 6471.834518280799, 00:36:47.278 "mibps": 808.9793147850999, 00:36:47.278 "io_failed": 0, 00:36:47.278 "io_timeout": 0, 00:36:47.278 "avg_latency_us": 2468.093793549193, 00:36:47.278 "min_latency_us": 1786.6361904761904, 00:36:47.278 "max_latency_us": 9611.946666666667 00:36:47.278 } 00:36:47.278 ], 00:36:47.278 "core_count": 1 00:36:47.278 } 00:36:47.278 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:47.278 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:47.278 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:47.278 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:47.278 | select(.opcode=="crc32c") 00:36:47.278 | "\(.module_name) \(.executed)"' 00:36:47.278 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:47.278 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:47.278 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:47.278 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:47.278 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:47.278 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 572489 00:36:47.278 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 572489 ']' 00:36:47.278 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 572489 00:36:47.278 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:36:47.278 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:47.278 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 572489 00:36:47.278 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:47.278 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:36:47.278 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 572489' 00:36:47.278 killing process with pid 572489 00:36:47.278 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 572489 00:36:47.278 Received shutdown signal, test time was about 2.000000 seconds 00:36:47.278 00:36:47.278 Latency(us) 00:36:47.278 [2024-12-16T11:58:13.345Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:47.278 [2024-12-16T11:58:13.345Z] =================================================================================================================== 00:36:47.278 [2024-12-16T11:58:13.345Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:47.278 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 572489 00:36:47.537 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 570701 00:36:47.537 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 570701 ']' 00:36:47.537 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 570701 00:36:47.537 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:36:47.537 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:47.537 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 570701 00:36:47.537 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:47.537 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:47.537 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 570701' 00:36:47.537 killing process with pid 570701 00:36:47.537 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 570701 00:36:47.537 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 570701 00:36:47.796 00:36:47.796 real 0m13.951s 00:36:47.796 user 0m26.571s 00:36:47.796 sys 0m4.590s 00:36:47.796 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:47.796 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:47.796 ************************************ 00:36:47.796 END TEST nvmf_digest_clean 00:36:47.796 ************************************ 00:36:47.796 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:36:47.796 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:47.796 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:47.796 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:47.796 ************************************ 00:36:47.796 START TEST nvmf_digest_error 00:36:47.796 ************************************ 00:36:47.796 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # 
run_digest_error 00:36:47.796 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:36:47.797 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:36:47.797 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:47.797 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:47.797 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # nvmfpid=572973 00:36:47.797 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # waitforlisten 572973 00:36:47.797 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:36:47.797 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 572973 ']' 00:36:47.797 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:47.797 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:47.797 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:47.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:47.797 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:47.797 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:47.797 [2024-12-16 12:58:13.759894] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:36:47.797 [2024-12-16 12:58:13.759940] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:47.797 [2024-12-16 12:58:13.832219] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:48.056 [2024-12-16 12:58:13.871233] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:48.056 [2024-12-16 12:58:13.871273] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:48.056 [2024-12-16 12:58:13.871281] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:48.056 [2024-12-16 12:58:13.871286] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:48.056 [2024-12-16 12:58:13.871292] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:48.056 [2024-12-16 12:58:13.871326] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:36:48.056 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:48.056 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:36:48.056 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:36:48.056 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:48.056 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:48.056 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:48.056 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:36:48.056 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:48.056 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:48.056 [2024-12-16 12:58:13.947788] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:36:48.056 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:48.056 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:36:48.056 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:36:48.056 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:48.056 12:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:48.056 null0 00:36:48.056 [2024-12-16 12:58:14.037074] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:48.056 [2024-12-16 12:58:14.061271] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:48.056 12:58:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:48.056 12:58:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:36:48.056 12:58:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:48.056 12:58:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:36:48.056 12:58:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:36:48.056 12:58:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:36:48.056 12:58:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=573001 00:36:48.056 12:58:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 573001 /var/tmp/bperf.sock 00:36:48.056 12:58:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:36:48.056 12:58:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 573001 ']' 
00:36:48.056 12:58:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:48.056 12:58:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:48.057 12:58:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:48.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:48.057 12:58:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:48.057 12:58:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:48.057 [2024-12-16 12:58:14.112206] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:36:48.057 [2024-12-16 12:58:14.112247] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid573001 ] 00:36:48.316 [2024-12-16 12:58:14.180291] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:48.316 [2024-12-16 12:58:14.220336] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:36:48.316 12:58:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:48.316 12:58:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:36:48.316 12:58:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:48.316 12:58:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:48.574 12:58:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:48.574 12:58:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:48.574 12:58:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:48.574 12:58:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:48.574 12:58:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:48.574 12:58:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:49.143 nvme0n1 00:36:49.143 12:58:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:36:49.143 12:58:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:49.143 12:58:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
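Note: in the error variant the digests are made to fail on purpose: crc32c was assigned to the accel error module at target startup (accel_assign_opc -o crc32c -m error, traced earlier), injection stays disabled while the controller attaches cleanly, and 256 operations are then switched to corrupt just before perform_tests. With --bdev-retry-count -1 the initiator keeps retrying each failed I/O, which is why the log below is a stream of "data digest error" notices paired with COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions. The RPC sequence, reduced to the calls visible in this trace (netns and socket plumbing omitted):

    # target side: route crc32c through the error-injection module, then arm it
    rpc.py accel_assign_opc -o crc32c -m error                    # at --wait-for-rpc time
    rpc.py accel_error_inject_error -o crc32c -t disable          # clean attach first
    rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256   # corrupt the next 256 ops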
00:36:49.143 12:58:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:49.143 12:58:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:49.143 12:58:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:49.143 Running I/O for 2 seconds... 00:36:49.143 [2024-12-16 12:58:15.084581] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb06b20) 00:36:49.143 [2024-12-16 12:58:15.084618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:2950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.143 [2024-12-16 12:58:15.084628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.143 [2024-12-16 12:58:15.093877] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb06b20) 00:36:49.143 [2024-12-16 12:58:15.093901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:4357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.143 [2024-12-16 12:58:15.093910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.143 [2024-12-16 12:58:15.102527] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb06b20) 00:36:49.143 [2024-12-16 12:58:15.102548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:8726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.143 [2024-12-16 12:58:15.102558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.143 [2024-12-16 12:58:15.110943] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb06b20) 00:36:49.143 [2024-12-16 12:58:15.110964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:10813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.143 [2024-12-16 12:58:15.110972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.143 [2024-12-16 12:58:15.120336] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb06b20) 00:36:49.143 [2024-12-16 12:58:15.120356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.143 [2024-12-16 12:58:15.120365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.143 [2024-12-16 12:58:15.130198] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb06b20) 00:36:49.143 [2024-12-16 12:58:15.130218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:16243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.143 [2024-12-16 12:58:15.130227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.143 [2024-12-16 12:58:15.140415] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb06b20) 00:36:49.143 [2024-12-16 12:58:15.140435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:2007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.143 [2024-12-16 12:58:15.140443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.143 [2024-12-16 12:58:15.150217] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb06b20) 00:36:49.143 [2024-12-16 12:58:15.150235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:20202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.143 [2024-12-16 12:58:15.150243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.143 [2024-12-16 12:58:15.159200] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb06b20) 00:36:49.143 [2024-12-16 12:58:15.159220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:17405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.143 [2024-12-16 12:58:15.159228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.143 [2024-12-16 12:58:15.168234] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb06b20) 00:36:49.143 [2024-12-16 12:58:15.168253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.143 [2024-12-16 12:58:15.168261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.143 [2024-12-16 12:58:15.178959] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb06b20) 00:36:49.143 [2024-12-16 12:58:15.178979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.143 [2024-12-16 12:58:15.178987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.143 [2024-12-16 12:58:15.187651] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb06b20) 00:36:49.143 [2024-12-16 12:58:15.187672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:16607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.143 [2024-12-16 12:58:15.187680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.143 [2024-12-16 12:58:15.197272] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb06b20) 00:36:49.143 [2024-12-16 12:58:15.197291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.143 [2024-12-16 12:58:15.197299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:49.402 [2024-12-16 12:58:15.209067] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb06b20)
00:36:49.402 [2024-12-16 12:58:15.209087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:12585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.402 [2024-12-16 12:58:15.209095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... repeated entries of the same three-line pattern (nvme_tcp.c:1470 data digest error on tqpair=(0xb06b20), nvme_qpair.c:243 READ sqid:1 command print, nvme_qpair.c:474 TRANSIENT TRANSPORT ERROR (00/22) qid:1 completion print) from 12:58:15.220237 through 12:58:16.058036, differing only in cid and lba ...]
00:36:50.183 24913.00 IOPS, 97.32 MiB/s [2024-12-16T11:58:16.250Z]
[... repeated entries of the same three-line pattern from 12:58:16.068636 through 12:58:16.659322, differing only in cid and lba ...]
00:36:50.705 [2024-12-16 12:58:16.668211] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb06b20)
00:36:50.705 [2024-12-16 12:58:16.668230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:17203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:50.705 [2024-12-16 12:58:16.668238]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.705 [2024-12-16 12:58:16.678449] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb06b20) 00:36:50.705 [2024-12-16 12:58:16.678468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.705 [2024-12-16 12:58:16.678476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.705 [2024-12-16 12:58:16.686502] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb06b20) 00:36:50.705 [2024-12-16 12:58:16.686521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:5193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.705 [2024-12-16 12:58:16.686529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.705 [2024-12-16 12:58:16.696905] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb06b20) 00:36:50.705 [2024-12-16 12:58:16.696925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.705 [2024-12-16 12:58:16.696933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.705 [2024-12-16 12:58:16.706181] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb06b20) 00:36:50.705 [2024-12-16 12:58:16.706200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.705 [2024-12-16 12:58:16.706209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.705 [2024-12-16 12:58:16.716123] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb06b20) 00:36:50.705 [2024-12-16 12:58:16.716144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.705 [2024-12-16 12:58:16.716152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.705 [2024-12-16 12:58:16.725227] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb06b20) 00:36:50.705 [2024-12-16 12:58:16.725246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:17645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.705 [2024-12-16 12:58:16.725254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.705 [2024-12-16 12:58:16.734521] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb06b20) 00:36:50.705 [2024-12-16 12:58:16.734540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:50.705 [2024-12-16 12:58:16.734548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.705 [2024-12-16 12:58:16.743195] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb06b20) 00:36:50.705 [2024-12-16 12:58:16.743213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:4463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.705 [2024-12-16 12:58:16.743221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.705 [2024-12-16 12:58:16.752949] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb06b20) 00:36:50.705 [2024-12-16 12:58:16.752968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:19663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.705 [2024-12-16 12:58:16.752976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.705 [2024-12-16 12:58:16.762969] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb06b20) 00:36:50.705 [2024-12-16 12:58:16.762988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:20121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.705 [2024-12-16 12:58:16.762996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.965 [2024-12-16 12:58:16.771630] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb06b20) 00:36:50.965 [2024-12-16 12:58:16.771649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.965 [2024-12-16 12:58:16.771656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.965 [2024-12-16 12:58:16.784130] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb06b20) 00:36:50.965 [2024-12-16 12:58:16.784149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:14918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.965 [2024-12-16 12:58:16.784157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.965 [2024-12-16 12:58:16.794558] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb06b20) 00:36:50.965 [2024-12-16 12:58:16.794576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.965 [2024-12-16 12:58:16.794584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.965 [2024-12-16 12:58:16.802633] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb06b20) 00:36:50.965 [2024-12-16 12:58:16.802651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5353 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.965 [2024-12-16 12:58:16.802659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.965 [2024-12-16 12:58:16.813493] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb06b20) 00:36:50.965 [2024-12-16 12:58:16.813511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.965 [2024-12-16 12:58:16.813525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.965 [2024-12-16 12:58:16.826357] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb06b20) 00:36:50.965 [2024-12-16 12:58:16.826376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:17467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.965 [2024-12-16 12:58:16.826384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.965 [2024-12-16 12:58:16.837195] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb06b20) 00:36:50.965 [2024-12-16 12:58:16.837214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.965 [2024-12-16 12:58:16.837222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.965 [2024-12-16 12:58:16.846547] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb06b20) 00:36:50.965 [2024-12-16 12:58:16.846565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:20716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.965 [2024-12-16 12:58:16.846573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.965 [2024-12-16 12:58:16.856003] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb06b20) 00:36:50.965 [2024-12-16 12:58:16.856022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.965 [2024-12-16 12:58:16.856030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.965 [2024-12-16 12:58:16.865912] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb06b20) 00:36:50.965 [2024-12-16 12:58:16.865931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:13139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.965 [2024-12-16 12:58:16.865938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.965 [2024-12-16 12:58:16.876310] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb06b20) 00:36:50.965 [2024-12-16 12:58:16.876329] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.965 [2024-12-16 12:58:16.876337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.966 [2024-12-16 12:58:16.885018] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb06b20) 00:36:50.966 [2024-12-16 12:58:16.885037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.966 [2024-12-16 12:58:16.885044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.966 [2024-12-16 12:58:16.895769] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb06b20) 00:36:50.966 [2024-12-16 12:58:16.895789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:18984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.966 [2024-12-16 12:58:16.895797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.966 [2024-12-16 12:58:16.905680] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb06b20) 00:36:50.966 [2024-12-16 12:58:16.905701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.966 [2024-12-16 12:58:16.905709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.966 [2024-12-16 12:58:16.914377] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb06b20) 00:36:50.966 [2024-12-16 12:58:16.914395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:11481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.966 [2024-12-16 12:58:16.914403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.966 [2024-12-16 12:58:16.926525] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb06b20) 00:36:50.966 [2024-12-16 12:58:16.926544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.966 [2024-12-16 12:58:16.926552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.966 [2024-12-16 12:58:16.938856] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb06b20) 00:36:50.966 [2024-12-16 12:58:16.938876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:13788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.966 [2024-12-16 12:58:16.938884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.966 [2024-12-16 12:58:16.947006] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb06b20) 00:36:50.966 [2024-12-16 12:58:16.947025] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:2865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.966 [2024-12-16 12:58:16.947032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.966 [2024-12-16 12:58:16.957611] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb06b20) 00:36:50.966 [2024-12-16 12:58:16.957631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.966 [2024-12-16 12:58:16.957639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.966 [2024-12-16 12:58:16.968271] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb06b20) 00:36:50.966 [2024-12-16 12:58:16.968292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.966 [2024-12-16 12:58:16.968299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.966 [2024-12-16 12:58:16.979800] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb06b20) 00:36:50.966 [2024-12-16 12:58:16.979820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.966 [2024-12-16 12:58:16.979829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.966 [2024-12-16 12:58:16.988676] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb06b20) 00:36:50.966 [2024-12-16 12:58:16.988694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.966 [2024-12-16 12:58:16.988702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.966 [2024-12-16 12:58:16.999308] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb06b20) 00:36:50.966 [2024-12-16 12:58:16.999328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:19390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.966 [2024-12-16 12:58:16.999335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.966 [2024-12-16 12:58:17.009618] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb06b20) 00:36:50.966 [2024-12-16 12:58:17.009637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:11595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.966 [2024-12-16 12:58:17.009645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.966 [2024-12-16 12:58:17.019606] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb06b20) 
00:36:50.966 [2024-12-16 12:58:17.019626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:18819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:50.966 [2024-12-16 12:58:17.019634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:51.225 [2024-12-16 12:58:17.031166] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb06b20)
00:36:51.225 [2024-12-16 12:58:17.031186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:15054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.225 [2024-12-16 12:58:17.031194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:51.225 [2024-12-16 12:58:17.040213] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb06b20)
00:36:51.225 [2024-12-16 12:58:17.040233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:14603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.225 [2024-12-16 12:58:17.040241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:51.225 [2024-12-16 12:58:17.048451] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb06b20)
00:36:51.226 [2024-12-16 12:58:17.048471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:6116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.226 [2024-12-16 12:58:17.048478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:51.226 [2024-12-16 12:58:17.058778] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb06b20)
00:36:51.226 [2024-12-16 12:58:17.058798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.226 [2024-12-16 12:58:17.058806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:51.226 [2024-12-16 12:58:17.066861] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb06b20)
00:36:51.226 [2024-12-16 12:58:17.066881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.226 [2024-12-16 12:58:17.066889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:51.226 25249.50 IOPS, 98.63 MiB/s
00:36:51.226 Latency(us)
00:36:51.226 [2024-12-16T11:58:17.293Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:51.226 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:36:51.226 nvme0n1 : 2.01 25250.86 98.64 0.00 0.00 5062.90 2527.82 18225.25
00:36:51.226 [2024-12-16T11:58:17.293Z] ===================================================================================================================
00:36:51.226 [2024-12-16T11:58:17.293Z] Total : 25250.86 98.64 0.00 0.00 5062.90 2527.82 18225.25
00:36:51.226 {
00:36:51.226 "results": [
00:36:51.226 {
00:36:51.226 "job": "nvme0n1",
00:36:51.226 "core_mask": "0x2",
00:36:51.226 "workload": "randread",
00:36:51.226 "status": "finished",
00:36:51.226 "queue_depth": 128,
00:36:51.226 "io_size": 4096,
00:36:51.226 "runtime": 2.006308,
00:36:51.226 "iops": 25250.85879137201,
00:36:51.226 "mibps": 98.63616715379692,
00:36:51.226 "io_failed": 0,
00:36:51.226 "io_timeout": 0,
00:36:51.226 "avg_latency_us": 5062.899492856814,
00:36:51.226 "min_latency_us": 2527.8171428571427,
00:36:51.226 "max_latency_us": 18225.249523809525
00:36:51.226 }
00:36:51.226 ],
00:36:51.226 "core_count": 1
00:36:51.226 }
00:36:51.226 12:58:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:36:51.226 12:58:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:36:51.226 12:58:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:36:51.226 | .driver_specific
00:36:51.226 | .nvme_error
00:36:51.226 | .status_code
00:36:51.226 | .command_transient_transport_error'
00:36:51.226 12:58:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:36:51.226 12:58:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 198 > 0 ))
00:36:51.226 12:58:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 573001
00:36:51.226 12:58:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 573001 ']'
00:36:51.226 12:58:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 573001
00:36:51.226 12:58:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:36:51.485 12:58:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:36:51.485 12:58:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 573001
00:36:51.485 12:58:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:36:51.485 12:58:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:36:51.485 12:58:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 573001'
00:36:51.485 killing process with pid 573001
00:36:51.485 12:58:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 573001
00:36:51.485 Received shutdown signal, test time was about 2.000000 seconds
00:36:51.485
00:36:51.485 Latency(us)
00:36:51.485 [2024-12-16T11:58:17.552Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:51.485 [2024-12-16T11:58:17.552Z] ===================================================================================================================
00:36:51.485 [2024-12-16T11:58:17.552Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:36:51.485 12:58:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 573001
00:36:51.485 12:58:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:36:51.485 12:58:17
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:51.485 12:58:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:36:51.485 12:58:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:36:51.485 12:58:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:36:51.485 12:58:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=573662 00:36:51.485 12:58:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 573662 /var/tmp/bperf.sock 00:36:51.485 12:58:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:36:51.485 12:58:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 573662 ']' 00:36:51.485 12:58:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:51.485 12:58:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:51.485 12:58:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:51.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:51.485 12:58:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:51.485 12:58:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:51.744 [2024-12-16 12:58:17.561429] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:36:51.744 [2024-12-16 12:58:17.561481] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid573662 ] 00:36:51.744 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:51.744 Zero copy mechanism will not be used. 
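The trace above starts bdevperf as a long-lived RPC server and then waits for its UNIX socket to come up. A minimal standalone sketch of that step, assuming only the SPDK checkout path, socket path, and bdevperf flags that appear in this log; the polling loop is a hypothetical stand-in for the harness's waitforlisten helper, with rpc_get_methods used purely as a liveness probe:

    #!/usr/bin/env bash
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    SOCK=/var/tmp/bperf.sock

    # Start bdevperf in server mode (-z): core mask 0x2, 128 KiB random reads,
    # queue depth 16, 2-second runs, RPC listener on $SOCK.
    "$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!

    # Hypothetical stand-in for waitforlisten: retry until the socket answers.
    for _ in $(seq 1 100); do
        "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done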
00:36:51.744 [2024-12-16 12:58:17.629957] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:51.744 [2024-12-16 12:58:17.669417] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:36:51.744 12:58:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:51.744 12:58:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:36:51.744 12:58:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:51.744 12:58:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:52.003 12:58:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:52.003 12:58:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:52.003 12:58:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:52.003 12:58:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:52.003 12:58:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:52.003 12:58:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:52.262 nvme0n1 00:36:52.522 12:58:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:36:52.522 12:58:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:52.522 12:58:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:52.522 12:58:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:52.522 12:58:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:52.522 12:58:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:52.522 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:52.522 Zero copy mechanism will not be used. 00:36:52.522 Running I/O for 2 seconds... 
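Once bdevperf is listening, the digest-error pass traced above is driven entirely over that socket: enable per-command NVMe error counters, attach the target with data digest on, arm crc32c corruption in the accel layer, run the workload, and read the transient-error count back out of bdev_get_iostat. A sketch of the same sequence, using only the RPCs and paths that appear verbatim in this trace (ordering simplified, error handling omitted):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"

    # Keep per-command NVMe error counters and retry failed I/O indefinitely,
    # so digest failures are counted rather than failing the job outright.
    $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach the TCP target with data digest enabled (--ddgst), creating nvme0n1,
    # then corrupt every 32nd crc32c so received READ data fails its digest check.
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32

    # Run the configured workload, then tally completions that ended in
    # TRANSIENT TRANSPORT ERROR (00/22), i.e. the injected digest failures.
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests
    $RPC bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

Each injected corruption surfaces below as an nvme_tcp.c:1470 data digest error followed by a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion; the jq expression is how the harness reduces those completions to the single count it asserts on.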
00:36:52.522 [2024-12-16 12:58:18.435224] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:52.522 [2024-12-16 12:58:18.435257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.522 [2024-12-16 12:58:18.435268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:52.522 [2024-12-16 12:58:18.439544] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:52.522 [2024-12-16 12:58:18.439567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.522 [2024-12-16 12:58:18.439576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:52.522 [2024-12-16 12:58:18.443795] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:52.522 [2024-12-16 12:58:18.443815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.522 [2024-12-16 12:58:18.443824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:52.522 [2024-12-16 12:58:18.447971] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:52.522 [2024-12-16 12:58:18.447991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.522 [2024-12-16 12:58:18.448000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:52.522 [2024-12-16 12:58:18.452318] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:52.522 [2024-12-16 12:58:18.452337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.522 [2024-12-16 12:58:18.452345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:52.522 [2024-12-16 12:58:18.457556] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:52.522 [2024-12-16 12:58:18.457577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.522 [2024-12-16 12:58:18.457585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:52.522 [2024-12-16 12:58:18.463160] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:52.522 [2024-12-16 12:58:18.463180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.522 [2024-12-16 12:58:18.463189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:52.522 [2024-12-16 12:58:18.468850] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:52.522 [2024-12-16 12:58:18.468872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.522 [2024-12-16 12:58:18.468880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:52.522 [2024-12-16 12:58:18.475212] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:52.522 [2024-12-16 12:58:18.475236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.522 [2024-12-16 12:58:18.475245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:52.522 [2024-12-16 12:58:18.481325] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:52.522 [2024-12-16 12:58:18.481364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.522 [2024-12-16 12:58:18.481373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:52.522 [2024-12-16 12:58:18.486460] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:52.522 [2024-12-16 12:58:18.486482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.522 [2024-12-16 12:58:18.486490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:52.522 [2024-12-16 12:58:18.491625] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:52.522 [2024-12-16 12:58:18.491647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.522 [2024-12-16 12:58:18.491655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:52.522 [2024-12-16 12:58:18.497540] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:52.522 [2024-12-16 12:58:18.497562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.522 [2024-12-16 12:58:18.497570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:52.522 [2024-12-16 12:58:18.503136] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:52.522 [2024-12-16 12:58:18.503157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.522 [2024-12-16 12:58:18.503166] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:52.522 [2024-12-16 12:58:18.509193] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:52.522 [2024-12-16 12:58:18.509215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.522 [2024-12-16 12:58:18.509224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:52.522 [2024-12-16 12:58:18.514694] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:52.522 [2024-12-16 12:58:18.514719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.522 [2024-12-16 12:58:18.514727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:52.522 [2024-12-16 12:58:18.521846] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:52.522 [2024-12-16 12:58:18.521869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.522 [2024-12-16 12:58:18.521883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:52.522 [2024-12-16 12:58:18.528668] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:52.522 [2024-12-16 12:58:18.528689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.522 [2024-12-16 12:58:18.528698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:52.522 [2024-12-16 12:58:18.534380] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:52.522 [2024-12-16 12:58:18.534401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.522 [2024-12-16 12:58:18.534410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:52.522 [2024-12-16 12:58:18.541005] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:52.522 [2024-12-16 12:58:18.541027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.522 [2024-12-16 12:58:18.541035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:52.522 [2024-12-16 12:58:18.546992] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:52.522 [2024-12-16 12:58:18.547013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.523 
[2024-12-16 12:58:18.547022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:52.523 [2024-12-16 12:58:18.552441] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:52.523 [2024-12-16 12:58:18.552464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.523 [2024-12-16 12:58:18.552473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:52.523 [2024-12-16 12:58:18.557774] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:52.523 [2024-12-16 12:58:18.557795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.523 [2024-12-16 12:58:18.557803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:52.523 [2024-12-16 12:58:18.562914] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:52.523 [2024-12-16 12:58:18.562933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.523 [2024-12-16 12:58:18.562941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:52.523 [2024-12-16 12:58:18.568141] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:52.523 [2024-12-16 12:58:18.568161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.523 [2024-12-16 12:58:18.568170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:52.523 [2024-12-16 12:58:18.573556] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:52.523 [2024-12-16 12:58:18.573581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.523 [2024-12-16 12:58:18.573589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:52.523 [2024-12-16 12:58:18.579181] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:52.523 [2024-12-16 12:58:18.579202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.523 [2024-12-16 12:58:18.579210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:52.523 [2024-12-16 12:58:18.585285] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:52.523 [2024-12-16 12:58:18.585306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22912 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:52.523 [2024-12-16 12:58:18.585314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:52.783 [2024-12-16 12:58:18.590982] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90)
00:36:52.783 [2024-12-16 12:58:18.591003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:52.783 [2024-12-16 12:58:18.591013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... repeated data digest error / READ / COMMAND TRANSIENT TRANSPORT ERROR triplets from [2024-12-16 12:58:18.596147] through [2024-12-16 12:58:19.376184] elided; all on tqpair=(0x1b7bd90), qid:1, len:32, dnr:0, with only cid and lba varying ...]
00:36:53.567 [2024-12-16 12:58:19.381400] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90)
00:36:53.567 [2024-12-16 12:58:19.381420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.567 [2024-12-16 12:58:19.381428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:53.567 [2024-12-16 12:58:19.386541] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90)
00:36:53.567 [2024-12-16 12:58:19.386561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*:
READ sqid:1 cid:9 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.567 [2024-12-16 12:58:19.386569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.567 [2024-12-16 12:58:19.391878] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.567 [2024-12-16 12:58:19.391898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.567 [2024-12-16 12:58:19.391906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.567 [2024-12-16 12:58:19.397045] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.567 [2024-12-16 12:58:19.397065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.567 [2024-12-16 12:58:19.397074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.567 [2024-12-16 12:58:19.402308] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.567 [2024-12-16 12:58:19.402329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.567 [2024-12-16 12:58:19.402337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.567 [2024-12-16 12:58:19.407691] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.567 [2024-12-16 12:58:19.407712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.567 [2024-12-16 12:58:19.407719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.567 [2024-12-16 12:58:19.413029] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.567 [2024-12-16 12:58:19.413049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.567 [2024-12-16 12:58:19.413057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.567 [2024-12-16 12:58:19.418327] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.567 [2024-12-16 12:58:19.418347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.567 [2024-12-16 12:58:19.418355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.567 [2024-12-16 12:58:19.423619] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.567 [2024-12-16 12:58:19.423639] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.567 [2024-12-16 12:58:19.423647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.567 [2024-12-16 12:58:19.428902] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.567 [2024-12-16 12:58:19.428922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.567 [2024-12-16 12:58:19.428936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.567 5639.00 IOPS, 704.88 MiB/s [2024-12-16T11:58:19.634Z] [2024-12-16 12:58:19.435378] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.567 [2024-12-16 12:58:19.435399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.567 [2024-12-16 12:58:19.435406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.567 [2024-12-16 12:58:19.441223] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.567 [2024-12-16 12:58:19.441247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.567 [2024-12-16 12:58:19.441255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.567 [2024-12-16 12:58:19.448808] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.567 [2024-12-16 12:58:19.448830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.567 [2024-12-16 12:58:19.448838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.567 [2024-12-16 12:58:19.455802] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.567 [2024-12-16 12:58:19.455823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.568 [2024-12-16 12:58:19.455831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.568 [2024-12-16 12:58:19.462573] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.568 [2024-12-16 12:58:19.462594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.568 [2024-12-16 12:58:19.462602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.568 [2024-12-16 12:58:19.468662] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.568 [2024-12-16 12:58:19.468684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.568 [2024-12-16 12:58:19.468692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.568 [2024-12-16 12:58:19.472262] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.568 [2024-12-16 12:58:19.472281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.568 [2024-12-16 12:58:19.472289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.568 [2024-12-16 12:58:19.476510] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.568 [2024-12-16 12:58:19.476531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.568 [2024-12-16 12:58:19.476538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.568 [2024-12-16 12:58:19.481864] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.568 [2024-12-16 12:58:19.481884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.568 [2024-12-16 12:58:19.481892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.568 [2024-12-16 12:58:19.487083] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.568 [2024-12-16 12:58:19.487103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.568 [2024-12-16 12:58:19.487111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.568 [2024-12-16 12:58:19.492325] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.568 [2024-12-16 12:58:19.492345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.568 [2024-12-16 12:58:19.492353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.568 [2024-12-16 12:58:19.497507] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.568 [2024-12-16 12:58:19.497527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.568 [2024-12-16 12:58:19.497535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:36:53.568 [2024-12-16 12:58:19.503502] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.568 [2024-12-16 12:58:19.503523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.568 [2024-12-16 12:58:19.503531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.568 [2024-12-16 12:58:19.509776] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.568 [2024-12-16 12:58:19.509796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.568 [2024-12-16 12:58:19.509804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.568 [2024-12-16 12:58:19.517852] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.568 [2024-12-16 12:58:19.517874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.568 [2024-12-16 12:58:19.517881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.568 [2024-12-16 12:58:19.525845] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.568 [2024-12-16 12:58:19.525867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.568 [2024-12-16 12:58:19.525875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.568 [2024-12-16 12:58:19.534080] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.568 [2024-12-16 12:58:19.534102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.568 [2024-12-16 12:58:19.534118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.568 [2024-12-16 12:58:19.542476] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.568 [2024-12-16 12:58:19.542498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.568 [2024-12-16 12:58:19.542506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.568 [2024-12-16 12:58:19.551063] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.568 [2024-12-16 12:58:19.551084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.568 [2024-12-16 12:58:19.551092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.568 [2024-12-16 12:58:19.558513] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.568 [2024-12-16 12:58:19.558536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.568 [2024-12-16 12:58:19.558546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.568 [2024-12-16 12:58:19.565754] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.568 [2024-12-16 12:58:19.565776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.568 [2024-12-16 12:58:19.565784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.568 [2024-12-16 12:58:19.574548] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.568 [2024-12-16 12:58:19.574570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.568 [2024-12-16 12:58:19.574579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.568 [2024-12-16 12:58:19.583079] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.568 [2024-12-16 12:58:19.583100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.568 [2024-12-16 12:58:19.583108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.568 [2024-12-16 12:58:19.590775] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.568 [2024-12-16 12:58:19.590796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.568 [2024-12-16 12:58:19.590805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.568 [2024-12-16 12:58:19.598336] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.568 [2024-12-16 12:58:19.598358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.568 [2024-12-16 12:58:19.598366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.568 [2024-12-16 12:58:19.606272] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.568 [2024-12-16 12:58:19.606297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.568 [2024-12-16 12:58:19.606305] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.568 [2024-12-16 12:58:19.613966] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.568 [2024-12-16 12:58:19.613988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.568 [2024-12-16 12:58:19.613996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.568 [2024-12-16 12:58:19.620698] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.568 [2024-12-16 12:58:19.620720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.568 [2024-12-16 12:58:19.620728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.568 [2024-12-16 12:58:19.625830] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.568 [2024-12-16 12:58:19.625852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.568 [2024-12-16 12:58:19.625860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.568 [2024-12-16 12:58:19.631064] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.568 [2024-12-16 12:58:19.631085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.568 [2024-12-16 12:58:19.631093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.828 [2024-12-16 12:58:19.636539] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.828 [2024-12-16 12:58:19.636561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.828 [2024-12-16 12:58:19.636569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.828 [2024-12-16 12:58:19.643238] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.828 [2024-12-16 12:58:19.643259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.828 [2024-12-16 12:58:19.643268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.828 [2024-12-16 12:58:19.649596] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.828 [2024-12-16 12:58:19.649616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.828 [2024-12-16 
12:58:19.649625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.828 [2024-12-16 12:58:19.652946] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.828 [2024-12-16 12:58:19.652966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.828 [2024-12-16 12:58:19.652973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.828 [2024-12-16 12:58:19.660241] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.828 [2024-12-16 12:58:19.660262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.828 [2024-12-16 12:58:19.660270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.828 [2024-12-16 12:58:19.667840] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.828 [2024-12-16 12:58:19.667861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.828 [2024-12-16 12:58:19.667869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.828 [2024-12-16 12:58:19.675790] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.828 [2024-12-16 12:58:19.675811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.828 [2024-12-16 12:58:19.675820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.828 [2024-12-16 12:58:19.683394] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.828 [2024-12-16 12:58:19.683415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.828 [2024-12-16 12:58:19.683423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.828 [2024-12-16 12:58:19.691373] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.828 [2024-12-16 12:58:19.691394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.828 [2024-12-16 12:58:19.691403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.828 [2024-12-16 12:58:19.699416] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.828 [2024-12-16 12:58:19.699437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:53.828 [2024-12-16 12:58:19.699445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.828 [2024-12-16 12:58:19.707384] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.828 [2024-12-16 12:58:19.707406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.829 [2024-12-16 12:58:19.707414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.829 [2024-12-16 12:58:19.715399] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.829 [2024-12-16 12:58:19.715420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.829 [2024-12-16 12:58:19.715429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.829 [2024-12-16 12:58:19.723486] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.829 [2024-12-16 12:58:19.723507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.829 [2024-12-16 12:58:19.723519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.829 [2024-12-16 12:58:19.731487] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.829 [2024-12-16 12:58:19.731508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.829 [2024-12-16 12:58:19.731516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.829 [2024-12-16 12:58:19.739060] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.829 [2024-12-16 12:58:19.739081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.829 [2024-12-16 12:58:19.739090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.829 [2024-12-16 12:58:19.746755] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.829 [2024-12-16 12:58:19.746777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.829 [2024-12-16 12:58:19.746785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.829 [2024-12-16 12:58:19.754247] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.829 [2024-12-16 12:58:19.754268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.829 [2024-12-16 12:58:19.754276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.829 [2024-12-16 12:58:19.761477] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.829 [2024-12-16 12:58:19.761498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.829 [2024-12-16 12:58:19.761507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.829 [2024-12-16 12:58:19.768222] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.829 [2024-12-16 12:58:19.768243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.829 [2024-12-16 12:58:19.768251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.829 [2024-12-16 12:58:19.776486] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.829 [2024-12-16 12:58:19.776507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.829 [2024-12-16 12:58:19.776515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.829 [2024-12-16 12:58:19.783012] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.829 [2024-12-16 12:58:19.783033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.829 [2024-12-16 12:58:19.783041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.829 [2024-12-16 12:58:19.789282] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.829 [2024-12-16 12:58:19.789307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.829 [2024-12-16 12:58:19.789315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.829 [2024-12-16 12:58:19.794915] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.829 [2024-12-16 12:58:19.794936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.829 [2024-12-16 12:58:19.794944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.829 [2024-12-16 12:58:19.800267] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.829 [2024-12-16 12:58:19.800289] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.829 [2024-12-16 12:58:19.800296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.829 [2024-12-16 12:58:19.805944] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.829 [2024-12-16 12:58:19.805965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.829 [2024-12-16 12:58:19.805973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.829 [2024-12-16 12:58:19.811336] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.829 [2024-12-16 12:58:19.811358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.829 [2024-12-16 12:58:19.811366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.829 [2024-12-16 12:58:19.816138] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.829 [2024-12-16 12:58:19.816158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.829 [2024-12-16 12:58:19.816166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.829 [2024-12-16 12:58:19.821376] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.829 [2024-12-16 12:58:19.821396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.829 [2024-12-16 12:58:19.821405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.829 [2024-12-16 12:58:19.826704] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.829 [2024-12-16 12:58:19.826726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.829 [2024-12-16 12:58:19.826734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.829 [2024-12-16 12:58:19.833024] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.829 [2024-12-16 12:58:19.833046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.829 [2024-12-16 12:58:19.833054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.829 [2024-12-16 12:58:19.838803] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.829 
[2024-12-16 12:58:19.838826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.829 [2024-12-16 12:58:19.838834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.829 [2024-12-16 12:58:19.844738] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.829 [2024-12-16 12:58:19.844760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.829 [2024-12-16 12:58:19.844769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.829 [2024-12-16 12:58:19.850793] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.829 [2024-12-16 12:58:19.850819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.829 [2024-12-16 12:58:19.850827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.829 [2024-12-16 12:58:19.857213] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.829 [2024-12-16 12:58:19.857237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.829 [2024-12-16 12:58:19.857245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.829 [2024-12-16 12:58:19.864244] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.829 [2024-12-16 12:58:19.864265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.829 [2024-12-16 12:58:19.864274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.829 [2024-12-16 12:58:19.870807] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.829 [2024-12-16 12:58:19.870831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.829 [2024-12-16 12:58:19.870839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.829 [2024-12-16 12:58:19.876956] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.829 [2024-12-16 12:58:19.876979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.829 [2024-12-16 12:58:19.876987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.830 [2024-12-16 12:58:19.882695] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1b7bd90) 00:36:53.830 [2024-12-16 12:58:19.882717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.830 [2024-12-16 12:58:19.882726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.830 [2024-12-16 12:58:19.888977] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:53.830 [2024-12-16 12:58:19.888999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.830 [2024-12-16 12:58:19.889011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:54.090 [2024-12-16 12:58:19.895670] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:54.090 [2024-12-16 12:58:19.895692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.090 [2024-12-16 12:58:19.895700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:54.090 [2024-12-16 12:58:19.901766] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:54.090 [2024-12-16 12:58:19.901789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.090 [2024-12-16 12:58:19.901797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:54.090 [2024-12-16 12:58:19.908591] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:54.090 [2024-12-16 12:58:19.908613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.090 [2024-12-16 12:58:19.908621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:54.090 [2024-12-16 12:58:19.914336] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:54.090 [2024-12-16 12:58:19.914357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.090 [2024-12-16 12:58:19.914365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:54.090 [2024-12-16 12:58:19.919512] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:54.090 [2024-12-16 12:58:19.919533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.090 [2024-12-16 12:58:19.919541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:54.091 [2024-12-16 12:58:19.925175] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:54.091 [2024-12-16 12:58:19.925196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.091 [2024-12-16 12:58:19.925204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:54.091 [2024-12-16 12:58:19.929939] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:54.091 [2024-12-16 12:58:19.929960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.091 [2024-12-16 12:58:19.929968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:54.091 [2024-12-16 12:58:19.933492] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:54.091 [2024-12-16 12:58:19.933512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.091 [2024-12-16 12:58:19.933520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:54.091 [2024-12-16 12:58:19.939022] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:54.091 [2024-12-16 12:58:19.939046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.091 [2024-12-16 12:58:19.939054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:54.091 [2024-12-16 12:58:19.944929] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:54.091 [2024-12-16 12:58:19.944951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.091 [2024-12-16 12:58:19.944960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:54.091 [2024-12-16 12:58:19.950414] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:54.091 [2024-12-16 12:58:19.950436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.091 [2024-12-16 12:58:19.950445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:54.091 [2024-12-16 12:58:19.956475] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90) 00:36:54.091 [2024-12-16 12:58:19.956496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.091 [2024-12-16 12:58:19.956504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0
00:36:54.091 [2024-12-16 12:58:19.961759] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90)
00:36:54.091 [2024-12-16 12:58:19.961780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:54.091 [2024-12-16 12:58:19.961789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same three-line group (data digest error on tqpair 0x1b7bd90, READ command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for every remaining read of the run, timestamps 12:58:19.966422 through 12:58:20.431298; only the cid, lba, and sqhd values differ ...]
00:36:54.614 5625.50 IOPS, 703.19 MiB/s [2024-12-16T11:58:20.681Z]
00:36:54.614 [2024-12-16 12:58:20.437844] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd90)
00:36:54.614 [2024-12-16 12:58:20.437864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:54.614 [2024-12-16 12:58:20.437872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:54.614 
00:36:54.614 Latency(us)
00:36:54.614 [2024-12-16T11:58:20.681Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:54.614 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:36:54.614 nvme0n1 : 2.00 5625.15 703.14 0.00 0.00 2841.33 631.95 8925.38
00:36:54.614 [2024-12-16T11:58:20.681Z] ===================================================================================================================
00:36:54.614 [2024-12-16T11:58:20.681Z] Total : 5625.15 703.14 0.00 0.00 2841.33 631.95 8925.38
00:36:54.614 {
00:36:54.614   "results": [
00:36:54.614     {
00:36:54.614       "job": "nvme0n1",
00:36:54.614       "core_mask": "0x2",
00:36:54.614       "workload": "randread",
00:36:54.614       "status": "finished",
00:36:54.614       "queue_depth": 16,
00:36:54.614       "io_size": 131072,
00:36:54.614       "runtime": 2.00297,
00:36:54.614       "iops": 5625.146657214037,
00:36:54.614       "mibps": 703.1433321517546,
00:36:54.614       "io_failed": 0,
00:36:54.614       "io_timeout": 0,
00:36:54.614       "avg_latency_us": 2841.3279296047876,
00:36:54.614       "min_latency_us": 631.9542857142857,
00:36:54.614       "max_latency_us": 8925.379047619048
00:36:54.614     }
00:36:54.614   ],
00:36:54.614   "core_count": 1
00:36:54.614 }
00:36:54.614 12:58:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:36:54.614 12:58:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:36:54.614 12:58:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:36:54.614 | .driver_specific
00:36:54.614 | .nvme_error
00:36:54.614 | .status_code
00:36:54.614 | .command_transient_transport_error'
00:36:54.614 12:58:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:36:54.614 12:58:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 363 > 0 ))
00:36:54.615 12:58:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 573662
00:36:54.615 12:58:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 573662 ']'
00:36:54.615 12:58:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 573662
00:36:54.615 12:58:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:36:54.615 12:58:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:36:54.615 12:58:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 573662
00:36:54.875 12:58:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:36:54.875 12:58:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:36:54.875 12:58:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 573662'
00:36:54.875 killing process with pid 573662
00:36:54.875 12:58:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 573662
00:36:54.875 Received shutdown signal, test time was about 2.000000 seconds
00:36:54.875 
00:36:54.875 Latency(us)
00:36:54.875 [2024-12-16T11:58:20.942Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:54.875 [2024-12-16T11:58:20.942Z] ===================================================================================================================
00:36:54.875 [2024-12-16T11:58:20.942Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:36:54.875 12:58:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 573662
00:36:54.875 12:58:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:36:54.875 12:58:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:36:54.875 12:58:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:36:54.875 12:58:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:36:54.875 12:58:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:36:54.875 12:58:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=574137
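The (( 363 > 0 )) assertion above is the pass criterion for the randread leg: digest.sh pulls the per-bdev NVMe error counters out of bdev_get_iostat and requires that at least one command completed with a transient transport error, while io_failed stays 0 because --bdev-retry-count -1 retries each affected command until it succeeds. A minimal standalone sketch of that check, assuming the same bperf socket, rpc.py path, and bdev name as this job:

  #!/usr/bin/env bash
  # Sketch of the transient-error check traced above. Assumes a bdevperf
  # instance is listening on /var/tmp/bperf.sock and was configured with
  # bdev_nvme_set_options --nvme-error-stat so the counters are populated.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  get_transient_errcount() {
      # bdev_get_iostat exposes the NVMe error statistics kept by the
      # bdev_nvme driver; jq digs out the transient-transport-error count.
      "$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b "$1" |
          jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error'
  }

  errcount=$(get_transient_errcount nvme0n1)
  # Fail unless at least one injected digest error was accounted
  # (the run above counted 363 of them).
  (( errcount > 0 ))

With the check satisfied, the harness kills the bperf process and begins setting up the next pattern, randwrite with 4 KiB blocks at queue depth 128.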
12:58:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 574137 /var/tmp/bperf.sock
12:58:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
12:58:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 574137 ']'
12:58:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
12:58:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
12:58:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
12:58:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
12:58:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:54.875 [2024-12-16 12:58:20.933022] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:36:54.875 [2024-12-16 12:58:20.933070] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid574137 ]
00:36:55.135 [2024-12-16 12:58:21.001638] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:55.135 [2024-12-16 12:58:21.041147] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:36:55.135 12:58:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:36:55.135 12:58:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:36:55.135 12:58:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:55.135 12:58:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:55.394 12:58:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:36:55.394 12:58:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:55.394 12:58:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:55.394 12:58:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:55.394 12:58:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:55.394 12:58:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:55.654 nvme0n1
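At this point the harness has repeated its standard bperf bring-up for the randwrite pattern: start bdevperf against a private RPC socket, enable NVMe error statistics with unlimited bdev retries, clear any leftover crc32c injection (the rpc_cmd accel_error_inject_error -o crc32c -t disable call in the trace, which goes to the harness's default RPC socket rather than the bperf one and is therefore left out below), and attach the TCP controller with data digest turned on. A condensed sketch of those steps under this job's paths; the socket-wait loop is a stand-in for the harness's waitforlisten helper, not its actual implementation:

  #!/usr/bin/env bash
  # Sketch only: binaries, socket, and target address taken from the trace.
  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sock=/var/tmp/bperf.sock

  # Start bdevperf pinned to core 1 (-m 2), idle until perform_tests (-z),
  # with its own RPC socket so it does not collide with the target app.
  "$spdk"/build/examples/bdevperf -m 2 -r "$sock" -w randwrite -o 4096 -t 2 -q 128 -z &

  # Wait for the RPC socket to appear (waitforlisten in the harness).
  while [[ ! -S $sock ]]; do sleep 0.1; done

  # Keep NVMe error counters and retry failed I/O forever, so injected
  # digest errors are counted and retried instead of failing the run.
  "$spdk"/scripts/rpc.py -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Attach the target subsystem with TCP data digest enabled (--ddgst);
  # this produces the nvme0n1 bdev the job writes to.
  "$spdk"/scripts/rpc.py -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

With the controller attached, the next trace lines arm the actual fault injection and start the timed run.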
00:36:55.654 12:58:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:36:55.654 12:58:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:55.654 12:58:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:55.915 12:58:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:55.915 12:58:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:36:55.915 12:58:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:36:55.915 Running I/O for 2 seconds...
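The accel_error_inject_error call with -t corrupt (and the -i 256 interval argument, taken verbatim from the trace) makes the accel layer periodically return a corrupted crc32c result, so a fraction of the TCP data digests computed or verified for this connection are wrong. Each affected command then appears below as a three-line group: the digest mismatch report from tcp.c, the WRITE command print, and a completion with status (00/22), which reads as status code type 0h (generic command status) and status code 22h, Transient Transport Error; the host treats that status as retryable, and each occurrence increments the counter checked at the end of the run. As a side note, a quick way to tally these from a captured console log (the bperf.log filename here is hypothetical):

  # Count retryable digest failures in a saved log; the filename is
  # only an example, substitute your captured console output.
  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bperf.log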
00:36:55.915 [2024-12-16 12:58:21.816670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198f9f68
00:36:55.915 [2024-12-16 12:58:21.817534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:55.915 [2024-12-16 12:58:21.817560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001b p:0 m:0 dnr:0
[... the same three-line group (Data digest error on tqpair 0x21ef460 with a varying pdu, WRITE command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for every affected write, timestamps 12:58:21.826471 through 12:58:22.038965; only the timestamps, pdu, cid, lba, and sqhd values differ ...]
00:36:56.176 [2024-12-16 12:58:22.046049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198ef6a8
00:36:56.176 [2024-12-16 12:58:22.047500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:56.176 [2024-12-16 12:58:22.047517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT
ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:56.176 [2024-12-16 12:58:22.055429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198ecc78 00:36:56.176 [2024-12-16 12:58:22.056448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:10307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.176 [2024-12-16 12:58:22.056465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:36:56.176 [2024-12-16 12:58:22.064739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e2c28 00:36:56.176 [2024-12-16 12:58:22.065820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:19052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.176 [2024-12-16 12:58:22.065838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:36:56.176 [2024-12-16 12:58:22.073710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e49b0 00:36:56.177 [2024-12-16 12:58:22.074804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.177 [2024-12-16 12:58:22.074822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:36:56.177 [2024-12-16 12:58:22.083219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e3060 00:36:56.177 [2024-12-16 12:58:22.083823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:8681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.177 [2024-12-16 12:58:22.083841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:36:56.177 [2024-12-16 12:58:22.094397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198ec840 00:36:56.177 [2024-12-16 12:58:22.095970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:4211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.177 [2024-12-16 12:58:22.095987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:56.177 [2024-12-16 12:58:22.101189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198ed920 00:36:56.177 [2024-12-16 12:58:22.102006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:3844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.177 [2024-12-16 12:58:22.102023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:36:56.177 [2024-12-16 12:58:22.112520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198ff3c8 00:36:56.177 [2024-12-16 12:58:22.113831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:22179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.177 [2024-12-16 12:58:22.113852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:36:56.177 [2024-12-16 12:58:22.121188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198ea248 00:36:56.177 [2024-12-16 12:58:22.122353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:21421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.177 [2024-12-16 12:58:22.122371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:56.177 [2024-12-16 12:58:22.130383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198ebfd0 00:36:56.177 [2024-12-16 12:58:22.131383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.177 [2024-12-16 12:58:22.131401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:36:56.177 [2024-12-16 12:58:22.139051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198fc560 00:36:56.177 [2024-12-16 12:58:22.139835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.177 [2024-12-16 12:58:22.139853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:56.177 [2024-12-16 12:58:22.148728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198f5378 00:36:56.177 [2024-12-16 12:58:22.149814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:17054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.177 [2024-12-16 12:58:22.149832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:36:56.177 [2024-12-16 12:58:22.159793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198fe720 00:36:56.177 [2024-12-16 12:58:22.161403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.177 [2024-12-16 12:58:22.161420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:56.177 [2024-12-16 12:58:22.166391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198f4298 00:36:56.177 [2024-12-16 12:58:22.167289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:20815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.177 [2024-12-16 12:58:22.167307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.177 [2024-12-16 12:58:22.177503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198f9b30 00:36:56.177 [2024-12-16 12:58:22.178866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:6069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.177 [2024-12-16 12:58:22.178883] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:56.177 [2024-12-16 12:58:22.184205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198ddc00 00:36:56.177 [2024-12-16 12:58:22.184871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:7058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.177 [2024-12-16 12:58:22.184888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:36:56.177 [2024-12-16 12:58:22.193767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198f4b08 00:36:56.177 [2024-12-16 12:58:22.194551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:25215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.177 [2024-12-16 12:58:22.194568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:36:56.177 [2024-12-16 12:58:22.204766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e95a0 00:36:56.177 [2024-12-16 12:58:22.205924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:6779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.177 [2024-12-16 12:58:22.205941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:56.177 [2024-12-16 12:58:22.214355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198f31b8 00:36:56.177 [2024-12-16 12:58:22.215772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.177 [2024-12-16 12:58:22.215790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:36:56.177 [2024-12-16 12:58:22.221119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198dfdc0 00:36:56.177 [2024-12-16 12:58:22.221795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:11438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.177 [2024-12-16 12:58:22.221812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:36:56.177 [2024-12-16 12:58:22.232137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198f57b0 00:36:56.177 [2024-12-16 12:58:22.233320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.177 [2024-12-16 12:58:22.233338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:36:56.437 [2024-12-16 12:58:22.241879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198f92c0 00:36:56.437 [2024-12-16 12:58:22.243222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:21320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.437 [2024-12-16 
12:58:22.243240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:36:56.437 [2024-12-16 12:58:22.250369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198fe2e8 00:36:56.437 [2024-12-16 12:58:22.251628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:17176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.437 [2024-12-16 12:58:22.251646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:56.437 [2024-12-16 12:58:22.258224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198ec408 00:36:56.437 [2024-12-16 12:58:22.258911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:7635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.437 [2024-12-16 12:58:22.258928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:36:56.437 [2024-12-16 12:58:22.267753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e73e0 00:36:56.437 [2024-12-16 12:58:22.268605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:23987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.437 [2024-12-16 12:58:22.268622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:36:56.437 [2024-12-16 12:58:22.276966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198ec840 00:36:56.437 [2024-12-16 12:58:22.277788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.437 [2024-12-16 12:58:22.277805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:36:56.438 [2024-12-16 12:58:22.286310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198dfdc0 00:36:56.438 [2024-12-16 12:58:22.287126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.438 [2024-12-16 12:58:22.287160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:36:56.438 [2024-12-16 12:58:22.295373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198fc560 00:36:56.438 [2024-12-16 12:58:22.296356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:3472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.438 [2024-12-16 12:58:22.296384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:36:56.438 [2024-12-16 12:58:22.304691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e3498 00:36:56.438 [2024-12-16 12:58:22.305175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:14885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:36:56.438 [2024-12-16 12:58:22.305192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:36:56.438 [2024-12-16 12:58:22.314933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e0630 00:36:56.438 [2024-12-16 12:58:22.316148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.438 [2024-12-16 12:58:22.316166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:36:56.438 [2024-12-16 12:58:22.322555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e27f0 00:36:56.438 [2024-12-16 12:58:22.323193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:11530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.438 [2024-12-16 12:58:22.323211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:36:56.438 [2024-12-16 12:58:22.331691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198f96f8 00:36:56.438 [2024-12-16 12:58:22.332403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:15141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.438 [2024-12-16 12:58:22.332420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:36:56.438 [2024-12-16 12:58:22.342318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198fc128 00:36:56.438 [2024-12-16 12:58:22.343192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:24550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.438 [2024-12-16 12:58:22.343210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:56.438 [2024-12-16 12:58:22.351121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198f5378 00:36:56.438 [2024-12-16 12:58:22.352471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:19615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.438 [2024-12-16 12:58:22.352491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:56.438 [2024-12-16 12:58:22.359009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198ed920 00:36:56.438 [2024-12-16 12:58:22.359651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:19871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.438 [2024-12-16 12:58:22.359668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:36:56.438 [2024-12-16 12:58:22.368093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198fcdd0 00:36:56.438 [2024-12-16 12:58:22.368748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:18561 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:36:56.438 [2024-12-16 12:58:22.368766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:36:56.438 [2024-12-16 12:58:22.378912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198fcdd0 00:36:56.438 [2024-12-16 12:58:22.380137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:3904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.438 [2024-12-16 12:58:22.380155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:36:56.438 [2024-12-16 12:58:22.387316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198fa7d8 00:36:56.438 [2024-12-16 12:58:22.388371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.438 [2024-12-16 12:58:22.388388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:36:56.438 [2024-12-16 12:58:22.396444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e6b70 00:36:56.438 [2024-12-16 12:58:22.397307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:6247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.438 [2024-12-16 12:58:22.397324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:36:56.438 [2024-12-16 12:58:22.405885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198fe2e8 00:36:56.438 [2024-12-16 12:58:22.406833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.438 [2024-12-16 12:58:22.406851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:56.438 [2024-12-16 12:58:22.414975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e4140 00:36:56.438 [2024-12-16 12:58:22.416092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:3761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.438 [2024-12-16 12:58:22.416109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:36:56.438 [2024-12-16 12:58:22.426262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e6300 00:36:56.438 [2024-12-16 12:58:22.427852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.438 [2024-12-16 12:58:22.427869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:56.438 [2024-12-16 12:58:22.432736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198f0bc0 00:36:56.438 [2024-12-16 12:58:22.433477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 
lba:2099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.438 [2024-12-16 12:58:22.433495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:36:56.438 [2024-12-16 12:58:22.441365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e6b70 00:36:56.438 [2024-12-16 12:58:22.442082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:14268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.438 [2024-12-16 12:58:22.442098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:36:56.438 [2024-12-16 12:58:22.450619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198eb760 00:36:56.438 [2024-12-16 12:58:22.451379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.438 [2024-12-16 12:58:22.451397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:36:56.438 [2024-12-16 12:58:22.459498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198fe720 00:36:56.438 [2024-12-16 12:58:22.460242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.438 [2024-12-16 12:58:22.460260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:36:56.438 [2024-12-16 12:58:22.469665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198ff3c8 00:36:56.438 [2024-12-16 12:58:22.470424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:20234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.438 [2024-12-16 12:58:22.470441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:36:56.438 [2024-12-16 12:58:22.480854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e2c28 00:36:56.438 [2024-12-16 12:58:22.482392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.438 [2024-12-16 12:58:22.482409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:36:56.438 [2024-12-16 12:58:22.487267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198fb048 00:36:56.438 [2024-12-16 12:58:22.488027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:11955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.438 [2024-12-16 12:58:22.488045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:36:56.438 [2024-12-16 12:58:22.497717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198f6cc8 00:36:56.438 [2024-12-16 12:58:22.498921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:107 nsid:1 lba:10894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.438 [2024-12-16 12:58:22.498939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:36:56.699 [2024-12-16 12:58:22.506356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e5ec8 00:36:56.699 [2024-12-16 12:58:22.507280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:24604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.699 [2024-12-16 12:58:22.507298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:36:56.699 [2024-12-16 12:58:22.515519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198ed0b0 00:36:56.699 [2024-12-16 12:58:22.516400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:15484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.699 [2024-12-16 12:58:22.516417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:36:56.699 [2024-12-16 12:58:22.525074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198f0bc0 00:36:56.699 [2024-12-16 12:58:22.526173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:5418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.699 [2024-12-16 12:58:22.526190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:36:56.699 [2024-12-16 12:58:22.533476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e12d8 00:36:56.699 [2024-12-16 12:58:22.534389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.699 [2024-12-16 12:58:22.534407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:56.699 [2024-12-16 12:58:22.542585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e0a68 00:36:56.699 [2024-12-16 12:58:22.543352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:22992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.699 [2024-12-16 12:58:22.543370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:56.699 [2024-12-16 12:58:22.554291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198fcdd0 00:36:56.699 [2024-12-16 12:58:22.555914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:21460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.699 [2024-12-16 12:58:22.555931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:56.699 [2024-12-16 12:58:22.560764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198fb8b8 00:36:56.699 [2024-12-16 12:58:22.561533] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:20358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.699 [2024-12-16 12:58:22.561550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:36:56.699 [2024-12-16 12:58:22.570437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198feb58 00:36:56.699 [2024-12-16 12:58:22.571348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.699 [2024-12-16 12:58:22.571367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:56.699 [2024-12-16 12:58:22.580577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198f3e60 00:36:56.699 [2024-12-16 12:58:22.581783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.699 [2024-12-16 12:58:22.581800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:56.699 [2024-12-16 12:58:22.589632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198feb58 00:36:56.699 [2024-12-16 12:58:22.590404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.699 [2024-12-16 12:58:22.590425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.699 [2024-12-16 12:58:22.598521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198eaab8 00:36:56.699 [2024-12-16 12:58:22.599841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.699 [2024-12-16 12:58:22.599860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:56.699 [2024-12-16 12:58:22.606440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e5220 00:36:56.699 [2024-12-16 12:58:22.607213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:20816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.699 [2024-12-16 12:58:22.607230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:56.699 [2024-12-16 12:58:22.617732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e38d0 00:36:56.699 [2024-12-16 12:58:22.618991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:15508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.699 [2024-12-16 12:58:22.619009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:36:56.699 [2024-12-16 12:58:22.627361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198f0350 00:36:56.699 [2024-12-16 
12:58:22.628741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.699 [2024-12-16 12:58:22.628769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.699 [2024-12-16 12:58:22.634171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e38d0 00:36:56.699 [2024-12-16 12:58:22.634801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:3015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.699 [2024-12-16 12:58:22.634818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:36:56.699 [2024-12-16 12:58:22.643405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e6738 00:36:56.699 [2024-12-16 12:58:22.644055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.699 [2024-12-16 12:58:22.644073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:36:56.699 [2024-12-16 12:58:22.654144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198f2948 00:36:56.699 [2024-12-16 12:58:22.655171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.699 [2024-12-16 12:58:22.655189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.699 [2024-12-16 12:58:22.663257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198ff3c8 00:36:56.699 [2024-12-16 12:58:22.664390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:16001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.699 [2024-12-16 12:58:22.664408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:36:56.699 [2024-12-16 12:58:22.672763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198f4298 00:36:56.699 [2024-12-16 12:58:22.674047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.699 [2024-12-16 12:58:22.674064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:36:56.699 [2024-12-16 12:58:22.682257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198f8a50 00:36:56.699 [2024-12-16 12:58:22.683589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.699 [2024-12-16 12:58:22.683607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:36:56.699 [2024-12-16 12:58:22.691788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198fe2e8 
00:36:56.699 [2024-12-16 12:58:22.693309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:11828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.699 [2024-12-16 12:58:22.693326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:36:56.699 [2024-12-16 12:58:22.698302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198feb58 00:36:56.699 [2024-12-16 12:58:22.698979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:12902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.699 [2024-12-16 12:58:22.698997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:36:56.700 [2024-12-16 12:58:22.709965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e1f80 00:36:56.700 [2024-12-16 12:58:22.711489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:7441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.700 [2024-12-16 12:58:22.711507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:56.700 [2024-12-16 12:58:22.716705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198f46d0 00:36:56.700 [2024-12-16 12:58:22.717512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.700 [2024-12-16 12:58:22.717530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:36:56.700 [2024-12-16 12:58:22.727874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e95a0 00:36:56.700 [2024-12-16 12:58:22.729130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.700 [2024-12-16 12:58:22.729164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:36:56.700 [2024-12-16 12:58:22.736162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198fc560 00:36:56.700 [2024-12-16 12:58:22.737418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:12605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.700 [2024-12-16 12:58:22.737437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:56.700 [2024-12-16 12:58:22.744007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198f2510 00:36:56.700 [2024-12-16 12:58:22.744582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:20461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.700 [2024-12-16 12:58:22.744600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:36:56.700 [2024-12-16 12:58:22.753907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) 
with pdu=0x2000198f2510 00:36:56.700 [2024-12-16 12:58:22.754493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:23802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.700 [2024-12-16 12:58:22.754511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:36:56.960 [2024-12-16 12:58:22.765426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e49b0 00:36:56.960 [2024-12-16 12:58:22.766787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.960 [2024-12-16 12:58:22.766805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:36:56.960 [2024-12-16 12:58:22.772160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198f8a50 00:36:56.960 [2024-12-16 12:58:22.772758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:10886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.960 [2024-12-16 12:58:22.772776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:36:56.960 [2024-12-16 12:58:22.781794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198f8a50 00:36:56.960 [2024-12-16 12:58:22.782381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:21505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.960 [2024-12-16 12:58:22.782399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:36:56.960 [2024-12-16 12:58:22.791185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198f2510 00:36:56.960 [2024-12-16 12:58:22.791988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:13654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.960 [2024-12-16 12:58:22.792007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:36:56.960 [2024-12-16 12:58:22.800363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e5a90 00:36:56.960 [2024-12-16 12:58:22.801077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:11585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.960 [2024-12-16 12:58:22.801095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:36:56.960 27686.00 IOPS, 108.15 MiB/s [2024-12-16T11:58:23.027Z] [2024-12-16 12:58:22.811036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e4578 00:36:56.960 [2024-12-16 12:58:22.812018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:19566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.960 [2024-12-16 12:58:22.812037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:36:56.960 [2024-12-16 12:58:22.820828] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e4578 00:36:56.961 [2024-12-16 12:58:22.821791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:6106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.961 [2024-12-16 12:58:22.821809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:56.961 [2024-12-16 12:58:22.830526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198f7970 00:36:56.961 [2024-12-16 12:58:22.831833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.961 [2024-12-16 12:58:22.831855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:56.961 [2024-12-16 12:58:22.839389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198f57b0 00:36:56.961 [2024-12-16 12:58:22.840375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:11379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.961 [2024-12-16 12:58:22.840393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:56.961 [2024-12-16 12:58:22.848678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e0ea0 00:36:56.961 [2024-12-16 12:58:22.849523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:9688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.961 [2024-12-16 12:58:22.849541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:36:56.961 [2024-12-16 12:58:22.858045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198f20d8 00:36:56.961 [2024-12-16 12:58:22.858890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.961 [2024-12-16 12:58:22.858908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:36:56.961 [2024-12-16 12:58:22.866782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e7818 00:36:56.961 [2024-12-16 12:58:22.867590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:23382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.961 [2024-12-16 12:58:22.867608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:36:56.961 [2024-12-16 12:58:22.875406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e0ea0 00:36:56.961 [2024-12-16 12:58:22.876018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.961 [2024-12-16 12:58:22.876035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:36:56.961 
[2024-12-16 12:58:22.885141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198fe2e8 00:36:56.961 [2024-12-16 12:58:22.885864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.961 [2024-12-16 12:58:22.885882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:36:56.961 [2024-12-16 12:58:22.894706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198f6458 00:36:56.961 [2024-12-16 12:58:22.895567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:17911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.961 [2024-12-16 12:58:22.895585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:56.961 [2024-12-16 12:58:22.903888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e3498 00:36:56.961 [2024-12-16 12:58:22.904700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:23449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.961 [2024-12-16 12:58:22.904719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:36:56.961 [2024-12-16 12:58:22.913381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198ea680 00:36:56.961 [2024-12-16 12:58:22.914536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.961 [2024-12-16 12:58:22.914554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:36:56.961 [2024-12-16 12:58:22.922682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e0a68 00:36:56.961 [2024-12-16 12:58:22.923401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.961 [2024-12-16 12:58:22.923419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:36:56.961 [2024-12-16 12:58:22.931292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198fc560 00:36:56.961 [2024-12-16 12:58:22.932522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:1090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.961 [2024-12-16 12:58:22.932540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:36:56.961 [2024-12-16 12:58:22.941360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198ee5c8 00:36:56.961 [2024-12-16 12:58:22.942446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.961 [2024-12-16 12:58:22.942465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0077 p:0 m:0 
dnr:0 00:36:56.961 [2024-12-16 12:58:22.951048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198f2948 00:36:56.961 [2024-12-16 12:58:22.952481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:11686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.961 [2024-12-16 12:58:22.952499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:36:56.961 [2024-12-16 12:58:22.960623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198f3e60 00:36:56.961 [2024-12-16 12:58:22.962177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:9252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.961 [2024-12-16 12:58:22.962194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:36:56.961 [2024-12-16 12:58:22.967208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198de8a8 00:36:56.961 [2024-12-16 12:58:22.967957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.961 [2024-12-16 12:58:22.967974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:36:56.961 [2024-12-16 12:58:22.978029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198de8a8 00:36:56.961 [2024-12-16 12:58:22.979242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.961 [2024-12-16 12:58:22.979260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:36:56.961 [2024-12-16 12:58:22.985905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e9168 00:36:56.961 [2024-12-16 12:58:22.986641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.961 [2024-12-16 12:58:22.986659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:36:56.961 [2024-12-16 12:58:22.995317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198f7970 00:36:56.961 [2024-12-16 12:58:22.996253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:8204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.961 [2024-12-16 12:58:22.996271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:56.961 [2024-12-16 12:58:23.003777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198f4f40 00:36:56.961 [2024-12-16 12:58:23.004513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:3531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.961 [2024-12-16 12:58:23.004531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 
sqhd:0037 p:0 m:0 dnr:0 00:36:56.961 [2024-12-16 12:58:23.013153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e49b0 00:36:56.961 [2024-12-16 12:58:23.013860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.961 [2024-12-16 12:58:23.013878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:36:56.961 [2024-12-16 12:58:23.022812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198f0ff8 00:36:56.961 [2024-12-16 12:58:23.023758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:22768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.961 [2024-12-16 12:58:23.023776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:36:57.222 [2024-12-16 12:58:23.032541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198f2948 00:36:57.222 [2024-12-16 12:58:23.033712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:6689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.222 [2024-12-16 12:58:23.033730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:36:57.222 [2024-12-16 12:58:23.041730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198f6020 00:36:57.222 [2024-12-16 12:58:23.042451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:48 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.222 [2024-12-16 12:58:23.042469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:36:57.222 [2024-12-16 12:58:23.050065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198f6890 00:36:57.222 [2024-12-16 12:58:23.050893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:16651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.222 [2024-12-16 12:58:23.050912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:36:57.222 [2024-12-16 12:58:23.060837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e23b8 00:36:57.222 [2024-12-16 12:58:23.062258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.222 [2024-12-16 12:58:23.062276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:36:57.222 [2024-12-16 12:58:23.070149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198fc128 00:36:57.222 [2024-12-16 12:58:23.071473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.222 [2024-12-16 12:58:23.071494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:57.222 [2024-12-16 12:58:23.077454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198f96f8 00:36:57.222 [2024-12-16 12:58:23.078361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:3434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.222 [2024-12-16 12:58:23.078378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:36:57.222 [2024-12-16 12:58:23.086347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198f7538 00:36:57.222 [2024-12-16 12:58:23.086805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.222 [2024-12-16 12:58:23.086823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:36:57.222 [2024-12-16 12:58:23.097000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198f9f68 00:36:57.222 [2024-12-16 12:58:23.098155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.222 [2024-12-16 12:58:23.098174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:57.222 [2024-12-16 12:58:23.105816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198ebb98 00:36:57.222 [2024-12-16 12:58:23.106884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.222 [2024-12-16 12:58:23.106903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:36:57.222 [2024-12-16 12:58:23.116055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198f2948 00:36:57.222 [2024-12-16 12:58:23.117385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.222 [2024-12-16 12:58:23.117402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:57.222 [2024-12-16 12:58:23.123579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e3498 00:36:57.222 [2024-12-16 12:58:23.124609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.222 [2024-12-16 12:58:23.124627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:36:57.222 [2024-12-16 12:58:23.132784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e0a68 00:36:57.222 [2024-12-16 12:58:23.133362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:14849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.222 [2024-12-16 12:58:23.133380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:36:57.222 [2024-12-16 12:58:23.142339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198f1430 00:36:57.222 [2024-12-16 12:58:23.143012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:19167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.222 [2024-12-16 12:58:23.143029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:57.222 [2024-12-16 12:58:23.150898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e6b70 00:36:57.222 [2024-12-16 12:58:23.152117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:11963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.222 [2024-12-16 12:58:23.152135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:36:57.222 [2024-12-16 12:58:23.158733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198fe720 00:36:57.222 [2024-12-16 12:58:23.159404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:10005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.222 [2024-12-16 12:58:23.159422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:36:57.222 [2024-12-16 12:58:23.170043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e2c28 00:36:57.222 [2024-12-16 12:58:23.171237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.222 [2024-12-16 12:58:23.171254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:36:57.222 [2024-12-16 12:58:23.178604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198de038 00:36:57.222 [2024-12-16 12:58:23.179456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.222 [2024-12-16 12:58:23.179474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:36:57.222 [2024-12-16 12:58:23.189411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e73e0 00:36:57.222 [2024-12-16 12:58:23.190843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.222 [2024-12-16 12:58:23.190860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:36:57.222 [2024-12-16 12:58:23.196204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198dfdc0 00:36:57.222 [2024-12-16 12:58:23.196865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.222 [2024-12-16 12:58:23.196883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:36:57.222 [2024-12-16 12:58:23.205663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198f3e60 00:36:57.222 [2024-12-16 12:58:23.206469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.222 [2024-12-16 12:58:23.206486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:36:57.222 [2024-12-16 12:58:23.217110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198f8618 00:36:57.222 [2024-12-16 12:58:23.218439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.222 [2024-12-16 12:58:23.218456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:36:57.222 [2024-12-16 12:58:23.225594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e38d0 00:36:57.222 [2024-12-16 12:58:23.226431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:15449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.222 [2024-12-16 12:58:23.226448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:36:57.222 [2024-12-16 12:58:23.233945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198f8618 00:36:57.222 [2024-12-16 12:58:23.234871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:9440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.222 [2024-12-16 12:58:23.234888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:36:57.222 [2024-12-16 12:58:23.245223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e7c50 00:36:57.222 [2024-12-16 12:58:23.246614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:23675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.222 [2024-12-16 12:58:23.246632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:36:57.223 [2024-12-16 12:58:23.254440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198f2510 00:36:57.223 [2024-12-16 12:58:23.255885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:6251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.223 [2024-12-16 12:58:23.255902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:57.223 [2024-12-16 12:58:23.262412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198feb58 00:36:57.223 [2024-12-16 12:58:23.263305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:3624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.223 [2024-12-16 12:58:23.263322] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:36:57.223 [2024-12-16 12:58:23.270797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e3d08 00:36:57.223 [2024-12-16 12:58:23.271776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:10017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.223 [2024-12-16 12:58:23.271794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:36:57.223 [2024-12-16 12:58:23.279988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198fc998 00:36:57.223 [2024-12-16 12:58:23.280521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:18191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.223 [2024-12-16 12:58:23.280539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:36:57.484 [2024-12-16 12:58:23.291806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198fd640 00:36:57.484 [2024-12-16 12:58:23.293378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.484 [2024-12-16 12:58:23.293396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:36:57.484 [2024-12-16 12:58:23.298219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e5220 00:36:57.484 [2024-12-16 12:58:23.298946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.484 [2024-12-16 12:58:23.298963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:36:57.484 [2024-12-16 12:58:23.308909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e6738 00:36:57.484 [2024-12-16 12:58:23.309906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.484 [2024-12-16 12:58:23.309926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:57.484 [2024-12-16 12:58:23.317274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e6738 00:36:57.484 [2024-12-16 12:58:23.318339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:9919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.484 [2024-12-16 12:58:23.318356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:36:57.484 [2024-12-16 12:58:23.326404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e5a90 00:36:57.484 [2024-12-16 12:58:23.327035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:3432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.484 [2024-12-16 
12:58:23.327052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:57.484 [2024-12-16 12:58:23.335961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198ea248 00:36:57.484 [2024-12-16 12:58:23.336739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.484 [2024-12-16 12:58:23.336757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.484 [2024-12-16 12:58:23.344726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198fac10 00:36:57.484 [2024-12-16 12:58:23.346045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.484 [2024-12-16 12:58:23.346063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:57.484 [2024-12-16 12:58:23.354140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198fdeb0 00:36:57.484 [2024-12-16 12:58:23.355145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:19979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.484 [2024-12-16 12:58:23.355163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:57.484 [2024-12-16 12:58:23.365070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198edd58 00:36:57.484 [2024-12-16 12:58:23.366640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:21125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.484 [2024-12-16 12:58:23.366656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:57.484 [2024-12-16 12:58:23.371614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198f92c0 00:36:57.484 [2024-12-16 12:58:23.372529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.484 [2024-12-16 12:58:23.372547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:36:57.484 [2024-12-16 12:58:23.382766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198ea248 00:36:57.484 [2024-12-16 12:58:23.384032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:20221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.484 [2024-12-16 12:58:23.384049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.484 [2024-12-16 12:58:23.391391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e3060 00:36:57.484 [2024-12-16 12:58:23.392551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:36:57.484 [2024-12-16 12:58:23.392569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:57.484 [2024-12-16 12:58:23.400060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198ebfd0 00:36:57.484 [2024-12-16 12:58:23.401055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:24763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.484 [2024-12-16 12:58:23.401073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:57.484 [2024-12-16 12:58:23.409552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198eff18 00:36:57.484 [2024-12-16 12:58:23.410657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.484 [2024-12-16 12:58:23.410674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:36:57.484 [2024-12-16 12:58:23.419074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198ff3c8 00:36:57.484 [2024-12-16 12:58:23.420343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:6556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.484 [2024-12-16 12:58:23.420371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:57.484 [2024-12-16 12:58:23.428652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198f7970 00:36:57.484 [2024-12-16 12:58:23.430140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:7545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.484 [2024-12-16 12:58:23.430157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:36:57.484 [2024-12-16 12:58:23.435293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e9e10 00:36:57.484 [2024-12-16 12:58:23.436079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:14825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.484 [2024-12-16 12:58:23.436096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:36:57.484 [2024-12-16 12:58:23.445417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198f8a50 00:36:57.484 [2024-12-16 12:58:23.446220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:3614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.484 [2024-12-16 12:58:23.446237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:36:57.484 [2024-12-16 12:58:23.454496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e6738 00:36:57.484 [2024-12-16 12:58:23.455328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:6982 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:36:57.484 [2024-12-16 12:58:23.455346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:36:57.484 [2024-12-16 12:58:23.464815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198fb048 00:36:57.484 [2024-12-16 12:58:23.466171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:15506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.484 [2024-12-16 12:58:23.466188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:36:57.484 [2024-12-16 12:58:23.471503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198f20d8 00:36:57.484 [2024-12-16 12:58:23.472077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:6816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.484 [2024-12-16 12:58:23.472094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:36:57.484 [2024-12-16 12:58:23.480738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198f35f0 00:36:57.484 [2024-12-16 12:58:23.481369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:14517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.484 [2024-12-16 12:58:23.481386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:36:57.484 [2024-12-16 12:58:23.489587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198f92c0 00:36:57.484 [2024-12-16 12:58:23.490190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:6434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.484 [2024-12-16 12:58:23.490207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:36:57.484 [2024-12-16 12:58:23.499061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198eaef0 00:36:57.484 [2024-12-16 12:58:23.499811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:6455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.484 [2024-12-16 12:58:23.499829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:36:57.484 [2024-12-16 12:58:23.508567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198ea680 00:36:57.484 [2024-12-16 12:58:23.509461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.484 [2024-12-16 12:58:23.509479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:36:57.484 [2024-12-16 12:58:23.518153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198f96f8 00:36:57.484 [2024-12-16 12:58:23.519122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 
lba:13067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.484 [2024-12-16 12:58:23.519140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:36:57.484 [2024-12-16 12:58:23.527630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198ef270 00:36:57.484 [2024-12-16 12:58:23.528710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:4103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.484 [2024-12-16 12:58:23.528728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:36:57.485 [2024-12-16 12:58:23.537101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198ee190 00:36:57.485 [2024-12-16 12:58:23.538320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:14297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.485 [2024-12-16 12:58:23.538338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:36:57.485 [2024-12-16 12:58:23.546698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e3060 00:36:57.485 [2024-12-16 12:58:23.548071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:7764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.485 [2024-12-16 12:58:23.548092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:36:57.744 [2024-12-16 12:58:23.555329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198df988 00:36:57.744 [2024-12-16 12:58:23.556333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.744 [2024-12-16 12:58:23.556351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:36:57.745 [2024-12-16 12:58:23.564323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198ff3c8 00:36:57.745 [2024-12-16 12:58:23.565336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:21116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.745 [2024-12-16 12:58:23.565353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:36:57.745 [2024-12-16 12:58:23.573408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e7818 00:36:57.745 [2024-12-16 12:58:23.574406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:9246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.745 [2024-12-16 12:58:23.574424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:36:57.745 [2024-12-16 12:58:23.582522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198fb048 00:36:57.745 [2024-12-16 12:58:23.583552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:49 nsid:1 lba:21984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.745 [2024-12-16 12:58:23.583571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:36:57.745 [2024-12-16 12:58:23.590964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198fa3a0 00:36:57.745 [2024-12-16 12:58:23.591965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.745 [2024-12-16 12:58:23.591983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:36:57.745 [2024-12-16 12:58:23.600876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e3498 00:36:57.745 [2024-12-16 12:58:23.602000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:10938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.745 [2024-12-16 12:58:23.602019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.745 [2024-12-16 12:58:23.610507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198ef6a8 00:36:57.745 [2024-12-16 12:58:23.611753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.745 [2024-12-16 12:58:23.611770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:36:57.745 [2024-12-16 12:58:23.620174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e7818 00:36:57.745 [2024-12-16 12:58:23.621501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:24082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.745 [2024-12-16 12:58:23.621518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.745 [2024-12-16 12:58:23.628628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198fda78 00:36:57.745 [2024-12-16 12:58:23.629631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.745 [2024-12-16 12:58:23.629652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:57.745 [2024-12-16 12:58:23.637625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198f46d0 00:36:57.745 [2024-12-16 12:58:23.638624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.745 [2024-12-16 12:58:23.638643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:57.745 [2024-12-16 12:58:23.646707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198f4f40 00:36:57.745 [2024-12-16 12:58:23.647726] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.745 [2024-12-16 12:58:23.647744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:57.745 [2024-12-16 12:58:23.655853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198f9b30 00:36:57.745 [2024-12-16 12:58:23.656839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.745 [2024-12-16 12:58:23.656856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:57.745 [2024-12-16 12:58:23.664997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198de470 00:36:57.745 [2024-12-16 12:58:23.665975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:6286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.745 [2024-12-16 12:58:23.665992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:57.745 [2024-12-16 12:58:23.674098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e8088 00:36:57.745 [2024-12-16 12:58:23.675118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.745 [2024-12-16 12:58:23.675136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:57.745 [2024-12-16 12:58:23.683193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e73e0 00:36:57.745 [2024-12-16 12:58:23.684185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:22765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.745 [2024-12-16 12:58:23.684203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:57.745 [2024-12-16 12:58:23.692294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e3060 00:36:57.745 [2024-12-16 12:58:23.693296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.745 [2024-12-16 12:58:23.693313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:57.745 [2024-12-16 12:58:23.702557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e1f80 00:36:57.745 [2024-12-16 12:58:23.704030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.745 [2024-12-16 12:58:23.704047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:57.745 [2024-12-16 12:58:23.709014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198fda78 00:36:57.745 [2024-12-16 
12:58:23.709619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:13729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.745 [2024-12-16 12:58:23.709637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:36:57.745 [2024-12-16 12:58:23.718303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198ea680 00:36:57.745 [2024-12-16 12:58:23.718953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.745 [2024-12-16 12:58:23.718971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:57.745 [2024-12-16 12:58:23.727399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e1b48 00:36:57.745 [2024-12-16 12:58:23.728024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.745 [2024-12-16 12:58:23.728042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:57.745 [2024-12-16 12:58:23.736458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198ed0b0 00:36:57.745 [2024-12-16 12:58:23.737082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.745 [2024-12-16 12:58:23.737099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:57.745 [2024-12-16 12:58:23.745549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198ef6a8 00:36:57.745 [2024-12-16 12:58:23.746186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.745 [2024-12-16 12:58:23.746204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:57.745 [2024-12-16 12:58:23.754683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198fa7d8 00:36:57.745 [2024-12-16 12:58:23.755343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.745 [2024-12-16 12:58:23.755360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:57.745 [2024-12-16 12:58:23.763802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e99d8 00:36:57.745 [2024-12-16 12:58:23.764482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:9628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.745 [2024-12-16 12:58:23.764499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:57.745 [2024-12-16 12:58:23.772896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198fb8b8 
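Each injected CRC failure in the stream above follows the same three-entry pattern: tcp.c reports the data digest error on the qpair and PDU, nvme_qpair.c prints the WRITE that was in flight, and the completion comes back as COMMAND TRANSIENT TRANSPORT ERROR (00/22), the retryable status that the host-side retry logic is expected to absorb. A quick way to tally the failures offline from a captured copy of this output (a sketch only; build.log is a hypothetical filename standing in for wherever the log was saved):

  # Count the retryable digest failures recorded in a captured log.
  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' build.log
  # Break the failures down by the PDU address they were injected into.
  grep -o 'pdu=0x[0-9a-f]*' build.log | sort | uniq -c | sort -rn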
00:36:57.745 [2024-12-16 12:58:23.773530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:3244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:57.745 [2024-12-16 12:58:23.773547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:57.745 [2024-12-16 12:58:23.782064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198f96f8
00:36:57.745 [2024-12-16 12:58:23.782733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:24164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:57.745 [2024-12-16 12:58:23.782752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:57.745 [2024-12-16 12:58:23.791183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198e38d0
00:36:57.745 [2024-12-16 12:58:23.791843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:18797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:57.745 [2024-12-16 12:58:23.791861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:57.745 [2024-12-16 12:58:23.801383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef460) with pdu=0x2000198eee38
00:36:57.745 [2024-12-16 12:58:23.802481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:15557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:57.745 [2024-12-16 12:58:23.802499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:58.005 27747.50 IOPS, 108.39 MiB/s
00:36:58.005 Latency(us)
00:36:58.005 [2024-12-16T11:58:24.072Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:58.005 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:36:58.005 nvme0n1 : 2.00 27756.09 108.42 0.00 0.00 4607.33 1755.43 12670.29
00:36:58.005 [2024-12-16T11:58:24.072Z] ===================================================================================================================
00:36:58.005 [2024-12-16T11:58:24.072Z] Total : 27756.09 108.42 0.00 0.00 4607.33 1755.43 12670.29
00:36:58.005 {
00:36:58.005   "results": [
00:36:58.005     {
00:36:58.005       "job": "nvme0n1",
00:36:58.005       "core_mask": "0x2",
00:36:58.005       "workload": "randwrite",
00:36:58.005       "status": "finished",
00:36:58.005       "queue_depth": 128,
00:36:58.005       "io_size": 4096,
00:36:58.005       "runtime": 2.002768,
00:36:58.005       "iops": 27756.085577560658,
00:36:58.005       "mibps": 108.42220928734632,
00:36:58.005       "io_failed": 0,
00:36:58.005       "io_timeout": 0,
00:36:58.005       "avg_latency_us": 4607.328689386133,
00:36:58.005       "min_latency_us": 1755.4285714285713,
00:36:58.005       "max_latency_us": 12670.293333333333
00:36:58.005     }
00:36:58.005   ],
00:36:58.005   "core_count": 1
00:36:58.005 }
00:36:58.005 12:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
12:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
12:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:36:58.005 | .driver_specific
00:36:58.005 | .nvme_error
00:36:58.005 | .status_code
00:36:58.005 | .command_transient_transport_error'
00:36:58.005 12:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:36:58.006 12:58:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 217 > 0 ))
00:36:58.006 12:58:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 574137
00:36:58.006 12:58:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 574137 ']'
00:36:58.006 12:58:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 574137
00:36:58.006 12:58:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:36:58.006 12:58:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:36:58.006 12:58:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 574137
00:36:58.265 12:58:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:36:58.265 12:58:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:36:58.265 12:58:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 574137'
00:36:58.265 killing process with pid 574137
00:36:58.266 12:58:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 574137
00:36:58.266 Received shutdown signal, test time was about 2.000000 seconds
00:36:58.266
00:36:58.266 Latency(us)
00:36:58.266 [2024-12-16T11:58:24.333Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:58.266 [2024-12-16T11:58:24.333Z] ===================================================================================================================
00:36:58.266 [2024-12-16T11:58:24.333Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:36:58.266 12:58:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 574137
00:36:58.266 12:58:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:36:58.266 12:58:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:36:58.266 12:58:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:36:58.266 12:58:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:36:58.266 12:58:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:36:58.266 12:58:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=574647
00:36:58.266 12:58:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 574647 /var/tmp/bperf.sock
00:36:58.266 12:58:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:36:58.266 12:58:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 574647 ']'
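The get_transient_errcount helper traced above reduces to a single RPC plus a jq filter: bdev_get_iostat exposes the per-bdev NVMe error counters that bdev_nvme_set_options --nvme-error-stat enables, and the harness asserts the transient-transport count is positive, (( 217 > 0 )) in this run. A standalone sketch of the same check, assuming a bdevperf instance is still serving RPCs on /var/tmp/bperf.sock and exposing nvme0n1:

  # Sketch of the transient-error check; rpc.py path and socket as used in this run.
  errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # The test only passes when injected digest errors actually surfaced as transient errors.
  (( errcount > 0 )) && echo "transient transport errors counted: $errcount"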
00:36:58.266 12:58:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:36:58.266 12:58:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:36:58.266 12:58:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:36:58.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:36:58.266 12:58:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:36:58.266 12:58:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:58.266 [2024-12-16 12:58:24.312505] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:36:58.266 [2024-12-16 12:58:24.312553] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid574647 ]
00:36:58.266 I/O size of 131072 is greater than zero copy threshold (65536).
00:36:58.266 Zero copy mechanism will not be used.
00:36:58.526 [2024-12-16 12:58:24.381764] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:58.526 [2024-12-16 12:58:24.418100] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:36:58.526 12:58:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:36:58.526 12:58:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:36:58.526 12:58:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:58.526 12:58:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:58.785 12:58:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:36:58.785 12:58:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:58.785 12:58:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:58.785 12:58:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:58.785 12:58:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:58.785 12:58:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:59.045 nvme0n1
00:36:59.045 12:58:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:36:59.045 12:58:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- #
xtrace_disable 00:36:59.045 12:58:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:59.303 12:58:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:59.303 12:58:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:59.304 12:58:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:59.304 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:59.304 Zero copy mechanism will not be used. 00:36:59.304 Running I/O for 2 seconds... 00:36:59.304 [2024-12-16 12:58:25.207983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:36:59.304 [2024-12-16 12:58:25.208263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.304 [2024-12-16 12:58:25.208292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:59.304 [2024-12-16 12:58:25.214402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:36:59.304 [2024-12-16 12:58:25.214667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.304 [2024-12-16 12:58:25.214692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:59.304 [2024-12-16 12:58:25.221111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:36:59.304 [2024-12-16 12:58:25.221395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.304 [2024-12-16 12:58:25.221417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:59.304 [2024-12-16 12:58:25.227630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:36:59.304 [2024-12-16 12:58:25.227902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.304 [2024-12-16 12:58:25.227922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:59.304 [2024-12-16 12:58:25.234936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:36:59.304 [2024-12-16 12:58:25.235227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.304 [2024-12-16 12:58:25.235246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:59.304 [2024-12-16 12:58:25.242431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:36:59.304 [2024-12-16 12:58:25.242722] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.304 [2024-12-16 12:58:25.242743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:59.304 [2024-12-16 12:58:25.249547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:36:59.304 [2024-12-16 12:58:25.249804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.304 [2024-12-16 12:58:25.249824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:59.304 [2024-12-16 12:58:25.256785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:36:59.304 [2024-12-16 12:58:25.257074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.304 [2024-12-16 12:58:25.257093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:59.304 [2024-12-16 12:58:25.264750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:36:59.304 [2024-12-16 12:58:25.265017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.304 [2024-12-16 12:58:25.265037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:59.304 [2024-12-16 12:58:25.272101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:36:59.304 [2024-12-16 12:58:25.272379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.304 [2024-12-16 12:58:25.272399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:59.304 [2024-12-16 12:58:25.278545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:36:59.304 [2024-12-16 12:58:25.278830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.304 [2024-12-16 12:58:25.278851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:59.304 [2024-12-16 12:58:25.284008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:36:59.304 [2024-12-16 12:58:25.284273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.304 [2024-12-16 12:58:25.284293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:59.304 [2024-12-16 12:58:25.289726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:36:59.304 
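The 131072-byte pass now streaming past was assembled from the handful of RPCs traced before the run began: error accounting and unlimited retries switched on, a controller attached with the TCP data digest enabled, and the accel crc32c stage told to corrupt results. Condensed for reference (a sketch of the traced sequence, not a new script; socket, address, and NQN are the ones used above):

  # Condensed replay of the setup traced earlier in this run.
  RPC='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'
  $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # keep NVMe error counters, retry forever
  $RPC accel_error_inject_error -o crc32c -t disable                   # start from a clean injector state
  $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                   # attach with TCP data digest enabled
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 32             # inject crc32c corruption (-i 32, as traced)
  # bdevperf.py -s /var/tmp/bperf.sock perform_tests then drives the 2-second workload.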
[... the same cycle repeats for every remaining queued WRITE between 12:58:25.256 and 12:58:25.988: tcp.c:2233:data_crc32_calc_done reports "Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90", the WRITE (sqid:1 cid:15 nsid:1, len:32) is reprinted, and the command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) p:0 m:0 dnr:0; only the timestamps, the LBAs, and the sqhd value (cycling 0001/0021/0041/0061) vary. Elapsed time advances from 00:36:59.304 through 00:37:00.090 over the run ...]
[2024-12-16 12:58:25.958266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:00.090 [2024-12-16 12:58:25.962675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.090 [2024-12-16 12:58:25.962909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.090 [2024-12-16 12:58:25.962928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:00.090 [2024-12-16 12:58:25.968260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.090 [2024-12-16 12:58:25.968603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.090 [2024-12-16 12:58:25.968624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:00.090 [2024-12-16 12:58:25.974103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.090 [2024-12-16 12:58:25.974350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.090 [2024-12-16 12:58:25.974370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:00.090 [2024-12-16 12:58:25.978965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.090 [2024-12-16 12:58:25.979219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.090 [2024-12-16 12:58:25.979239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:00.090 [2024-12-16 12:58:25.983815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.090 [2024-12-16 12:58:25.984075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.090 [2024-12-16 12:58:25.984095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:00.090 [2024-12-16 12:58:25.988617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.090 [2024-12-16 12:58:25.988864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.090 [2024-12-16 12:58:25.988883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:00.090 [2024-12-16 12:58:25.993483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.090 [2024-12-16 12:58:25.993718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.090 [2024-12-16 12:58:25.993741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:00.090 [2024-12-16 12:58:25.998408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.090 [2024-12-16 12:58:25.998656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.090 [2024-12-16 12:58:25.998676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:00.090 [2024-12-16 12:58:26.003075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.090 [2024-12-16 12:58:26.003335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.090 [2024-12-16 12:58:26.003355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:00.090 [2024-12-16 12:58:26.007895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.090 [2024-12-16 12:58:26.008132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.090 [2024-12-16 12:58:26.008151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:00.090 [2024-12-16 12:58:26.012956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.090 [2024-12-16 12:58:26.013194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.090 [2024-12-16 12:58:26.013213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:00.090 [2024-12-16 12:58:26.017731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.090 [2024-12-16 12:58:26.017964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.091 [2024-12-16 12:58:26.017983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:00.091 [2024-12-16 12:58:26.022460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.091 [2024-12-16 12:58:26.022707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.091 [2024-12-16 12:58:26.022727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:00.091 [2024-12-16 12:58:26.027335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.091 [2024-12-16 12:58:26.027568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.091 [2024-12-16 12:58:26.027587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:00.091 [2024-12-16 12:58:26.032212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.091 [2024-12-16 12:58:26.032445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.091 [2024-12-16 12:58:26.032464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:00.091 [2024-12-16 12:58:26.037320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.091 [2024-12-16 12:58:26.037551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.091 [2024-12-16 12:58:26.037571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:00.091 [2024-12-16 12:58:26.042129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.091 [2024-12-16 12:58:26.042363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.091 [2024-12-16 12:58:26.042383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:00.091 [2024-12-16 12:58:26.046916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.091 [2024-12-16 12:58:26.047168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.091 [2024-12-16 12:58:26.047186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:00.091 [2024-12-16 12:58:26.051669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.091 [2024-12-16 12:58:26.051918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.091 [2024-12-16 12:58:26.051937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:00.091 [2024-12-16 12:58:26.056530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.091 [2024-12-16 12:58:26.056762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.091 [2024-12-16 12:58:26.056781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:00.091 [2024-12-16 12:58:26.061406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.091 [2024-12-16 12:58:26.061647] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.091 [2024-12-16 12:58:26.061666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:00.091 [2024-12-16 12:58:26.066498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.091 [2024-12-16 12:58:26.066730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.091 [2024-12-16 12:58:26.066749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:00.091 [2024-12-16 12:58:26.071414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.091 [2024-12-16 12:58:26.071666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.091 [2024-12-16 12:58:26.071685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:00.091 [2024-12-16 12:58:26.076319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.091 [2024-12-16 12:58:26.076575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.091 [2024-12-16 12:58:26.076597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:00.091 [2024-12-16 12:58:26.081194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.091 [2024-12-16 12:58:26.081427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.091 [2024-12-16 12:58:26.081447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:00.091 [2024-12-16 12:58:26.086254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.091 [2024-12-16 12:58:26.086488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.091 [2024-12-16 12:58:26.086507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:00.091 [2024-12-16 12:58:26.091109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.091 [2024-12-16 12:58:26.091370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.091 [2024-12-16 12:58:26.091390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:00.091 [2024-12-16 12:58:26.096101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.091 
[2024-12-16 12:58:26.096369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.091 [2024-12-16 12:58:26.096388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:00.091 [2024-12-16 12:58:26.101072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.091 [2024-12-16 12:58:26.101306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.091 [2024-12-16 12:58:26.101326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:00.091 [2024-12-16 12:58:26.105984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.091 [2024-12-16 12:58:26.106241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.091 [2024-12-16 12:58:26.106260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:00.091 [2024-12-16 12:58:26.110671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.091 [2024-12-16 12:58:26.110901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.091 [2024-12-16 12:58:26.110920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:00.091 [2024-12-16 12:58:26.115478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.091 [2024-12-16 12:58:26.115728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.091 [2024-12-16 12:58:26.115747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:00.091 [2024-12-16 12:58:26.120344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.091 [2024-12-16 12:58:26.120598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.091 [2024-12-16 12:58:26.120617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:00.091 [2024-12-16 12:58:26.125257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.091 [2024-12-16 12:58:26.125495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.091 [2024-12-16 12:58:26.125514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:00.091 [2024-12-16 12:58:26.130024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.091 [2024-12-16 12:58:26.130265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.091 [2024-12-16 12:58:26.130285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:00.091 [2024-12-16 12:58:26.135026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.091 [2024-12-16 12:58:26.135326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.091 [2024-12-16 12:58:26.135345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:00.091 [2024-12-16 12:58:26.141126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.091 [2024-12-16 12:58:26.141433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.091 [2024-12-16 12:58:26.141452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:00.091 [2024-12-16 12:58:26.146567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.091 [2024-12-16 12:58:26.146798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.091 [2024-12-16 12:58:26.146817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:00.091 [2024-12-16 12:58:26.152434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.091 [2024-12-16 12:58:26.152766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.091 [2024-12-16 12:58:26.152785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:00.352 [2024-12-16 12:58:26.158507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.352 [2024-12-16 12:58:26.158778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.352 [2024-12-16 12:58:26.158798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:00.352 [2024-12-16 12:58:26.164819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.352 [2024-12-16 12:58:26.165106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.353 [2024-12-16 12:58:26.165131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:00.353 [2024-12-16 12:58:26.171197] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.353 [2024-12-16 12:58:26.171450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.353 [2024-12-16 12:58:26.171470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:00.353 [2024-12-16 12:58:26.177369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.353 [2024-12-16 12:58:26.177707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.353 [2024-12-16 12:58:26.177726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:00.353 [2024-12-16 12:58:26.183895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.353 [2024-12-16 12:58:26.184013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.353 [2024-12-16 12:58:26.184031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:00.353 [2024-12-16 12:58:26.190756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.353 [2024-12-16 12:58:26.190858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.353 [2024-12-16 12:58:26.190875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:00.353 [2024-12-16 12:58:26.195901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.353 [2024-12-16 12:58:26.195969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.353 [2024-12-16 12:58:26.195987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:00.353 [2024-12-16 12:58:26.201333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.353 [2024-12-16 12:58:26.201384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.353 [2024-12-16 12:58:26.201402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:00.353 5707.00 IOPS, 713.38 MiB/s [2024-12-16T11:58:26.420Z] [2024-12-16 12:58:26.207814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.353 [2024-12-16 12:58:26.207888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.353 [2024-12-16 12:58:26.207906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:00.353 [2024-12-16 12:58:26.214393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.353 [2024-12-16 12:58:26.214549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.353 [2024-12-16 12:58:26.214567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:00.353 [2024-12-16 12:58:26.220982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.353 [2024-12-16 12:58:26.221186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.353 [2024-12-16 12:58:26.221210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:00.353 [2024-12-16 12:58:26.227968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.353 [2024-12-16 12:58:26.228120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.353 [2024-12-16 12:58:26.228138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:00.353 [2024-12-16 12:58:26.235472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.353 [2024-12-16 12:58:26.235662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.353 [2024-12-16 12:58:26.235680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:00.353 [2024-12-16 12:58:26.242706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.353 [2024-12-16 12:58:26.242809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.353 [2024-12-16 12:58:26.242827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:00.353 [2024-12-16 12:58:26.249637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.353 [2024-12-16 12:58:26.249794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.353 [2024-12-16 12:58:26.249811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:00.353 [2024-12-16 12:58:26.256417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.353 [2024-12-16 12:58:26.256610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.353 [2024-12-16 12:58:26.256635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:00.353 [2024-12-16 12:58:26.263044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.353 [2024-12-16 12:58:26.263212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.353 [2024-12-16 12:58:26.263230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:00.353 [2024-12-16 12:58:26.269770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.353 [2024-12-16 12:58:26.269871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.353 [2024-12-16 12:58:26.269888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:00.353 [2024-12-16 12:58:26.274768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.353 [2024-12-16 12:58:26.274836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.353 [2024-12-16 12:58:26.274854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:00.353 [2024-12-16 12:58:26.279128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.353 [2024-12-16 12:58:26.279200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.353 [2024-12-16 12:58:26.279217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:00.353 [2024-12-16 12:58:26.283431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.353 [2024-12-16 12:58:26.283487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.353 [2024-12-16 12:58:26.283505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:00.353 [2024-12-16 12:58:26.287681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.353 [2024-12-16 12:58:26.287739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.353 [2024-12-16 12:58:26.287756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:00.353 [2024-12-16 12:58:26.292004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.353 [2024-12-16 12:58:26.292060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.353 [2024-12-16 12:58:26.292078] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:00.353 [2024-12-16 12:58:26.296298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.353 [2024-12-16 12:58:26.296349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.353 [2024-12-16 12:58:26.296366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:00.353 [2024-12-16 12:58:26.300544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.353 [2024-12-16 12:58:26.300610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.353 [2024-12-16 12:58:26.300627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:00.353 [2024-12-16 12:58:26.304803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.353 [2024-12-16 12:58:26.304861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.353 [2024-12-16 12:58:26.304878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:00.353 [2024-12-16 12:58:26.309021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.353 [2024-12-16 12:58:26.309074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.353 [2024-12-16 12:58:26.309092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:00.353 [2024-12-16 12:58:26.313323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.353 [2024-12-16 12:58:26.313382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.353 [2024-12-16 12:58:26.313399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:00.353 [2024-12-16 12:58:26.317521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.353 [2024-12-16 12:58:26.317575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.353 [2024-12-16 12:58:26.317592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:00.354 [2024-12-16 12:58:26.321817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.354 [2024-12-16 12:58:26.321915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.354 [2024-12-16 
12:58:26.321934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:00.354 [2024-12-16 12:58:26.326054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.354 [2024-12-16 12:58:26.326126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.354 [2024-12-16 12:58:26.326143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:00.354 [2024-12-16 12:58:26.330353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.354 [2024-12-16 12:58:26.330420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.354 [2024-12-16 12:58:26.330438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:00.354 [2024-12-16 12:58:26.335137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.354 [2024-12-16 12:58:26.335212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.354 [2024-12-16 12:58:26.335229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:00.354 [2024-12-16 12:58:26.340919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.354 [2024-12-16 12:58:26.341042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.354 [2024-12-16 12:58:26.341060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:00.354 [2024-12-16 12:58:26.347012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.354 [2024-12-16 12:58:26.347158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.354 [2024-12-16 12:58:26.347176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:00.354 [2024-12-16 12:58:26.352969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.354 [2024-12-16 12:58:26.353098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.354 [2024-12-16 12:58:26.353119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:00.354 [2024-12-16 12:58:26.359149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.354 [2024-12-16 12:58:26.359307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:37:00.354 [2024-12-16 12:58:26.359328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:00.354 [2024-12-16 12:58:26.365130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.354 [2024-12-16 12:58:26.365299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.354 [2024-12-16 12:58:26.365316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:00.354 [2024-12-16 12:58:26.371028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.354 [2024-12-16 12:58:26.371102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.354 [2024-12-16 12:58:26.371124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:00.354 [2024-12-16 12:58:26.375894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.354 [2024-12-16 12:58:26.376009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.354 [2024-12-16 12:58:26.376026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:00.354 [2024-12-16 12:58:26.380885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.354 [2024-12-16 12:58:26.380934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.354 [2024-12-16 12:58:26.380969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:00.354 [2024-12-16 12:58:26.385249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.354 [2024-12-16 12:58:26.385316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.354 [2024-12-16 12:58:26.385344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:00.354 [2024-12-16 12:58:26.389503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.354 [2024-12-16 12:58:26.389569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.354 [2024-12-16 12:58:26.389586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:00.354 [2024-12-16 12:58:26.394048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.354 [2024-12-16 12:58:26.394154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.354 [2024-12-16 12:58:26.394171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:00.354 [2024-12-16 12:58:26.399256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.354 [2024-12-16 12:58:26.399413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.354 [2024-12-16 12:58:26.399430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:00.354 [2024-12-16 12:58:26.405329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.354 [2024-12-16 12:58:26.405477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.354 [2024-12-16 12:58:26.405494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:00.354 [2024-12-16 12:58:26.411300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.354 [2024-12-16 12:58:26.411482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.354 [2024-12-16 12:58:26.411500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:00.615 [2024-12-16 12:58:26.417167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.615 [2024-12-16 12:58:26.417348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.615 [2024-12-16 12:58:26.417365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:00.615 [2024-12-16 12:58:26.423177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.615 [2024-12-16 12:58:26.423305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.615 [2024-12-16 12:58:26.423323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:00.615 [2024-12-16 12:58:26.429260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.615 [2024-12-16 12:58:26.429396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.615 [2024-12-16 12:58:26.429412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:00.615 [2024-12-16 12:58:26.435477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.615 [2024-12-16 12:58:26.435590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.615 [2024-12-16 12:58:26.435606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:00.615 [2024-12-16 12:58:26.441721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.615 [2024-12-16 12:58:26.441865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.615 [2024-12-16 12:58:26.441882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:00.616 [2024-12-16 12:58:26.447926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.616 [2024-12-16 12:58:26.448109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.616 [2024-12-16 12:58:26.448131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:00.616 [2024-12-16 12:58:26.454098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.616 [2024-12-16 12:58:26.454272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.616 [2024-12-16 12:58:26.454290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:00.616 [2024-12-16 12:58:26.460211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.616 [2024-12-16 12:58:26.460397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.616 [2024-12-16 12:58:26.460415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:00.616 [2024-12-16 12:58:26.466079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.616 [2024-12-16 12:58:26.466224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.616 [2024-12-16 12:58:26.466241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:00.616 [2024-12-16 12:58:26.472261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.616 [2024-12-16 12:58:26.472404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.616 [2024-12-16 12:58:26.472421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:00.616 [2024-12-16 12:58:26.478369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:00.616 [2024-12-16 12:58:26.478512] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.616 [2024-12-16 12:58:26.478530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:37:00.616 [2024-12-16 12:58:26.484668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90
00:37:00.616 [2024-12-16 12:58:26.484835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.616 [2024-12-16 12:58:26.484853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... same three-record pattern repeats for every injected digest error from 12:58:26.490 through 12:58:27.178: a tcp.c:2233:data_crc32_calc_done *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90, a WRITE *NOTICE* (sqid:1 cid:0 nsid:1, len:32, lba varying), and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion (qid:1 cid:0, sqhd cycling 0001/0021/0041/0061) ...]
00:37:01.143 [2024-12-16 12:58:27.183912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90
00:37:01.143 [2024-12-16 12:58:27.184040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.143 [2024-12-16 12:58:27.184058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:37:01.143 [2024-12-16 12:58:27.189574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on
tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:01.143 [2024-12-16 12:58:27.189658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.143 [2024-12-16 12:58:27.189676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:01.143 [2024-12-16 12:58:27.195465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:01.143 [2024-12-16 12:58:27.195533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.143 [2024-12-16 12:58:27.195550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:01.143 [2024-12-16 12:58:27.199931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:01.143 [2024-12-16 12:58:27.200021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.143 [2024-12-16 12:58:27.200039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:01.143 [2024-12-16 12:58:27.204586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21ef7a0) with pdu=0x2000198fef90 00:37:01.143 [2024-12-16 12:58:27.204682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.143 [2024-12-16 12:58:27.204700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:01.402 5936.50 IOPS, 742.06 MiB/s 00:37:01.402 Latency(us) 00:37:01.402 [2024-12-16T11:58:27.469Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:01.403 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:37:01.403 nvme0n1 : 2.00 5935.12 741.89 0.00 0.00 2691.57 1685.21 7926.74 00:37:01.403 [2024-12-16T11:58:27.470Z] =================================================================================================================== 00:37:01.403 [2024-12-16T11:58:27.470Z] Total : 5935.12 741.89 0.00 0.00 2691.57 1685.21 7926.74 00:37:01.403 { 00:37:01.403 "results": [ 00:37:01.403 { 00:37:01.403 "job": "nvme0n1", 00:37:01.403 "core_mask": "0x2", 00:37:01.403 "workload": "randwrite", 00:37:01.403 "status": "finished", 00:37:01.403 "queue_depth": 16, 00:37:01.403 "io_size": 131072, 00:37:01.403 "runtime": 2.003835, 00:37:01.403 "iops": 5935.119408534136, 00:37:01.403 "mibps": 741.889926066767, 00:37:01.403 "io_failed": 0, 00:37:01.403 "io_timeout": 0, 00:37:01.403 "avg_latency_us": 2691.5685817587773, 00:37:01.403 "min_latency_us": 1685.2114285714285, 00:37:01.403 "max_latency_us": 7926.735238095238 00:37:01.403 } 00:37:01.403 ], 00:37:01.403 "core_count": 1 00:37:01.403 } 00:37:01.403 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:37:01.403 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:37:01.403 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq 
-r '.bdevs[0] 00:37:01.403 | .driver_specific 00:37:01.403 | .nvme_error 00:37:01.403 | .status_code 00:37:01.403 | .command_transient_transport_error' 00:37:01.403 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:37:01.403 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 383 > 0 )) 00:37:01.403 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 574647 00:37:01.403 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 574647 ']' 00:37:01.403 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 574647 00:37:01.403 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:37:01.403 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:01.403 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 574647 00:37:01.662 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:01.662 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:01.662 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 574647' 00:37:01.662 killing process with pid 574647 00:37:01.662 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 574647 00:37:01.662 Received shutdown signal, test time was about 2.000000 seconds 00:37:01.662 00:37:01.662 Latency(us) 00:37:01.662 [2024-12-16T11:58:27.729Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:01.662 [2024-12-16T11:58:27.730Z] =================================================================================================================== 00:37:01.663 [2024-12-16T11:58:27.730Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:01.663 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 574647 00:37:01.663 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 572973 00:37:01.663 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 572973 ']' 00:37:01.663 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 572973 00:37:01.663 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:37:01.663 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:01.663 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 572973 00:37:01.663 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:01.663 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:01.663 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
572973' 00:37:01.663 killing process with pid 572973 00:37:01.663 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 572973 00:37:01.663 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 572973 00:37:01.922 00:37:01.922 real 0m14.183s 00:37:01.922 user 0m27.003s 00:37:01.922 sys 0m4.672s 00:37:01.922 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:01.922 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:01.922 ************************************ 00:37:01.922 END TEST nvmf_digest_error 00:37:01.922 ************************************ 00:37:01.922 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:37:01.922 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:37:01.922 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@512 -- # nvmfcleanup 00:37:01.922 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:37:01.922 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:01.922 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:37:01.922 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:01.922 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:01.922 rmmod nvme_tcp 00:37:01.922 rmmod nvme_fabrics 00:37:01.922 rmmod nvme_keyring 00:37:01.922 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:01.922 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:37:01.922 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:37:01.922 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@513 -- # '[' -n 572973 ']' 00:37:01.922 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # killprocess 572973 00:37:01.922 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 572973 ']' 00:37:01.923 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 572973 00:37:01.923 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (572973) - No such process 00:37:01.923 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 572973 is not found' 00:37:01.923 Process with pid 572973 is not found 00:37:01.923 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:37:01.923 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:37:01.923 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:37:01.923 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:37:01.923 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # iptables-save 00:37:01.923 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:37:01.923 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # iptables-restore 00:37:01.923 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:01.923 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 
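The pass/fail decision for the digest-error test above comes from the bdev I/O statistics rather than from the log noise: digest.sh calls the bdev_get_iostat RPC against the bperf socket and extracts the transient-transport-error counter with jq, then asserts it is non-zero (383 transient errors were counted here). A condensed sketch of that check, with the rpc.py path, socket, and jq filter taken from the log above (the helper name mirrors the one in digest.sh):

  # Count COMMAND TRANSIENT TRANSPORT ERROR completions recorded for a bdev.
  get_transient_errcount() {
      local bdev=$1
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
          -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
          jq -r '.bdevs[0] | .driver_specific | .nvme_error
                 | .status_code | .command_transient_transport_error'
  }

  errcount=$(get_transient_errcount nvme0n1)
  (( errcount > 0 ))   # the test fails unless at least one transient error was seen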
00:37:01.923 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:01.923 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:01.923 12:58:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:04.463 00:37:04.463 real 0m36.476s 00:37:04.463 user 0m55.404s 00:37:04.463 sys 0m13.740s 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:04.463 ************************************ 00:37:04.463 END TEST nvmf_digest 00:37:04.463 ************************************ 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.463 ************************************ 00:37:04.463 START TEST nvmf_bdevperf 00:37:04.463 ************************************ 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:37:04.463 * Looking for test storage... 
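Each test above is launched through the run_test wrapper from autotest_common.sh, which is what produces the START TEST / END TEST banners and the real/user/sys summary emitted by bash's time keyword. A minimal sketch of that pattern (a simplified reconstruction, not the exact helper, which additionally tracks xtrace state and exit codes):

  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                      # emits the real/user/sys lines seen above
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
  }

  run_test nvmf_bdevperf test/nvmf/host/bdevperf.sh --transport=tcp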
00:37:04.463 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:04.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:04.463 --rc genhtml_branch_coverage=1 00:37:04.463 --rc genhtml_function_coverage=1 00:37:04.463 --rc genhtml_legend=1 00:37:04.463 --rc geninfo_all_blocks=1 00:37:04.463 --rc geninfo_unexecuted_blocks=1 00:37:04.463 00:37:04.463 ' 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:04.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:04.463 --rc genhtml_branch_coverage=1 00:37:04.463 --rc genhtml_function_coverage=1 00:37:04.463 --rc genhtml_legend=1 00:37:04.463 --rc geninfo_all_blocks=1 00:37:04.463 --rc geninfo_unexecuted_blocks=1 00:37:04.463 00:37:04.463 ' 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:04.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:04.463 --rc genhtml_branch_coverage=1 00:37:04.463 --rc genhtml_function_coverage=1 00:37:04.463 --rc genhtml_legend=1 00:37:04.463 --rc geninfo_all_blocks=1 00:37:04.463 --rc geninfo_unexecuted_blocks=1 00:37:04.463 00:37:04.463 ' 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:04.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:04.463 --rc genhtml_branch_coverage=1 00:37:04.463 --rc genhtml_function_coverage=1 00:37:04.463 --rc genhtml_legend=1 00:37:04.463 --rc geninfo_all_blocks=1 00:37:04.463 --rc geninfo_unexecuted_blocks=1 00:37:04.463 00:37:04.463 ' 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:04.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@465 -- # '[' -z tcp ']' 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@472 -- # prepare_net_devs 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:37:04.463 12:58:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:11.039 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:11.039 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:37:11.039 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:11.039 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:11.039 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:11.039 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:11.039 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:11.039 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:37:11.039 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:11.039 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:37:11.039 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:37:11.039 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:37:11.039 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:37:11.039 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:37:11.039 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:37:11.039 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:11.039 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:11.039 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:11.039 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:11.039 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:11.039 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:11.039 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:11.039 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:11.039 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:11.039 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:11.039 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:11.039 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:37:11.039 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:37:11.039 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:37:11.039 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:37:11.039 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:37:11.039 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:37:11.039 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:11.039 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:37:11.039 Found 0000:af:00.0 (0x8086 - 0x159b) 00:37:11.039 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:11.039 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:11.039 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:11.039 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:11.039 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:11.039 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:11.039 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:37:11.039 Found 0000:af:00.1 (0x8086 - 0x159b) 00:37:11.040 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:11.040 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:11.040 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:11.040 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:11.040 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:11.040 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:37:11.040 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:37:11.040 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:37:11.040 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:11.040 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:11.040 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:11.040 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:11.040 
12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:11.040 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:11.040 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:11.040 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:37:11.040 Found net devices under 0000:af:00.0: cvl_0_0 00:37:11.040 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:11.040 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:11.040 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:11.040 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:11.040 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:11.040 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:11.040 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:11.040 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:11.040 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:37:11.040 Found net devices under 0000:af:00.1: cvl_0_1 00:37:11.040 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:11.040 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:37:11.040 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # is_hw=yes 00:37:11.040 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:37:11.040 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:37:11.040 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:37:11.040 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:11.040 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:11.040 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:11.040 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:11.040 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:11.040 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:11.040 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:11.040 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:11.040 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:11.040 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:11.040 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:11.040 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:11.040 12:58:35 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:11.040 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:11.040 12:58:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:11.040 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:11.040 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:11.040 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:11.040 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:11.040 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:11.040 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:11.040 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:11.040 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:11.040 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:11.040 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.400 ms 00:37:11.040 00:37:11.040 --- 10.0.0.2 ping statistics --- 00:37:11.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:11.040 rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms 00:37:11.040 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:11.040 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:11.040 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:37:11.040 00:37:11.040 --- 10.0.0.1 ping statistics --- 00:37:11.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:11.040 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:37:11.040 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:11.040 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # return 0 00:37:11.040 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:37:11.040 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:11.040 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:37:11.040 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:37:11.040 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:11.040 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:37:11.040 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:37:11.040 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:37:11.040 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:37:11.040 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:37:11.040 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:11.040 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:11.040 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # nvmfpid=578692 00:37:11.040 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # waitforlisten 578692 00:37:11.040 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:37:11.040 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 578692 ']' 00:37:11.040 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:11.040 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:11.040 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:11.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:11.040 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:11.040 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:11.040 [2024-12-16 12:58:36.234137] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
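nvmftestinit above turns the two E810 ports into a point-to-point NVMe/TCP fabric: the target port (cvl_0_0) moves into a dedicated network namespace with 10.0.0.2/24, the initiator port (cvl_0_1) stays in the default namespace with 10.0.0.1/24, TCP port 4420 is opened in iptables, both directions are ping-verified, and only then is nvmf_tgt launched inside the namespace. Condensed from the commands logged above (the backgrounding of nvmf_tgt is added here for illustration):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &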
00:37:11.040 [2024-12-16 12:58:36.234189] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:11.040 [2024-12-16 12:58:36.307942] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:11.040 [2024-12-16 12:58:36.348179] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:11.040 [2024-12-16 12:58:36.348219] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:11.040 [2024-12-16 12:58:36.348226] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:11.040 [2024-12-16 12:58:36.348232] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:11.040 [2024-12-16 12:58:36.348238] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:11.040 [2024-12-16 12:58:36.348351] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:37:11.040 [2024-12-16 12:58:36.348393] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:11.040 [2024-12-16 12:58:36.348394] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:37:11.040 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:11.040 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:37:11.040 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:37:11.040 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:11.040 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:11.040 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:11.040 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:11.040 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.040 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:11.040 [2024-12-16 12:58:36.477823] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:11.040 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.040 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:11.040 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.040 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:11.040 Malloc0 00:37:11.040 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.041 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:11.041 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.041 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:11.041 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
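Once nvmf_tgt is up (core mask 0xE, hence the three reactors on cores 1 through 3), tgt_init shapes it entirely over JSON-RPC: first the TCP transport with the options logged above, then a RAM-backed Malloc bdev sized by MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE from earlier in the script (64 MiB, 512-byte blocks) to serve as backing storage. The rpc_cmd helper seen in the trace is a thin wrapper around rpc.py talking to the default /var/tmp/spdk.sock; the equivalent direct calls:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # options as logged above
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB bdev, 512 B blocks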
00:37:11.041 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:11.041 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.041 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:11.041 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.041 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:11.041 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.041 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:11.041 [2024-12-16 12:58:36.539285] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:11.041 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.041 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:37:11.041 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:37:11.041 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # config=() 00:37:11.041 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # local subsystem config 00:37:11.041 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:37:11.041 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:37:11.041 { 00:37:11.041 "params": { 00:37:11.041 "name": "Nvme$subsystem", 00:37:11.041 "trtype": "$TEST_TRANSPORT", 00:37:11.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:11.041 "adrfam": "ipv4", 00:37:11.041 "trsvcid": "$NVMF_PORT", 00:37:11.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:11.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:11.041 "hdgst": ${hdgst:-false}, 00:37:11.041 "ddgst": ${ddgst:-false} 00:37:11.041 }, 00:37:11.041 "method": "bdev_nvme_attach_controller" 00:37:11.041 } 00:37:11.041 EOF 00:37:11.041 )") 00:37:11.041 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # cat 00:37:11.041 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # jq . 00:37:11.041 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@581 -- # IFS=, 00:37:11.041 12:58:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:37:11.041 "params": { 00:37:11.041 "name": "Nvme1", 00:37:11.041 "trtype": "tcp", 00:37:11.041 "traddr": "10.0.0.2", 00:37:11.041 "adrfam": "ipv4", 00:37:11.041 "trsvcid": "4420", 00:37:11.041 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:11.041 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:11.041 "hdgst": false, 00:37:11.041 "ddgst": false 00:37:11.041 }, 00:37:11.041 "method": "bdev_nvme_attach_controller" 00:37:11.041 }' 00:37:11.041 [2024-12-16 12:58:36.592632] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
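The bdev is then exported to the fabric: an allow-any-host (-a) subsystem cnode1 is created with serial SPDK00000000000001, Malloc0 is attached as its namespace, and a TCP listener is opened on 10.0.0.2:4420. On the host side, gen_nvmf_target_json emits the bdev_nvme_attach_controller configuration printed above, which bdevperf reads from fd 62 via process substitution, so no config file ever touches disk. The same sequence as direct commands (paths abbreviated):

  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420

  # 128-deep, 4 KiB verify workload for 1 second, config supplied inline on fd 62
  ./build/examples/bdevperf --json <(gen_nvmf_target_json) \
      -q 128 -o 4096 -w verify -t 1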
00:37:11.041 [2024-12-16 12:58:36.592673] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid578765 ] 00:37:11.041 [2024-12-16 12:58:36.661042] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:11.041 [2024-12-16 12:58:36.699885] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:37:11.041 Running I/O for 1 seconds... 00:37:11.979 11255.00 IOPS, 43.96 MiB/s 00:37:11.979 Latency(us) 00:37:11.979 [2024-12-16T11:58:38.046Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:11.979 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:11.979 Verification LBA range: start 0x0 length 0x4000 00:37:11.979 Nvme1n1 : 1.00 11343.22 44.31 0.00 0.00 11244.82 1022.05 13169.62 00:37:11.979 [2024-12-16T11:58:38.046Z] =================================================================================================================== 00:37:11.979 [2024-12-16T11:58:38.046Z] Total : 11343.22 44.31 0.00 0.00 11244.82 1022.05 13169.62 00:37:11.979 12:58:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=578988 00:37:11.979 12:58:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:37:11.979 12:58:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:37:11.979 12:58:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:37:11.979 12:58:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # config=() 00:37:11.979 12:58:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # local subsystem config 00:37:11.979 12:58:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:37:11.979 12:58:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:37:11.979 { 00:37:11.979 "params": { 00:37:11.979 "name": "Nvme$subsystem", 00:37:11.979 "trtype": "$TEST_TRANSPORT", 00:37:11.979 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:11.979 "adrfam": "ipv4", 00:37:11.979 "trsvcid": "$NVMF_PORT", 00:37:11.979 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:11.979 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:11.979 "hdgst": ${hdgst:-false}, 00:37:11.979 "ddgst": ${ddgst:-false} 00:37:11.979 }, 00:37:11.979 "method": "bdev_nvme_attach_controller" 00:37:11.979 } 00:37:11.979 EOF 00:37:11.979 )") 00:37:11.979 12:58:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # cat 00:37:11.979 12:58:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # jq . 
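The MiB/s column in these result tables is simply IOPS times IO size: the 1-second verify run above sustained 11343.22 IOPS at 4096 B, i.e. 11343.22 x 4096 / 2^20 = 44.31 MiB/s (likewise the earlier digest run: 5935.12 IOPS x 128 KiB = 5935.12 / 8 = 741.89 MiB/s). A quick shell check of that arithmetic:

  iops=11343.22 io_size=4096
  awk -v i="$iops" -v s="$io_size" 'BEGIN { printf "%.2f MiB/s\n", i * s / 1048576 }'
  # -> 44.31 MiB/s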
00:37:12.238 12:58:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@581 -- # IFS=, 00:37:12.238 12:58:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:37:12.238 "params": { 00:37:12.238 "name": "Nvme1", 00:37:12.238 "trtype": "tcp", 00:37:12.238 "traddr": "10.0.0.2", 00:37:12.238 "adrfam": "ipv4", 00:37:12.238 "trsvcid": "4420", 00:37:12.238 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:12.238 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:12.238 "hdgst": false, 00:37:12.238 "ddgst": false 00:37:12.238 }, 00:37:12.238 "method": "bdev_nvme_attach_controller" 00:37:12.238 }' 00:37:12.238 [2024-12-16 12:58:38.080413] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:37:12.238 [2024-12-16 12:58:38.080458] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid578988 ] 00:37:12.238 [2024-12-16 12:58:38.147178] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:12.238 [2024-12-16 12:58:38.183876] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:37:12.498 Running I/O for 15 seconds... 00:37:14.446 11458.00 IOPS, 44.76 MiB/s [2024-12-16T11:58:41.084Z] 11455.50 IOPS, 44.75 MiB/s [2024-12-16T11:58:41.084Z] 12:58:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 578692 00:37:15.017 12:58:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:37:15.017 [2024-12-16 12:58:41.048440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:101176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.017 [2024-12-16 12:58:41.048480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.017 [2024-12-16 12:58:41.048497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:101184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.017 [2024-12-16 12:58:41.048507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.017 [2024-12-16 12:58:41.048516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:101192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.017 [2024-12-16 12:58:41.048524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.017 [2024-12-16 12:58:41.048532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:101200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.017 [2024-12-16 12:58:41.048540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.017 [2024-12-16 12:58:41.048549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:101208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.017 [2024-12-16 12:58:41.048557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.017 [2024-12-16 12:58:41.048566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:101216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.017 [2024-12-16 
12:58:41.048574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.017 [2024-12-16 12:58:41.048582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:101224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.017 [2024-12-16 12:58:41.048589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.017 [2024-12-16 12:58:41.048606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:101232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.017 [2024-12-16 12:58:41.048613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.017 [2024-12-16 12:58:41.048622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.017 [2024-12-16 12:58:41.048630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.017 [2024-12-16 12:58:41.048639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:101248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.017 [2024-12-16 12:58:41.048646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.017 [2024-12-16 12:58:41.048656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:101256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.017 [2024-12-16 12:58:41.048662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.017 [2024-12-16 12:58:41.048672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.017 [2024-12-16 12:58:41.048678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.017 [2024-12-16 12:58:41.048686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.017 [2024-12-16 12:58:41.048693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.017 [2024-12-16 12:58:41.048701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:101280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.017 [2024-12-16 12:58:41.048708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.017 [2024-12-16 12:58:41.048716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:101288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.017 [2024-12-16 12:58:41.048723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.017 [2024-12-16 12:58:41.048732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:101296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.017 [2024-12-16 12:58:41.048740] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.017 [2024-12-16 12:58:41.048749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:101304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.017 [2024-12-16 12:58:41.048757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.017 [2024-12-16 12:58:41.048767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:101312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.017 [2024-12-16 12:58:41.048775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.017 [2024-12-16 12:58:41.048783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:101320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.017 [2024-12-16 12:58:41.048791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.017 [2024-12-16 12:58:41.048800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:101328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.017 [2024-12-16 12:58:41.048808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.018 [2024-12-16 12:58:41.048819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:101336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.018 [2024-12-16 12:58:41.048826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.018 [2024-12-16 12:58:41.048835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:101344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.018 [2024-12-16 12:58:41.048842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.018 [2024-12-16 12:58:41.048852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:101352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.018 [2024-12-16 12:58:41.048859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.018 [2024-12-16 12:58:41.048867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:101360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.018 [2024-12-16 12:58:41.048873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.018 [2024-12-16 12:58:41.048882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:101368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.018 [2024-12-16 12:58:41.048889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.018 [2024-12-16 12:58:41.048897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:101376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.018 [2024-12-16 12:58:41.048903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.018 [2024-12-16 12:58:41.048911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:101384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.018 [2024-12-16 12:58:41.048918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.018 [2024-12-16 12:58:41.048925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:101392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.018 [2024-12-16 12:58:41.048932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.018 [2024-12-16 12:58:41.048940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:101400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.018 [2024-12-16 12:58:41.048946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.018 [2024-12-16 12:58:41.048954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:101408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.018 [2024-12-16 12:58:41.048961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.018 [2024-12-16 12:58:41.048969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:101416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.018 [2024-12-16 12:58:41.048976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.018 [2024-12-16 12:58:41.048984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:101424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.018 [2024-12-16 12:58:41.048990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.018 [2024-12-16 12:58:41.048998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:101432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.018 [2024-12-16 12:58:41.049006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.018 [2024-12-16 12:58:41.049018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:101440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.018 [2024-12-16 12:58:41.049025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.018 [2024-12-16 12:58:41.049033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:101448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.018 [2024-12-16 12:58:41.049039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.018 [2024-12-16 12:58:41.049047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:101456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.018 [2024-12-16 12:58:41.049054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.018 [2024-12-16 12:58:41.049062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:101464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.018 [2024-12-16 12:58:41.049069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.018 [2024-12-16 12:58:41.049077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:101472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.018 [2024-12-16 12:58:41.049083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.018 [2024-12-16 12:58:41.049091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:101480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.018 [2024-12-16 12:58:41.049098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.018 [2024-12-16 12:58:41.049105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:101488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.018 [2024-12-16 12:58:41.049112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.018 [2024-12-16 12:58:41.049240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:101496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.018 [2024-12-16 12:58:41.049247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.018 [2024-12-16 12:58:41.049255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:101504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.018 [2024-12-16 12:58:41.049262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.018 [2024-12-16 12:58:41.049270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:101512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.018 [2024-12-16 12:58:41.049277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.018 [2024-12-16 12:58:41.049285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:101520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.018 [2024-12-16 12:58:41.049291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.018 [2024-12-16 12:58:41.049300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:101528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.018 [2024-12-16 12:58:41.049306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.018 [2024-12-16 12:58:41.049317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:101536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.018 [2024-12-16 12:58:41.049323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.018 
[2024-12-16 12:58:41.049331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:101544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.018 [2024-12-16 12:58:41.049338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.018 [2024-12-16 12:58:41.049346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:101552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.018 [2024-12-16 12:58:41.049353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.018 [2024-12-16 12:58:41.049360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:101560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.018 [2024-12-16 12:58:41.049367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.018 [2024-12-16 12:58:41.049376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:101568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.018 [2024-12-16 12:58:41.049383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.018 [2024-12-16 12:58:41.049391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:101576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.018 [2024-12-16 12:58:41.049397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.018 [2024-12-16 12:58:41.049405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:101584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.018 [2024-12-16 12:58:41.049411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.018 [2024-12-16 12:58:41.049419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:101592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.018 [2024-12-16 12:58:41.049426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.018 [2024-12-16 12:58:41.049434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:101600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.018 [2024-12-16 12:58:41.049440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.018 [2024-12-16 12:58:41.049448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:101608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.018 [2024-12-16 12:58:41.049454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.018 [2024-12-16 12:58:41.049462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:101616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.018 [2024-12-16 12:58:41.049469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.018 [2024-12-16 12:58:41.049476] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:101624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.018 [2024-12-16 12:58:41.049483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.018 [2024-12-16 12:58:41.049491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:101632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.018 [2024-12-16 12:58:41.049498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.018 [2024-12-16 12:58:41.049506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:101640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.018 [2024-12-16 12:58:41.049513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.018 [2024-12-16 12:58:41.049521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:101648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.018 [2024-12-16 12:58:41.049527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.018 [2024-12-16 12:58:41.049535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:101656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.019 [2024-12-16 12:58:41.049542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.019 [2024-12-16 12:58:41.049550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:101664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.019 [2024-12-16 12:58:41.049557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.019 [2024-12-16 12:58:41.049564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:101672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.019 [2024-12-16 12:58:41.049571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.019 [2024-12-16 12:58:41.049578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:101680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.019 [2024-12-16 12:58:41.049585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.019 [2024-12-16 12:58:41.049593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:101688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.019 [2024-12-16 12:58:41.049599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.019 [2024-12-16 12:58:41.049608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:101696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.019 [2024-12-16 12:58:41.049615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.019 [2024-12-16 12:58:41.049623] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:101704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.019 [2024-12-16 12:58:41.049629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.019 [2024-12-16 12:58:41.049637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:101712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.019 [2024-12-16 12:58:41.049644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.019 [2024-12-16 12:58:41.049652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.019 [2024-12-16 12:58:41.049658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.019 [2024-12-16 12:58:41.049666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:101728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.019 [2024-12-16 12:58:41.049673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.019 [2024-12-16 12:58:41.049682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:101736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.019 [2024-12-16 12:58:41.049689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.019 [2024-12-16 12:58:41.049696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:101744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.019 [2024-12-16 12:58:41.049703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.019 [2024-12-16 12:58:41.049711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:101752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.019 [2024-12-16 12:58:41.049717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.019 [2024-12-16 12:58:41.049725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:101760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.019 [2024-12-16 12:58:41.049732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.019 [2024-12-16 12:58:41.049739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.019 [2024-12-16 12:58:41.049746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.019 [2024-12-16 12:58:41.049753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:101776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.019 [2024-12-16 12:58:41.049760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.019 [2024-12-16 12:58:41.049768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:38 nsid:1 lba:101784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.019 [2024-12-16 12:58:41.049774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.019 [2024-12-16 12:58:41.049782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:101792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.019 [2024-12-16 12:58:41.049788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.019 [2024-12-16 12:58:41.049797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:100784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.019 [2024-12-16 12:58:41.049803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.019 [2024-12-16 12:58:41.049811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:100792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.019 [2024-12-16 12:58:41.049818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.019 [2024-12-16 12:58:41.049826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:100800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.019 [2024-12-16 12:58:41.049832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.019 [2024-12-16 12:58:41.049841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:100808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.019 [2024-12-16 12:58:41.049848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.019 [2024-12-16 12:58:41.049855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:100816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.019 [2024-12-16 12:58:41.049863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.019 [2024-12-16 12:58:41.049871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:100824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.019 [2024-12-16 12:58:41.049878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.019 [2024-12-16 12:58:41.049886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:100832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.019 [2024-12-16 12:58:41.049892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.019 [2024-12-16 12:58:41.049900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:100840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.019 [2024-12-16 12:58:41.049907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.019 [2024-12-16 12:58:41.049914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 
lba:100848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.019 [2024-12-16 12:58:41.049921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.019 [2024-12-16 12:58:41.049929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:100856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.019 [2024-12-16 12:58:41.049935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.019 [2024-12-16 12:58:41.049943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:100864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.019 [2024-12-16 12:58:41.049950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.019 [2024-12-16 12:58:41.049958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:100872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.019 [2024-12-16 12:58:41.049964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.019 [2024-12-16 12:58:41.049972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:100880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.019 [2024-12-16 12:58:41.049979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.019 [2024-12-16 12:58:41.049987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:100888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.019 [2024-12-16 12:58:41.049993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.019 [2024-12-16 12:58:41.050001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:100896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.019 [2024-12-16 12:58:41.050007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.019 [2024-12-16 12:58:41.050016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:100904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.019 [2024-12-16 12:58:41.050022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.019 [2024-12-16 12:58:41.050029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:100912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.019 [2024-12-16 12:58:41.050036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.019 [2024-12-16 12:58:41.050044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:100920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.019 [2024-12-16 12:58:41.050051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.019 [2024-12-16 12:58:41.050060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:100928 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:37:15.019 [2024-12-16 12:58:41.050066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.019 [2024-12-16 12:58:41.050075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:100936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.019 [2024-12-16 12:58:41.050082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.019 [2024-12-16 12:58:41.050090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:100944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.019 [2024-12-16 12:58:41.050096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.019 [2024-12-16 12:58:41.050104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:100952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.019 [2024-12-16 12:58:41.050111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.019 [2024-12-16 12:58:41.050123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:100960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.019 [2024-12-16 12:58:41.050130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.020 [2024-12-16 12:58:41.050138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:100968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.020 [2024-12-16 12:58:41.050144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.020 [2024-12-16 12:58:41.050152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:100976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.020 [2024-12-16 12:58:41.050159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.020 [2024-12-16 12:58:41.050167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:100984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.020 [2024-12-16 12:58:41.050173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.020 [2024-12-16 12:58:41.050181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:100992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.020 [2024-12-16 12:58:41.050187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.020 [2024-12-16 12:58:41.050196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:101000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.020 [2024-12-16 12:58:41.050203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.020 [2024-12-16 12:58:41.050211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:101008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:15.020 [2024-12-16 12:58:41.050217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.020 [2024-12-16 12:58:41.050225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:101016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.020 [2024-12-16 12:58:41.050232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.020 [2024-12-16 12:58:41.050241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:101024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.020 [2024-12-16 12:58:41.050248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.020 [2024-12-16 12:58:41.050256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:101032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.020 [2024-12-16 12:58:41.050262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.020 [2024-12-16 12:58:41.050270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:101040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.020 [2024-12-16 12:58:41.050277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.020 [2024-12-16 12:58:41.050284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:101048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.020 [2024-12-16 12:58:41.050291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.020 [2024-12-16 12:58:41.050299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:101056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.020 [2024-12-16 12:58:41.050305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.020 [2024-12-16 12:58:41.050316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:101064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.020 [2024-12-16 12:58:41.050322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.020 [2024-12-16 12:58:41.050331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:101072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.020 [2024-12-16 12:58:41.050337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.020 [2024-12-16 12:58:41.050345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:101080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.020 [2024-12-16 12:58:41.050352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.020 [2024-12-16 12:58:41.050360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:101088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.020 [2024-12-16 
12:58:41.050367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.020 [2024-12-16 12:58:41.050375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:101096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.020 [2024-12-16 12:58:41.050381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.020 [2024-12-16 12:58:41.050390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:101104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.020 [2024-12-16 12:58:41.050396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.020 [2024-12-16 12:58:41.050404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:101112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.020 [2024-12-16 12:58:41.050411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.020 [2024-12-16 12:58:41.050419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.020 [2024-12-16 12:58:41.050427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.020 [2024-12-16 12:58:41.050435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:101128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.020 [2024-12-16 12:58:41.050442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.020 [2024-12-16 12:58:41.050449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:101136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.020 [2024-12-16 12:58:41.050456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.020 [2024-12-16 12:58:41.050464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:101144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.020 [2024-12-16 12:58:41.050471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.020 [2024-12-16 12:58:41.050478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:101152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.020 [2024-12-16 12:58:41.050485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.020 [2024-12-16 12:58:41.050493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:101160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.020 [2024-12-16 12:58:41.050499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.020 [2024-12-16 12:58:41.050507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:101168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.020 [2024-12-16 12:58:41.050514] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.020 [2024-12-16 12:58:41.050521] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b7280 is same with the state(6) to be set 00:37:15.020 [2024-12-16 12:58:41.050530] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:15.020 [2024-12-16 12:58:41.050535] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:15.020 [2024-12-16 12:58:41.050541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101800 len:8 PRP1 0x0 PRP2 0x0 00:37:15.020 [2024-12-16 12:58:41.050549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.020 [2024-12-16 12:58:41.050591] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18b7280 was disconnected and freed. reset controller. 00:37:15.020 [2024-12-16 12:58:41.050635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:37:15.020 [2024-12-16 12:58:41.050644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.020 [2024-12-16 12:58:41.050651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:37:15.020 [2024-12-16 12:58:41.050658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.020 [2024-12-16 12:58:41.050665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:37:15.020 [2024-12-16 12:58:41.050672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.020 [2024-12-16 12:58:41.050679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:37:15.020 [2024-12-16 12:58:41.050687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.020 [2024-12-16 12:58:41.050694] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:15.020 [2024-12-16 12:58:41.053437] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.020 [2024-12-16 12:58:41.053464] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:15.020 [2024-12-16 12:58:41.054060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.020 [2024-12-16 12:58:41.054076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:15.020 [2024-12-16 12:58:41.054084] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:15.020 [2024-12-16 12:58:41.054261] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:15.020 [2024-12-16 12:58:41.054434] 
nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.020 [2024-12-16 12:58:41.054442] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.020 [2024-12-16 12:58:41.054450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.020 [2024-12-16 12:58:41.057195] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:15.020 [2024-12-16 12:58:41.066619] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.020 [2024-12-16 12:58:41.067044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.020 [2024-12-16 12:58:41.067061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:15.020 [2024-12-16 12:58:41.067069] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:15.020 [2024-12-16 12:58:41.067244] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:15.020 [2024-12-16 12:58:41.067413] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.020 [2024-12-16 12:58:41.067421] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.021 [2024-12-16 12:58:41.067427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.021 [2024-12-16 12:58:41.070023] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:15.021 [2024-12-16 12:58:41.079559] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.282 [2024-12-16 12:58:41.080028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.282 [2024-12-16 12:58:41.080073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:15.282 [2024-12-16 12:58:41.080098] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:15.282 [2024-12-16 12:58:41.080620] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:15.282 [2024-12-16 12:58:41.080789] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.282 [2024-12-16 12:58:41.080797] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.282 [2024-12-16 12:58:41.080804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.282 [2024-12-16 12:58:41.083538] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
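Editor's note: everything from the burst of aborts above through the retry records below is the failover half of the test. After the three-second sleep at host/bdevperf.sh@32, bdevperf.sh@33 kill -9'd pid 578692 (evidently the NVMe-oF target process, given the connection refusals that follow), so every outstanding WRITE/READ was completed manually as ABORTED - SQ DELETION when qpair 0x18b7280 was disconnected and freed, and each subsequent reconnect attempt now fails in posix_sock_create with errno 111 before controller reinitialization can start. Errno 111 is ECONNREFUSED on Linux, consistent with nothing listening on 10.0.0.2:4420 anymore:

    # errno 111 reported by posix_sock_create above is ECONNREFUSED on Linux.
    python3 -c 'import errno, os; print(errno.ECONNREFUSED, os.strerror(errno.ECONNREFUSED))'
    # prints: 111 Connection refused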
00:37:15.282 [2024-12-16 12:58:41.092352] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.282 [2024-12-16 12:58:41.092753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.282 [2024-12-16 12:58:41.092768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:15.282 [2024-12-16 12:58:41.092775] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:15.282 [2024-12-16 12:58:41.092933] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:15.282 [2024-12-16 12:58:41.093091] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.282 [2024-12-16 12:58:41.093099] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.282 [2024-12-16 12:58:41.093105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.282 [2024-12-16 12:58:41.095722] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:15.282 [2024-12-16 12:58:41.105192] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.282 [2024-12-16 12:58:41.105598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.282 [2024-12-16 12:58:41.105642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:15.283 [2024-12-16 12:58:41.105666] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:15.283 [2024-12-16 12:58:41.106262] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:15.283 [2024-12-16 12:58:41.106833] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.283 [2024-12-16 12:58:41.106842] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.283 [2024-12-16 12:58:41.106848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.283 [2024-12-16 12:58:41.109448] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.283 [2024-12-16 12:58:41.118045] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.283 [2024-12-16 12:58:41.118465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.283 [2024-12-16 12:58:41.118481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:15.283 [2024-12-16 12:58:41.118488] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:15.283 [2024-12-16 12:58:41.118655] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:15.283 [2024-12-16 12:58:41.118822] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.283 [2024-12-16 12:58:41.118830] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.283 [2024-12-16 12:58:41.118836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.283 [2024-12-16 12:58:41.121444] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:15.283 [2024-12-16 12:58:41.130769] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.283 [2024-12-16 12:58:41.131141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.283 [2024-12-16 12:58:41.131157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:15.283 [2024-12-16 12:58:41.131167] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:15.283 [2024-12-16 12:58:41.131341] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:15.283 [2024-12-16 12:58:41.131500] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.283 [2024-12-16 12:58:41.131508] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.283 [2024-12-16 12:58:41.131513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.283 [2024-12-16 12:58:41.134099] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.283 [2024-12-16 12:58:41.143557] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.283 [2024-12-16 12:58:41.144004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.283 [2024-12-16 12:58:41.144048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:15.283 [2024-12-16 12:58:41.144072] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:15.283 [2024-12-16 12:58:41.144668] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:15.283 [2024-12-16 12:58:41.145000] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.283 [2024-12-16 12:58:41.145010] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.283 [2024-12-16 12:58:41.145017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.283 [2024-12-16 12:58:41.147618] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.283 [2024-12-16 12:58:41.156327] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.283 [2024-12-16 12:58:41.156707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.283 [2024-12-16 12:58:41.156722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:15.283 [2024-12-16 12:58:41.156729] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:15.283 [2024-12-16 12:58:41.156895] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:15.283 [2024-12-16 12:58:41.157061] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.283 [2024-12-16 12:58:41.157070] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.283 [2024-12-16 12:58:41.157076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.283 [2024-12-16 12:58:41.159681] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.283 [2024-12-16 12:58:41.169076] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.283 [2024-12-16 12:58:41.169456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.283 [2024-12-16 12:58:41.169501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:15.283 [2024-12-16 12:58:41.169525] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:15.283 [2024-12-16 12:58:41.170082] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:15.283 [2024-12-16 12:58:41.170480] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.283 [2024-12-16 12:58:41.170504] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.283 [2024-12-16 12:58:41.170518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.283 [2024-12-16 12:58:41.176750] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.283 [2024-12-16 12:58:41.184146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.283 [2024-12-16 12:58:41.184603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.283 [2024-12-16 12:58:41.184624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:15.283 [2024-12-16 12:58:41.184634] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:15.283 [2024-12-16 12:58:41.184887] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:15.283 [2024-12-16 12:58:41.185154] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.283 [2024-12-16 12:58:41.185167] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.283 [2024-12-16 12:58:41.185177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.283 [2024-12-16 12:58:41.189232] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.283 [2024-12-16 12:58:41.197224] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.283 [2024-12-16 12:58:41.197652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.283 [2024-12-16 12:58:41.197668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:15.283 [2024-12-16 12:58:41.197675] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:15.283 [2024-12-16 12:58:41.197846] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:15.283 [2024-12-16 12:58:41.198018] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.283 [2024-12-16 12:58:41.198026] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.283 [2024-12-16 12:58:41.198032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.283 [2024-12-16 12:58:41.200791] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.283 [2024-12-16 12:58:41.209949] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.283 [2024-12-16 12:58:41.210363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.283 [2024-12-16 12:58:41.210380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:15.283 [2024-12-16 12:58:41.210386] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:15.283 [2024-12-16 12:58:41.210553] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:15.283 [2024-12-16 12:58:41.210719] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.283 [2024-12-16 12:58:41.210727] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.283 [2024-12-16 12:58:41.210734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.283 [2024-12-16 12:58:41.213415] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.283 [2024-12-16 12:58:41.222737] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.283 [2024-12-16 12:58:41.223146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.283 [2024-12-16 12:58:41.223191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:15.283 [2024-12-16 12:58:41.223215] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:15.283 [2024-12-16 12:58:41.223792] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:15.283 [2024-12-16 12:58:41.224226] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.283 [2024-12-16 12:58:41.224234] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.283 [2024-12-16 12:58:41.224241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.283 [2024-12-16 12:58:41.226836] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.283 [2024-12-16 12:58:41.235443] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.283 [2024-12-16 12:58:41.235878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.283 [2024-12-16 12:58:41.235921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:15.283 [2024-12-16 12:58:41.235943] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:15.283 [2024-12-16 12:58:41.236536] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:15.283 [2024-12-16 12:58:41.237049] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.284 [2024-12-16 12:58:41.237057] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.284 [2024-12-16 12:58:41.237064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.284 [2024-12-16 12:58:41.239662] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.284 [2024-12-16 12:58:41.248214] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.284 [2024-12-16 12:58:41.248628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.284 [2024-12-16 12:58:41.248643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:15.284 [2024-12-16 12:58:41.248649] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:15.284 [2024-12-16 12:58:41.248807] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:15.284 [2024-12-16 12:58:41.248964] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.284 [2024-12-16 12:58:41.248972] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.284 [2024-12-16 12:58:41.248978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.284 [2024-12-16 12:58:41.251591] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.284 [2024-12-16 12:58:41.261047] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.284 [2024-12-16 12:58:41.261395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.284 [2024-12-16 12:58:41.261411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:15.284 [2024-12-16 12:58:41.261419] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:15.284 [2024-12-16 12:58:41.261588] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:15.284 [2024-12-16 12:58:41.261754] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.284 [2024-12-16 12:58:41.261762] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.284 [2024-12-16 12:58:41.261768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.284 [2024-12-16 12:58:41.264414] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.284 [2024-12-16 12:58:41.273853] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.284 [2024-12-16 12:58:41.274291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.284 [2024-12-16 12:58:41.274307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:15.284 [2024-12-16 12:58:41.274314] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:15.284 [2024-12-16 12:58:41.274481] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:15.284 [2024-12-16 12:58:41.274647] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.284 [2024-12-16 12:58:41.274655] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.284 [2024-12-16 12:58:41.274661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.284 [2024-12-16 12:58:41.277266] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.284 [2024-12-16 12:58:41.286607] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.284 [2024-12-16 12:58:41.287025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.284 [2024-12-16 12:58:41.287041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:15.284 [2024-12-16 12:58:41.287048] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:15.284 [2024-12-16 12:58:41.287232] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:15.284 [2024-12-16 12:58:41.287399] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.284 [2024-12-16 12:58:41.287408] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.284 [2024-12-16 12:58:41.287414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.284 [2024-12-16 12:58:41.290013] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.284 [2024-12-16 12:58:41.299355] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.284 [2024-12-16 12:58:41.299800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.284 [2024-12-16 12:58:41.299817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:15.284 [2024-12-16 12:58:41.299824] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:15.284 [2024-12-16 12:58:41.299995] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:15.284 [2024-12-16 12:58:41.300173] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.284 [2024-12-16 12:58:41.300181] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.284 [2024-12-16 12:58:41.300192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.284 [2024-12-16 12:58:41.302927] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.284 [2024-12-16 12:58:41.312301] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.284 [2024-12-16 12:58:41.312719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.284 [2024-12-16 12:58:41.312735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:15.284 [2024-12-16 12:58:41.312743] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:15.284 [2024-12-16 12:58:41.312914] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:15.284 [2024-12-16 12:58:41.313085] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.284 [2024-12-16 12:58:41.313093] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.284 [2024-12-16 12:58:41.313100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.284 [2024-12-16 12:58:41.315837] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.284 [2024-12-16 12:58:41.325392] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.284 [2024-12-16 12:58:41.325824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.284 [2024-12-16 12:58:41.325839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:15.284 [2024-12-16 12:58:41.325847] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:15.284 [2024-12-16 12:58:41.326018] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:15.284 [2024-12-16 12:58:41.326198] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.284 [2024-12-16 12:58:41.326207] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.284 [2024-12-16 12:58:41.326213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.284 [2024-12-16 12:58:41.328915] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.284 [2024-12-16 12:58:41.338299] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.284 [2024-12-16 12:58:41.338703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.284 [2024-12-16 12:58:41.338719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:15.284 [2024-12-16 12:58:41.338726] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:15.284 [2024-12-16 12:58:41.338893] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:15.284 [2024-12-16 12:58:41.339059] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.284 [2024-12-16 12:58:41.339068] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.284 [2024-12-16 12:58:41.339074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.284 [2024-12-16 12:58:41.341802] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.545 [2024-12-16 12:58:41.351213] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.546 [2024-12-16 12:58:41.351552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.546 [2024-12-16 12:58:41.351567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:15.546 [2024-12-16 12:58:41.351574] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:15.546 [2024-12-16 12:58:41.351741] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:15.546 [2024-12-16 12:58:41.351907] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.546 [2024-12-16 12:58:41.351916] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.546 [2024-12-16 12:58:41.351922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.546 [2024-12-16 12:58:41.354572] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.546 [2024-12-16 12:58:41.364044] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.546 [2024-12-16 12:58:41.364406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.546 [2024-12-16 12:58:41.364422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:15.546 [2024-12-16 12:58:41.364429] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:15.546 [2024-12-16 12:58:41.364595] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:15.546 [2024-12-16 12:58:41.364761] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.546 [2024-12-16 12:58:41.364770] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.546 [2024-12-16 12:58:41.364776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.546 [2024-12-16 12:58:41.367382] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.546 [2024-12-16 12:58:41.376936] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.546 [2024-12-16 12:58:41.377351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.546 [2024-12-16 12:58:41.377367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:15.546 [2024-12-16 12:58:41.377374] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:15.546 [2024-12-16 12:58:41.377541] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:15.546 [2024-12-16 12:58:41.377707] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.546 [2024-12-16 12:58:41.377714] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.546 [2024-12-16 12:58:41.377721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.546 [2024-12-16 12:58:41.380379] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.546 [2024-12-16 12:58:41.389709] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.546 [2024-12-16 12:58:41.390145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.546 [2024-12-16 12:58:41.390191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:15.546 [2024-12-16 12:58:41.390214] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:15.546 [2024-12-16 12:58:41.390793] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:15.546 [2024-12-16 12:58:41.391395] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.546 [2024-12-16 12:58:41.391422] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.546 [2024-12-16 12:58:41.391448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.546 [2024-12-16 12:58:41.394049] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.546 [2024-12-16 12:58:41.402482] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.546 [2024-12-16 12:58:41.402946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.546 [2024-12-16 12:58:41.402990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:15.546 [2024-12-16 12:58:41.403014] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:15.546 [2024-12-16 12:58:41.403543] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:15.546 [2024-12-16 12:58:41.403711] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.546 [2024-12-16 12:58:41.403719] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.546 [2024-12-16 12:58:41.403725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.546 [2024-12-16 12:58:41.406327] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.546 [2024-12-16 12:58:41.415189] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.546 [2024-12-16 12:58:41.415643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.546 [2024-12-16 12:58:41.415686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:15.546 [2024-12-16 12:58:41.415709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:15.546 [2024-12-16 12:58:41.416306] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:15.546 [2024-12-16 12:58:41.416465] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.546 [2024-12-16 12:58:41.416473] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.546 [2024-12-16 12:58:41.416479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.546 [2024-12-16 12:58:41.418996] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.546 [2024-12-16 12:58:41.428005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.546 [2024-12-16 12:58:41.428365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.546 [2024-12-16 12:58:41.428381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:15.546 [2024-12-16 12:58:41.428388] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:15.546 [2024-12-16 12:58:41.428555] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:15.546 [2024-12-16 12:58:41.428720] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.546 [2024-12-16 12:58:41.428729] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.546 [2024-12-16 12:58:41.428735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.546 [2024-12-16 12:58:41.431336] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.546 [2024-12-16 12:58:41.440758] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.546 [2024-12-16 12:58:41.441119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.546 [2024-12-16 12:58:41.441135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:15.546 [2024-12-16 12:58:41.441143] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:15.546 [2024-12-16 12:58:41.441309] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:15.546 [2024-12-16 12:58:41.441475] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.546 [2024-12-16 12:58:41.441483] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.546 [2024-12-16 12:58:41.441489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.546 [2024-12-16 12:58:41.444087] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.546 [2024-12-16 12:58:41.453536] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.546 [2024-12-16 12:58:41.453859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.546 [2024-12-16 12:58:41.453874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:15.546 [2024-12-16 12:58:41.453881] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:15.546 [2024-12-16 12:58:41.454039] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:15.546 [2024-12-16 12:58:41.454222] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.546 [2024-12-16 12:58:41.454230] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.547 [2024-12-16 12:58:41.454237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.547 [2024-12-16 12:58:41.456831] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.547 [2024-12-16 12:58:41.466323] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.547 [2024-12-16 12:58:41.466739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.547 [2024-12-16 12:58:41.466783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:15.547 [2024-12-16 12:58:41.466806] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:15.547 [2024-12-16 12:58:41.467399] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:15.547 [2024-12-16 12:58:41.467838] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.547 [2024-12-16 12:58:41.467846] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.547 [2024-12-16 12:58:41.467851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.547 [2024-12-16 12:58:41.470368] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.547 [2024-12-16 12:58:41.479121] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.547 [2024-12-16 12:58:41.479485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.547 [2024-12-16 12:58:41.479501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:15.547 [2024-12-16 12:58:41.479510] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:15.547 [2024-12-16 12:58:41.479680] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:15.547 [2024-12-16 12:58:41.479847] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.547 [2024-12-16 12:58:41.479855] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.547 [2024-12-16 12:58:41.479861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.547 [2024-12-16 12:58:41.482409] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.547 [2024-12-16 12:58:41.491945] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.547 [2024-12-16 12:58:41.492359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.547 [2024-12-16 12:58:41.492375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:15.547 [2024-12-16 12:58:41.492382] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:15.547 [2024-12-16 12:58:41.492548] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:15.547 [2024-12-16 12:58:41.492715] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.547 [2024-12-16 12:58:41.492723] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.547 [2024-12-16 12:58:41.492729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.547 [2024-12-16 12:58:41.495338] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.547 [2024-12-16 12:58:41.504655] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.547 [2024-12-16 12:58:41.505047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.547 [2024-12-16 12:58:41.505062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:15.547 [2024-12-16 12:58:41.505068] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:15.547 [2024-12-16 12:58:41.505252] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:15.547 [2024-12-16 12:58:41.505423] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.547 [2024-12-16 12:58:41.505431] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.547 [2024-12-16 12:58:41.505437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.547 9660.67 IOPS, 37.74 MiB/s [2024-12-16T11:58:41.614Z] [2024-12-16 12:58:41.509194] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.547 [2024-12-16 12:58:41.517353] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.547 [2024-12-16 12:58:41.517761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.547 [2024-12-16 12:58:41.517775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:15.547 [2024-12-16 12:58:41.517782] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:15.547 [2024-12-16 12:58:41.517939] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:15.547 [2024-12-16 12:58:41.518100] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.547 [2024-12-16 12:58:41.518108] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.547 [2024-12-16 12:58:41.518121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.547 [2024-12-16 12:58:41.520731] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.547 [2024-12-16 12:58:41.530069] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.547 [2024-12-16 12:58:41.530394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.547 [2024-12-16 12:58:41.530410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:15.547 [2024-12-16 12:58:41.530417] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:15.547 [2024-12-16 12:58:41.530574] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:15.547 [2024-12-16 12:58:41.530731] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.547 [2024-12-16 12:58:41.530739] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.547 [2024-12-16 12:58:41.530745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.547 [2024-12-16 12:58:41.533259] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.547 [2024-12-16 12:58:41.542893] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.547 [2024-12-16 12:58:41.543314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.547 [2024-12-16 12:58:41.543330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:15.547 [2024-12-16 12:58:41.543336] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:15.547 [2024-12-16 12:58:41.543494] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:15.547 [2024-12-16 12:58:41.543652] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.547 [2024-12-16 12:58:41.543660] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.547 [2024-12-16 12:58:41.543666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.547 [2024-12-16 12:58:41.546260] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.547 [2024-12-16 12:58:41.555701] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.547 [2024-12-16 12:58:41.556133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.547 [2024-12-16 12:58:41.556149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:15.547 [2024-12-16 12:58:41.556157] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:15.548 [2024-12-16 12:58:41.556323] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:15.548 [2024-12-16 12:58:41.556489] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.548 [2024-12-16 12:58:41.556497] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.548 [2024-12-16 12:58:41.556504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.548 [2024-12-16 12:58:41.559245] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.548 [2024-12-16 12:58:41.568730] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.548 [2024-12-16 12:58:41.569165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.548 [2024-12-16 12:58:41.569181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:15.548 [2024-12-16 12:58:41.569188] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:15.548 [2024-12-16 12:58:41.569369] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:15.548 [2024-12-16 12:58:41.569536] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.548 [2024-12-16 12:58:41.569543] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.548 [2024-12-16 12:58:41.569550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.548 [2024-12-16 12:58:41.572215] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.548 [2024-12-16 12:58:41.581603] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.548 [2024-12-16 12:58:41.582043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.548 [2024-12-16 12:58:41.582081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:15.548 [2024-12-16 12:58:41.582106] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:15.548 [2024-12-16 12:58:41.582701] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:15.548 [2024-12-16 12:58:41.582931] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.548 [2024-12-16 12:58:41.582939] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.548 [2024-12-16 12:58:41.582945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.548 [2024-12-16 12:58:41.585586] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.548 [2024-12-16 12:58:41.594406] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.548 [2024-12-16 12:58:41.594823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.548 [2024-12-16 12:58:41.594839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:15.548 [2024-12-16 12:58:41.594846] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:15.548 [2024-12-16 12:58:41.595012] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:15.548 [2024-12-16 12:58:41.595185] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.548 [2024-12-16 12:58:41.595194] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.548 [2024-12-16 12:58:41.595200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.548 [2024-12-16 12:58:41.597797] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.548 [2024-12-16 12:58:41.607307] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.548 [2024-12-16 12:58:41.607660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.548 [2024-12-16 12:58:41.607676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:15.548 [2024-12-16 12:58:41.607686] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:15.548 [2024-12-16 12:58:41.607854] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:15.548 [2024-12-16 12:58:41.608020] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.548 [2024-12-16 12:58:41.608028] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.548 [2024-12-16 12:58:41.608035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.809 [2024-12-16 12:58:41.610721] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.809 [2024-12-16 12:58:41.620181] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.809 [2024-12-16 12:58:41.620592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.809 [2024-12-16 12:58:41.620608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:15.809 [2024-12-16 12:58:41.620615] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:15.809 [2024-12-16 12:58:41.620781] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:15.809 [2024-12-16 12:58:41.620948] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.809 [2024-12-16 12:58:41.620956] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.809 [2024-12-16 12:58:41.620962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.809 [2024-12-16 12:58:41.623570] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.809 [2024-12-16 12:58:41.633105] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.809 [2024-12-16 12:58:41.633368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.809 [2024-12-16 12:58:41.633383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:15.809 [2024-12-16 12:58:41.633390] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:15.809 [2024-12-16 12:58:41.633557] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:15.809 [2024-12-16 12:58:41.633724] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.810 [2024-12-16 12:58:41.633733] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.810 [2024-12-16 12:58:41.633740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.810 [2024-12-16 12:58:41.636345] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.810 [2024-12-16 12:58:41.645962] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.810 [2024-12-16 12:58:41.646333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.810 [2024-12-16 12:58:41.646349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:15.810 [2024-12-16 12:58:41.646356] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:15.810 [2024-12-16 12:58:41.646523] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:15.810 [2024-12-16 12:58:41.646689] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.810 [2024-12-16 12:58:41.646701] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.810 [2024-12-16 12:58:41.646707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.810 [2024-12-16 12:58:41.649406] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.810 [2024-12-16 12:58:41.658933] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.810 [2024-12-16 12:58:41.659278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.810 [2024-12-16 12:58:41.659294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:15.810 [2024-12-16 12:58:41.659302] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:15.810 [2024-12-16 12:58:41.659473] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:15.810 [2024-12-16 12:58:41.659644] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.810 [2024-12-16 12:58:41.659653] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.810 [2024-12-16 12:58:41.659659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.810 [2024-12-16 12:58:41.662405] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.810 [2024-12-16 12:58:41.671681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.810 [2024-12-16 12:58:41.672017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.810 [2024-12-16 12:58:41.672032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:15.810 [2024-12-16 12:58:41.672039] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:15.810 [2024-12-16 12:58:41.672210] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:15.810 [2024-12-16 12:58:41.672377] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.810 [2024-12-16 12:58:41.672386] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.810 [2024-12-16 12:58:41.672392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.810 [2024-12-16 12:58:41.675034] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.810 [2024-12-16 12:58:41.684443] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.810 [2024-12-16 12:58:41.684792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.810 [2024-12-16 12:58:41.684808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:15.810 [2024-12-16 12:58:41.684815] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:15.810 [2024-12-16 12:58:41.684981] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:15.810 [2024-12-16 12:58:41.685153] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.810 [2024-12-16 12:58:41.685162] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.810 [2024-12-16 12:58:41.685168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.810 [2024-12-16 12:58:41.687778] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.810 [2024-12-16 12:58:41.697229] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.810 [2024-12-16 12:58:41.697598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.810 [2024-12-16 12:58:41.697613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:15.810 [2024-12-16 12:58:41.697621] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:15.810 [2024-12-16 12:58:41.697787] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:15.810 [2024-12-16 12:58:41.697954] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.810 [2024-12-16 12:58:41.697962] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.810 [2024-12-16 12:58:41.697968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.810 [2024-12-16 12:58:41.700583] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.810 [2024-12-16 12:58:41.710061] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.810 [2024-12-16 12:58:41.710363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.810 [2024-12-16 12:58:41.710408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:15.810 [2024-12-16 12:58:41.710431] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:15.810 [2024-12-16 12:58:41.711009] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:15.810 [2024-12-16 12:58:41.711227] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.810 [2024-12-16 12:58:41.711237] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.810 [2024-12-16 12:58:41.711243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.810 [2024-12-16 12:58:41.713835] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:15.810 [2024-12-16 12:58:41.722781] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.810 [2024-12-16 12:58:41.723123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.810 [2024-12-16 12:58:41.723139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:15.810 [2024-12-16 12:58:41.723146] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:15.810 [2024-12-16 12:58:41.723312] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:15.810 [2024-12-16 12:58:41.723478] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.810 [2024-12-16 12:58:41.723486] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.810 [2024-12-16 12:58:41.723492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.810 [2024-12-16 12:58:41.726092] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.810 [2024-12-16 12:58:41.735572] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.810 [2024-12-16 12:58:41.735932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.810 [2024-12-16 12:58:41.735948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:15.810 [2024-12-16 12:58:41.735955] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:15.810 [2024-12-16 12:58:41.736131] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:15.810 [2024-12-16 12:58:41.736297] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.810 [2024-12-16 12:58:41.736306] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.810 [2024-12-16 12:58:41.736312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.810 [2024-12-16 12:58:41.738907] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:15.810 [2024-12-16 12:58:41.748349] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.810 [2024-12-16 12:58:41.748644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.810 [2024-12-16 12:58:41.748660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:15.810 [2024-12-16 12:58:41.748667] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:15.810 [2024-12-16 12:58:41.748833] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:15.810 [2024-12-16 12:58:41.748999] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.810 [2024-12-16 12:58:41.749007] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.810 [2024-12-16 12:58:41.749013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.810 [2024-12-16 12:58:41.751617] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.810 [2024-12-16 12:58:41.761161] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.810 [2024-12-16 12:58:41.761447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.810 [2024-12-16 12:58:41.761463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:15.810 [2024-12-16 12:58:41.761470] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:15.810 [2024-12-16 12:58:41.761636] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:15.810 [2024-12-16 12:58:41.761802] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.810 [2024-12-16 12:58:41.761810] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.810 [2024-12-16 12:58:41.761816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.810 [2024-12-16 12:58:41.764419] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:15.810 [2024-12-16 12:58:41.773903] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.810 [2024-12-16 12:58:41.774361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.811 [2024-12-16 12:58:41.774378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:15.811 [2024-12-16 12:58:41.774385] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:15.811 [2024-12-16 12:58:41.774551] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:15.811 [2024-12-16 12:58:41.774717] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.811 [2024-12-16 12:58:41.774725] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.811 [2024-12-16 12:58:41.774734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.811 [2024-12-16 12:58:41.777337] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.811 [2024-12-16 12:58:41.786758] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.811 [2024-12-16 12:58:41.787123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.811 [2024-12-16 12:58:41.787140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:15.811 [2024-12-16 12:58:41.787147] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:15.811 [2024-12-16 12:58:41.787314] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:15.811 [2024-12-16 12:58:41.787480] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.811 [2024-12-16 12:58:41.787488] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.811 [2024-12-16 12:58:41.787494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.811 [2024-12-16 12:58:41.790096] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:15.811 [2024-12-16 12:58:41.799629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.811 [2024-12-16 12:58:41.799978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.811 [2024-12-16 12:58:41.799994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:15.811 [2024-12-16 12:58:41.800001] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:15.811 [2024-12-16 12:58:41.800173] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:15.811 [2024-12-16 12:58:41.800347] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.811 [2024-12-16 12:58:41.800356] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.811 [2024-12-16 12:58:41.800363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.811 [2024-12-16 12:58:41.802968] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.811 [2024-12-16 12:58:41.812454] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.811 [2024-12-16 12:58:41.812808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.811 [2024-12-16 12:58:41.812825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:15.811 [2024-12-16 12:58:41.812832] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:15.811 [2024-12-16 12:58:41.813005] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:15.811 [2024-12-16 12:58:41.813185] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.811 [2024-12-16 12:58:41.813194] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.811 [2024-12-16 12:58:41.813200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.811 [2024-12-16 12:58:41.815941] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:15.811 [2024-12-16 12:58:41.825510] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.811 [2024-12-16 12:58:41.825862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.811 [2024-12-16 12:58:41.825878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:15.811 [2024-12-16 12:58:41.825885] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:15.811 [2024-12-16 12:58:41.826056] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:15.811 [2024-12-16 12:58:41.826233] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.811 [2024-12-16 12:58:41.826241] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.811 [2024-12-16 12:58:41.826248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.811 [2024-12-16 12:58:41.828960] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.811 [2024-12-16 12:58:41.838511] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.811 [2024-12-16 12:58:41.838904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.811 [2024-12-16 12:58:41.838920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:15.811 [2024-12-16 12:58:41.838927] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:15.811 [2024-12-16 12:58:41.839093] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:15.811 [2024-12-16 12:58:41.839267] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.811 [2024-12-16 12:58:41.839276] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.811 [2024-12-16 12:58:41.839282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.811 [2024-12-16 12:58:41.841918] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:15.811 [2024-12-16 12:58:41.851304] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.811 [2024-12-16 12:58:41.851697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.811 [2024-12-16 12:58:41.851713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:15.811 [2024-12-16 12:58:41.851720] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:15.811 [2024-12-16 12:58:41.851886] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:15.811 [2024-12-16 12:58:41.852053] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.811 [2024-12-16 12:58:41.852061] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.811 [2024-12-16 12:58:41.852068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.811 [2024-12-16 12:58:41.854670] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.811 [2024-12-16 12:58:41.864144] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.811 [2024-12-16 12:58:41.864484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.811 [2024-12-16 12:58:41.864500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:15.811 [2024-12-16 12:58:41.864507] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:15.811 [2024-12-16 12:58:41.864674] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:15.811 [2024-12-16 12:58:41.864846] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.811 [2024-12-16 12:58:41.864854] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.811 [2024-12-16 12:58:41.864860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.811 [2024-12-16 12:58:41.867500] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:16.072 [2024-12-16 12:58:41.877118] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.072 [2024-12-16 12:58:41.877492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.072 [2024-12-16 12:58:41.877507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:16.072 [2024-12-16 12:58:41.877514] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:16.072 [2024-12-16 12:58:41.877681] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:16.072 [2024-12-16 12:58:41.877863] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.072 [2024-12-16 12:58:41.877871] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.072 [2024-12-16 12:58:41.877877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.072 [2024-12-16 12:58:41.880622] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.072 [2024-12-16 12:58:41.889935] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.072 [2024-12-16 12:58:41.890301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.072 [2024-12-16 12:58:41.890318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:16.072 [2024-12-16 12:58:41.890325] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:16.072 [2024-12-16 12:58:41.890491] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:16.072 [2024-12-16 12:58:41.890658] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.072 [2024-12-16 12:58:41.890666] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.072 [2024-12-16 12:58:41.890672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.072 [2024-12-16 12:58:41.893295] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:16.072 [2024-12-16 12:58:41.902771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.072 [2024-12-16 12:58:41.903065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.072 [2024-12-16 12:58:41.903081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:16.072 [2024-12-16 12:58:41.903088] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:16.072 [2024-12-16 12:58:41.903260] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:16.072 [2024-12-16 12:58:41.903427] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.072 [2024-12-16 12:58:41.903435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.072 [2024-12-16 12:58:41.903441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.072 [2024-12-16 12:58:41.906042] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.072 [2024-12-16 12:58:41.915488] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.072 [2024-12-16 12:58:41.915847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.072 [2024-12-16 12:58:41.915882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:16.072 [2024-12-16 12:58:41.915907] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:16.072 [2024-12-16 12:58:41.916498] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:16.072 [2024-12-16 12:58:41.916691] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.072 [2024-12-16 12:58:41.916699] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.072 [2024-12-16 12:58:41.916705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.072 [2024-12-16 12:58:41.919306] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:16.072 [2024-12-16 12:58:41.928333] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.072 [2024-12-16 12:58:41.928753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.072 [2024-12-16 12:58:41.928769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:16.072 [2024-12-16 12:58:41.928776] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:16.072 [2024-12-16 12:58:41.928942] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:16.072 [2024-12-16 12:58:41.929108] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.072 [2024-12-16 12:58:41.929122] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.072 [2024-12-16 12:58:41.929129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.073 [2024-12-16 12:58:41.931730] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.073 [2024-12-16 12:58:41.941036] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.073 [2024-12-16 12:58:41.941393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.073 [2024-12-16 12:58:41.941409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:16.073 [2024-12-16 12:58:41.941416] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:16.073 [2024-12-16 12:58:41.941584] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:16.073 [2024-12-16 12:58:41.941749] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.073 [2024-12-16 12:58:41.941757] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.073 [2024-12-16 12:58:41.941763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.073 [2024-12-16 12:58:41.944368] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:16.073 [2024-12-16 12:58:41.953825] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.073 [2024-12-16 12:58:41.954121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.073 [2024-12-16 12:58:41.954139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:16.073 [2024-12-16 12:58:41.954149] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:16.073 [2024-12-16 12:58:41.954316] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:16.073 [2024-12-16 12:58:41.954482] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.073 [2024-12-16 12:58:41.954490] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.073 [2024-12-16 12:58:41.954496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.073 [2024-12-16 12:58:41.957094] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.073 [2024-12-16 12:58:41.966602] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.073 [2024-12-16 12:58:41.966879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.073 [2024-12-16 12:58:41.966895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:16.073 [2024-12-16 12:58:41.966902] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:16.073 [2024-12-16 12:58:41.967068] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:16.073 [2024-12-16 12:58:41.967239] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.073 [2024-12-16 12:58:41.967248] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.073 [2024-12-16 12:58:41.967254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.073 [2024-12-16 12:58:41.969853] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:16.073 [2024-12-16 12:58:41.979415] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.073 [2024-12-16 12:58:41.979732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.073 [2024-12-16 12:58:41.979748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:16.073 [2024-12-16 12:58:41.979755] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:16.073 [2024-12-16 12:58:41.979924] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:16.073 [2024-12-16 12:58:41.980091] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.073 [2024-12-16 12:58:41.980099] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.073 [2024-12-16 12:58:41.980106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.073 [2024-12-16 12:58:41.982707] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.073 [2024-12-16 12:58:41.992321] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.073 [2024-12-16 12:58:41.992787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.073 [2024-12-16 12:58:41.992803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:16.073 [2024-12-16 12:58:41.992810] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:16.073 [2024-12-16 12:58:41.992976] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:16.073 [2024-12-16 12:58:41.993152] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.073 [2024-12-16 12:58:41.993161] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.073 [2024-12-16 12:58:41.993167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.073 [2024-12-16 12:58:41.995805] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:16.073 [2024-12-16 12:58:42.005101] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.073 [2024-12-16 12:58:42.005453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.073 [2024-12-16 12:58:42.005498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:16.073 [2024-12-16 12:58:42.005522] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:16.073 [2024-12-16 12:58:42.005986] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:16.073 [2024-12-16 12:58:42.006160] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.073 [2024-12-16 12:58:42.006169] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.073 [2024-12-16 12:58:42.006176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.073 [2024-12-16 12:58:42.008777] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.073 [2024-12-16 12:58:42.018046] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.073 [2024-12-16 12:58:42.018339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.073 [2024-12-16 12:58:42.018355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:16.073 [2024-12-16 12:58:42.018362] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:16.073 [2024-12-16 12:58:42.018529] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:16.073 [2024-12-16 12:58:42.018695] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.073 [2024-12-16 12:58:42.018703] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.073 [2024-12-16 12:58:42.018709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.073 [2024-12-16 12:58:42.021313] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:16.073 [2024-12-16 12:58:42.030787] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.073 [2024-12-16 12:58:42.031071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.073 [2024-12-16 12:58:42.031087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:16.073 [2024-12-16 12:58:42.031094] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:16.073 [2024-12-16 12:58:42.031267] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:16.073 [2024-12-16 12:58:42.031434] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.073 [2024-12-16 12:58:42.031442] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.073 [2024-12-16 12:58:42.031449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.073 [2024-12-16 12:58:42.034095] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.073 [2024-12-16 12:58:42.043581] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.073 [2024-12-16 12:58:42.043924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.073 [2024-12-16 12:58:42.043939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:16.073 [2024-12-16 12:58:42.043947] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:16.073 [2024-12-16 12:58:42.044118] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:16.073 [2024-12-16 12:58:42.044285] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.073 [2024-12-16 12:58:42.044294] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.073 [2024-12-16 12:58:42.044300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.073 [2024-12-16 12:58:42.046899] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:16.073 [2024-12-16 12:58:42.056374] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.073 [2024-12-16 12:58:42.056783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.073 [2024-12-16 12:58:42.056798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:16.073 [2024-12-16 12:58:42.056805] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:16.073 [2024-12-16 12:58:42.056970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:16.073 [2024-12-16 12:58:42.057142] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.073 [2024-12-16 12:58:42.057151] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.073 [2024-12-16 12:58:42.057157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.073 [2024-12-16 12:58:42.059821] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.074 [2024-12-16 12:58:42.069288] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.074 [2024-12-16 12:58:42.069720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.074 [2024-12-16 12:58:42.069737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:16.074 [2024-12-16 12:58:42.069744] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:16.074 [2024-12-16 12:58:42.069916] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:16.074 [2024-12-16 12:58:42.070088] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.074 [2024-12-16 12:58:42.070096] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.074 [2024-12-16 12:58:42.070102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.074 [2024-12-16 12:58:42.072851] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:16.074 [2024-12-16 12:58:42.082201] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.074 [2024-12-16 12:58:42.082650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.074 [2024-12-16 12:58:42.082666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:16.074 [2024-12-16 12:58:42.082676] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:16.074 [2024-12-16 12:58:42.082848] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:16.074 [2024-12-16 12:58:42.083019] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.074 [2024-12-16 12:58:42.083027] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.074 [2024-12-16 12:58:42.083034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.074 [2024-12-16 12:58:42.085728] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.074 [2024-12-16 12:58:42.095036] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.074 [2024-12-16 12:58:42.095400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.074 [2024-12-16 12:58:42.095416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:16.074 [2024-12-16 12:58:42.095423] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:16.074 [2024-12-16 12:58:42.095589] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:16.074 [2024-12-16 12:58:42.095756] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.074 [2024-12-16 12:58:42.095764] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.074 [2024-12-16 12:58:42.095770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.074 [2024-12-16 12:58:42.098373] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:16.074 [2024-12-16 12:58:42.107845] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.074 [2024-12-16 12:58:42.108195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.074 [2024-12-16 12:58:42.108211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:16.074 [2024-12-16 12:58:42.108219] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:16.074 [2024-12-16 12:58:42.108386] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:16.074 [2024-12-16 12:58:42.108552] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.074 [2024-12-16 12:58:42.108560] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.074 [2024-12-16 12:58:42.108566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.074 [2024-12-16 12:58:42.111175] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.074 [2024-12-16 12:58:42.120575] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.074 [2024-12-16 12:58:42.120984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.074 [2024-12-16 12:58:42.120999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:16.074 [2024-12-16 12:58:42.121005] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:16.074 [2024-12-16 12:58:42.121186] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:16.074 [2024-12-16 12:58:42.121353] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.074 [2024-12-16 12:58:42.121364] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.074 [2024-12-16 12:58:42.121370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.074 [2024-12-16 12:58:42.123969] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:16.074 [2024-12-16 12:58:42.133467] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.074 [2024-12-16 12:58:42.133887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.074 [2024-12-16 12:58:42.133902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:16.074 [2024-12-16 12:58:42.133909] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:16.074 [2024-12-16 12:58:42.134075] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:16.074 [2024-12-16 12:58:42.134248] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.074 [2024-12-16 12:58:42.134257] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.074 [2024-12-16 12:58:42.134263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.335 [2024-12-16 12:58:42.136923] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.335 [2024-12-16 12:58:42.146315] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.335 [2024-12-16 12:58:42.146704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.335 [2024-12-16 12:58:42.146719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:16.335 [2024-12-16 12:58:42.146727] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:16.335 [2024-12-16 12:58:42.146884] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:16.335 [2024-12-16 12:58:42.147042] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.335 [2024-12-16 12:58:42.147050] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.335 [2024-12-16 12:58:42.147056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.335 [2024-12-16 12:58:42.149675] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:16.335 [2024-12-16 12:58:42.159134] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.335 [2024-12-16 12:58:42.159552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.335 [2024-12-16 12:58:42.159567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:16.335 [2024-12-16 12:58:42.159574] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:16.335 [2024-12-16 12:58:42.159732] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:16.335 [2024-12-16 12:58:42.159889] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.335 [2024-12-16 12:58:42.159896] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.335 [2024-12-16 12:58:42.159902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.335 [2024-12-16 12:58:42.162426] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.335 [2024-12-16 12:58:42.171933] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.335 [2024-12-16 12:58:42.172329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.335 [2024-12-16 12:58:42.172345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:16.335 [2024-12-16 12:58:42.172353] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:16.335 [2024-12-16 12:58:42.172519] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:16.335 [2024-12-16 12:58:42.172686] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.335 [2024-12-16 12:58:42.172694] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.335 [2024-12-16 12:58:42.172700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.335 [2024-12-16 12:58:42.175323] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:16.335 [2024-12-16 12:58:42.184775] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.335 [2024-12-16 12:58:42.185203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.335 [2024-12-16 12:58:42.185247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:16.335 [2024-12-16 12:58:42.185270] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:16.335 [2024-12-16 12:58:42.185847] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:16.335 [2024-12-16 12:58:42.186453] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.335 [2024-12-16 12:58:42.186480] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.336 [2024-12-16 12:58:42.186501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.336 [2024-12-16 12:58:42.189156] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.336 [2024-12-16 12:58:42.197510] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.336 [2024-12-16 12:58:42.197924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.336 [2024-12-16 12:58:42.197939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:16.336 [2024-12-16 12:58:42.197945] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:16.336 [2024-12-16 12:58:42.198103] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:16.336 [2024-12-16 12:58:42.198290] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.336 [2024-12-16 12:58:42.198298] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.336 [2024-12-16 12:58:42.198304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.336 [2024-12-16 12:58:42.200907] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:16.336 [2024-12-16 12:58:42.210279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.336 [2024-12-16 12:58:42.210707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.336 [2024-12-16 12:58:42.210750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:16.336 [2024-12-16 12:58:42.210773] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:16.336 [2024-12-16 12:58:42.211252] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:16.336 [2024-12-16 12:58:42.211411] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.336 [2024-12-16 12:58:42.211419] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.336 [2024-12-16 12:58:42.211425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.336 [2024-12-16 12:58:42.213944] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.336 [2024-12-16 12:58:42.223096] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.336 [2024-12-16 12:58:42.223512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.336 [2024-12-16 12:58:42.223528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:16.336 [2024-12-16 12:58:42.223534] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:16.336 [2024-12-16 12:58:42.223692] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:16.336 [2024-12-16 12:58:42.223850] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.336 [2024-12-16 12:58:42.223857] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.336 [2024-12-16 12:58:42.223863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.336 [2024-12-16 12:58:42.226386] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:16.336 [2024-12-16 12:58:42.235888] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.336 [2024-12-16 12:58:42.236307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.336 [2024-12-16 12:58:42.236324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:16.336 [2024-12-16 12:58:42.236331] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:16.336 [2024-12-16 12:58:42.236501] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:16.336 [2024-12-16 12:58:42.236658] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.336 [2024-12-16 12:58:42.236666] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.336 [2024-12-16 12:58:42.236672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.336 [2024-12-16 12:58:42.239269] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.336 [2024-12-16 12:58:42.248739] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.336 [2024-12-16 12:58:42.249177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.336 [2024-12-16 12:58:42.249193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:16.336 [2024-12-16 12:58:42.249200] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:16.336 [2024-12-16 12:58:42.249367] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:16.336 [2024-12-16 12:58:42.249533] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.336 [2024-12-16 12:58:42.249541] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.336 [2024-12-16 12:58:42.249551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.336 [2024-12-16 12:58:42.252162] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:16.336 [2024-12-16 12:58:42.261554] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.336 [2024-12-16 12:58:42.261966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.336 [2024-12-16 12:58:42.261981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:16.336 [2024-12-16 12:58:42.261988] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:16.336 [2024-12-16 12:58:42.262167] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:16.336 [2024-12-16 12:58:42.262334] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.336 [2024-12-16 12:58:42.262342] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.336 [2024-12-16 12:58:42.262349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.336 [2024-12-16 12:58:42.264950] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.336 [2024-12-16 12:58:42.274262] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.336 [2024-12-16 12:58:42.274668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.336 [2024-12-16 12:58:42.274683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:16.336 [2024-12-16 12:58:42.274690] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:16.336 [2024-12-16 12:58:42.274848] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:16.336 [2024-12-16 12:58:42.275005] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.336 [2024-12-16 12:58:42.275013] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.336 [2024-12-16 12:58:42.275019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.336 [2024-12-16 12:58:42.277647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.336 [2024-12-16 12:58:42.287066] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.336 [2024-12-16 12:58:42.287409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.336 [2024-12-16 12:58:42.287454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:16.336 [2024-12-16 12:58:42.287478] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:16.336 [2024-12-16 12:58:42.288057] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:16.336 [2024-12-16 12:58:42.288655] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.336 [2024-12-16 12:58:42.288682] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.336 [2024-12-16 12:58:42.288689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.336 [2024-12-16 12:58:42.291316] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.336 [2024-12-16 12:58:42.299775] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.336 [2024-12-16 12:58:42.300197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.336 [2024-12-16 12:58:42.300251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:16.336 [2024-12-16 12:58:42.300275] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:16.336 [2024-12-16 12:58:42.300509] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:16.336 [2024-12-16 12:58:42.300667] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.336 [2024-12-16 12:58:42.300675] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.336 [2024-12-16 12:58:42.300680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.336 [2024-12-16 12:58:42.303287] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.336 [2024-12-16 12:58:42.312508] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.336 [2024-12-16 12:58:42.312932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.336 [2024-12-16 12:58:42.312976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:16.336 [2024-12-16 12:58:42.312999] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:16.336 [2024-12-16 12:58:42.313584] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:16.336 [2024-12-16 12:58:42.313937] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.336 [2024-12-16 12:58:42.313954] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.336 [2024-12-16 12:58:42.313969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.336 [2024-12-16 12:58:42.320194] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.336 [2024-12-16 12:58:42.327471] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.336 [2024-12-16 12:58:42.327989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.337 [2024-12-16 12:58:42.328010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:16.337 [2024-12-16 12:58:42.328021] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:16.337 [2024-12-16 12:58:42.328279] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:16.337 [2024-12-16 12:58:42.328534] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.337 [2024-12-16 12:58:42.328545] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.337 [2024-12-16 12:58:42.328555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.337 [2024-12-16 12:58:42.332607] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.337 [2024-12-16 12:58:42.340490] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.337 [2024-12-16 12:58:42.340829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.337 [2024-12-16 12:58:42.340846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:16.337 [2024-12-16 12:58:42.340853] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:16.337 [2024-12-16 12:58:42.341025] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:16.337 [2024-12-16 12:58:42.341208] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.337 [2024-12-16 12:58:42.341217] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.337 [2024-12-16 12:58:42.341223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.337 [2024-12-16 12:58:42.343966] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.337 [2024-12-16 12:58:42.353279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.337 [2024-12-16 12:58:42.353693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.337 [2024-12-16 12:58:42.353709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:16.337 [2024-12-16 12:58:42.353716] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:16.337 [2024-12-16 12:58:42.353882] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:16.337 [2024-12-16 12:58:42.354049] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.337 [2024-12-16 12:58:42.354056] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.337 [2024-12-16 12:58:42.354063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.337 [2024-12-16 12:58:42.356664] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.337 [2024-12-16 12:58:42.366133] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.337 [2024-12-16 12:58:42.366549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.337 [2024-12-16 12:58:42.366592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:16.337 [2024-12-16 12:58:42.366616] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:16.337 [2024-12-16 12:58:42.367096] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:16.337 [2024-12-16 12:58:42.367269] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.337 [2024-12-16 12:58:42.367277] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.337 [2024-12-16 12:58:42.367283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.337 [2024-12-16 12:58:42.369888] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.337 [2024-12-16 12:58:42.379004] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.337 [2024-12-16 12:58:42.379362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.337 [2024-12-16 12:58:42.379406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:16.337 [2024-12-16 12:58:42.379428] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:16.337 [2024-12-16 12:58:42.379986] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:16.337 [2024-12-16 12:58:42.380158] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.337 [2024-12-16 12:58:42.380169] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.337 [2024-12-16 12:58:42.380178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.337 [2024-12-16 12:58:42.382850] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.337 [2024-12-16 12:58:42.391841] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.337 [2024-12-16 12:58:42.392250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.337 [2024-12-16 12:58:42.392267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:16.337 [2024-12-16 12:58:42.392274] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:16.337 [2024-12-16 12:58:42.392440] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:16.337 [2024-12-16 12:58:42.392606] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.337 [2024-12-16 12:58:42.392614] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.337 [2024-12-16 12:58:42.392620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.337 [2024-12-16 12:58:42.395290] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.598 [2024-12-16 12:58:42.404674] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.598 [2024-12-16 12:58:42.404935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.598 [2024-12-16 12:58:42.404951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:16.598 [2024-12-16 12:58:42.404959] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:16.598 [2024-12-16 12:58:42.405130] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:16.598 [2024-12-16 12:58:42.405297] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.598 [2024-12-16 12:58:42.405306] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.598 [2024-12-16 12:58:42.405313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.598 [2024-12-16 12:58:42.407924] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.598 [2024-12-16 12:58:42.417440] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.598 [2024-12-16 12:58:42.417857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.598 [2024-12-16 12:58:42.417873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:16.598 [2024-12-16 12:58:42.417880] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:16.598 [2024-12-16 12:58:42.418046] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:16.598 [2024-12-16 12:58:42.418218] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.598 [2024-12-16 12:58:42.418226] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.598 [2024-12-16 12:58:42.418232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.598 [2024-12-16 12:58:42.420854] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.598 [2024-12-16 12:58:42.430180] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.598 [2024-12-16 12:58:42.430610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.598 [2024-12-16 12:58:42.430625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:16.598 [2024-12-16 12:58:42.430636] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:16.598 [2024-12-16 12:58:42.430803] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:16.598 [2024-12-16 12:58:42.430969] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.598 [2024-12-16 12:58:42.430977] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.598 [2024-12-16 12:58:42.430983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.598 [2024-12-16 12:58:42.433654] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.598 [2024-12-16 12:58:42.442959] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.598 [2024-12-16 12:58:42.443388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.599 [2024-12-16 12:58:42.443404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:16.599 [2024-12-16 12:58:42.443411] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:16.599 [2024-12-16 12:58:42.443578] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:16.599 [2024-12-16 12:58:42.443744] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.599 [2024-12-16 12:58:42.443752] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.599 [2024-12-16 12:58:42.443759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.599 [2024-12-16 12:58:42.446369] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.599 [2024-12-16 12:58:42.455760] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.599 [2024-12-16 12:58:42.456187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.599 [2024-12-16 12:58:42.456203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:16.599 [2024-12-16 12:58:42.456210] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:16.599 [2024-12-16 12:58:42.456376] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:16.599 [2024-12-16 12:58:42.456542] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.599 [2024-12-16 12:58:42.456551] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.599 [2024-12-16 12:58:42.456557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.599 [2024-12-16 12:58:42.459221] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.599 [2024-12-16 12:58:42.468653] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.599 [2024-12-16 12:58:42.469063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.599 [2024-12-16 12:58:42.469078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:16.599 [2024-12-16 12:58:42.469085] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:16.599 [2024-12-16 12:58:42.469257] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:16.599 [2024-12-16 12:58:42.469424] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.599 [2024-12-16 12:58:42.469435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.599 [2024-12-16 12:58:42.469441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.599 [2024-12-16 12:58:42.472045] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.599 [2024-12-16 12:58:42.481457] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.599 [2024-12-16 12:58:42.481871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.599 [2024-12-16 12:58:42.481886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:16.599 [2024-12-16 12:58:42.481892] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:16.599 [2024-12-16 12:58:42.482050] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:16.599 [2024-12-16 12:58:42.482232] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.599 [2024-12-16 12:58:42.482241] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.599 [2024-12-16 12:58:42.482247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.599 [2024-12-16 12:58:42.484853] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.599 [2024-12-16 12:58:42.494284] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.599 [2024-12-16 12:58:42.494724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.599 [2024-12-16 12:58:42.494766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:16.599 [2024-12-16 12:58:42.494789] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:16.599 [2024-12-16 12:58:42.495214] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:16.599 [2024-12-16 12:58:42.495382] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.599 [2024-12-16 12:58:42.495390] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.599 [2024-12-16 12:58:42.495396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.599 [2024-12-16 12:58:42.497999] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.599 [2024-12-16 12:58:42.507027] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.599 [2024-12-16 12:58:42.507455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.599 [2024-12-16 12:58:42.507499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:16.599 [2024-12-16 12:58:42.507522] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:16.599 [2024-12-16 12:58:42.508037] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:16.599 [2024-12-16 12:58:42.508214] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.599 [2024-12-16 12:58:42.508223] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.599 [2024-12-16 12:58:42.508229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.599 7245.50 IOPS, 28.30 MiB/s [2024-12-16T11:58:42.666Z] [2024-12-16 12:58:42.511979] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.599 [2024-12-16 12:58:42.519904] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.599 [2024-12-16 12:58:42.520327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.599 [2024-12-16 12:58:42.520344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:16.599 [2024-12-16 12:58:42.520351] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:16.599 [2024-12-16 12:58:42.520517] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:16.599 [2024-12-16 12:58:42.520684] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.599 [2024-12-16 12:58:42.520692] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.599 [2024-12-16 12:58:42.520698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.599 [2024-12-16 12:58:42.523297] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
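The throughput sample interleaved above (7245.50 IOPS, 28.30 MiB/s) is internally consistent with a 4 KiB I/O size: 7245.50 x 4096 B/s is about 29.68 MB/s, which is exactly 28.30 MiB/s. The 4 KiB figure is an inference from that arithmetic, not something this excerpt states, and the sample lands mid-way through a reset attempt because the performance ticker reports on its own timer, independently of the reset loop.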
00:37:16.599 [2024-12-16 12:58:42.532615] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.599 [2024-12-16 12:58:42.533021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.599 [2024-12-16 12:58:42.533065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:16.599 [2024-12-16 12:58:42.533087] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:16.599 [2024-12-16 12:58:42.533680] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:16.599 [2024-12-16 12:58:42.534271] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.599 [2024-12-16 12:58:42.534298] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.599 [2024-12-16 12:58:42.534320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.599 [2024-12-16 12:58:42.536939] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.599 [2024-12-16 12:58:42.545366] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.599 [2024-12-16 12:58:42.545773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.599 [2024-12-16 12:58:42.545789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:16.599 [2024-12-16 12:58:42.545796] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:16.599 [2024-12-16 12:58:42.545963] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:16.599 [2024-12-16 12:58:42.546136] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.599 [2024-12-16 12:58:42.546144] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.599 [2024-12-16 12:58:42.546151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.599 [2024-12-16 12:58:42.548771] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.599 [2024-12-16 12:58:42.558076] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.599 [2024-12-16 12:58:42.558464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.599 [2024-12-16 12:58:42.558480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:16.599 [2024-12-16 12:58:42.558489] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:16.599 [2024-12-16 12:58:42.558647] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:16.599 [2024-12-16 12:58:42.558804] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.599 [2024-12-16 12:58:42.558812] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.599 [2024-12-16 12:58:42.558818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.599 [2024-12-16 12:58:42.561424] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.599 [2024-12-16 12:58:42.570918] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.599 [2024-12-16 12:58:42.571361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.599 [2024-12-16 12:58:42.571377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:16.599 [2024-12-16 12:58:42.571385] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:16.599 [2024-12-16 12:58:42.571556] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:16.599 [2024-12-16 12:58:42.571727] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.600 [2024-12-16 12:58:42.571735] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.600 [2024-12-16 12:58:42.571741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.600 [2024-12-16 12:58:42.574489] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.600 [2024-12-16 12:58:42.583856] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.600 [2024-12-16 12:58:42.584239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.600 [2024-12-16 12:58:42.584255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:16.600 [2024-12-16 12:58:42.584262] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:16.600 [2024-12-16 12:58:42.584440] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:16.600 [2024-12-16 12:58:42.584607] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.600 [2024-12-16 12:58:42.584615] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.600 [2024-12-16 12:58:42.584621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.600 [2024-12-16 12:58:42.587285] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.600 [2024-12-16 12:58:42.596816] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.600 [2024-12-16 12:58:42.597230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.600 [2024-12-16 12:58:42.597272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:16.600 [2024-12-16 12:58:42.597295] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:16.600 [2024-12-16 12:58:42.597874] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:16.600 [2024-12-16 12:58:42.598452] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.600 [2024-12-16 12:58:42.598464] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.600 [2024-12-16 12:58:42.598470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.600 [2024-12-16 12:58:42.601070] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.600 [2024-12-16 12:58:42.609649] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.600 [2024-12-16 12:58:42.610063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.600 [2024-12-16 12:58:42.610079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:16.600 [2024-12-16 12:58:42.610087] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:16.600 [2024-12-16 12:58:42.610260] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:16.600 [2024-12-16 12:58:42.610427] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.600 [2024-12-16 12:58:42.610435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.600 [2024-12-16 12:58:42.610441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.600 [2024-12-16 12:58:42.613038] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.600 [2024-12-16 12:58:42.622446] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.600 [2024-12-16 12:58:42.622857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.600 [2024-12-16 12:58:42.622872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:16.600 [2024-12-16 12:58:42.622879] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:16.600 [2024-12-16 12:58:42.623046] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:16.600 [2024-12-16 12:58:42.623219] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.600 [2024-12-16 12:58:42.623228] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.600 [2024-12-16 12:58:42.623234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.600 [2024-12-16 12:58:42.625830] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.600 [2024-12-16 12:58:42.635389] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.600 [2024-12-16 12:58:42.635797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.600 [2024-12-16 12:58:42.635812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:16.600 [2024-12-16 12:58:42.635819] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:16.600 [2024-12-16 12:58:42.635986] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:16.600 [2024-12-16 12:58:42.636160] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.600 [2024-12-16 12:58:42.636169] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.600 [2024-12-16 12:58:42.636175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.600 [2024-12-16 12:58:42.638772] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.600 [2024-12-16 12:58:42.648237] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.600 [2024-12-16 12:58:42.648634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.600 [2024-12-16 12:58:42.648675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:16.600 [2024-12-16 12:58:42.648698] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:16.600 [2024-12-16 12:58:42.649241] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:16.600 [2024-12-16 12:58:42.649400] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.600 [2024-12-16 12:58:42.649408] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.600 [2024-12-16 12:58:42.649414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.600 [2024-12-16 12:58:42.651933] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.600 [2024-12-16 12:58:42.661216] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.600 [2024-12-16 12:58:42.661631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.600 [2024-12-16 12:58:42.661647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:16.600 [2024-12-16 12:58:42.661654] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:16.600 [2024-12-16 12:58:42.661820] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:16.600 [2024-12-16 12:58:42.661986] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.600 [2024-12-16 12:58:42.661994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.600 [2024-12-16 12:58:42.662000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.861 [2024-12-16 12:58:42.664672] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.861 [2024-12-16 12:58:42.674056] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.861 [2024-12-16 12:58:42.674467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.861 [2024-12-16 12:58:42.674483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:16.861 [2024-12-16 12:58:42.674489] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:16.861 [2024-12-16 12:58:42.674656] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:16.861 [2024-12-16 12:58:42.674823] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.862 [2024-12-16 12:58:42.674831] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.862 [2024-12-16 12:58:42.674837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.862 [2024-12-16 12:58:42.677496] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.862 [2024-12-16 12:58:42.686810] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.862 [2024-12-16 12:58:42.687128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.862 [2024-12-16 12:58:42.687144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:16.862 [2024-12-16 12:58:42.687150] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:16.862 [2024-12-16 12:58:42.687311] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:16.862 [2024-12-16 12:58:42.687469] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.862 [2024-12-16 12:58:42.687477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.862 [2024-12-16 12:58:42.687483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.862 [2024-12-16 12:58:42.690011] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.862 [2024-12-16 12:58:42.699560] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.862 [2024-12-16 12:58:42.699950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.862 [2024-12-16 12:58:42.699965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:16.862 [2024-12-16 12:58:42.699972] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:16.862 [2024-12-16 12:58:42.700136] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:16.862 [2024-12-16 12:58:42.700319] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.862 [2024-12-16 12:58:42.700328] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.862 [2024-12-16 12:58:42.700334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.862 [2024-12-16 12:58:42.702935] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.862 [2024-12-16 12:58:42.712408] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.862 [2024-12-16 12:58:42.712821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.862 [2024-12-16 12:58:42.712836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:16.862 [2024-12-16 12:58:42.712843] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:16.862 [2024-12-16 12:58:42.713010] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:16.862 [2024-12-16 12:58:42.713184] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.862 [2024-12-16 12:58:42.713192] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.862 [2024-12-16 12:58:42.713199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.862 [2024-12-16 12:58:42.715834] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.862 [2024-12-16 12:58:42.725232] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.862 [2024-12-16 12:58:42.725616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.862 [2024-12-16 12:58:42.725632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:16.862 [2024-12-16 12:58:42.725639] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:16.862 [2024-12-16 12:58:42.725796] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:16.862 [2024-12-16 12:58:42.725954] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.862 [2024-12-16 12:58:42.725962] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.862 [2024-12-16 12:58:42.725970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.862 [2024-12-16 12:58:42.728586] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.862 [2024-12-16 12:58:42.738044] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.862 [2024-12-16 12:58:42.738457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.862 [2024-12-16 12:58:42.738473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:16.862 [2024-12-16 12:58:42.738480] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:16.862 [2024-12-16 12:58:42.738647] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:16.862 [2024-12-16 12:58:42.738813] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.862 [2024-12-16 12:58:42.738821] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.862 [2024-12-16 12:58:42.738827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.862 [2024-12-16 12:58:42.741432] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.862 [2024-12-16 12:58:42.750897] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.862 [2024-12-16 12:58:42.751307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.862 [2024-12-16 12:58:42.751323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:16.862 [2024-12-16 12:58:42.751330] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:16.862 [2024-12-16 12:58:42.751497] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:16.862 [2024-12-16 12:58:42.751663] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.862 [2024-12-16 12:58:42.751671] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.862 [2024-12-16 12:58:42.751677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.862 [2024-12-16 12:58:42.754284] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.862 [2024-12-16 12:58:42.763748] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.862 [2024-12-16 12:58:42.764124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.862 [2024-12-16 12:58:42.764139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:16.862 [2024-12-16 12:58:42.764146] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:16.862 [2024-12-16 12:58:42.764304] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:16.862 [2024-12-16 12:58:42.764461] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.862 [2024-12-16 12:58:42.764469] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.862 [2024-12-16 12:58:42.764475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.862 [2024-12-16 12:58:42.767060] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.862 [2024-12-16 12:58:42.776622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.862 [2024-12-16 12:58:42.777022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.862 [2024-12-16 12:58:42.777040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:16.862 [2024-12-16 12:58:42.777047] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:16.862 [2024-12-16 12:58:42.777221] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:16.862 [2024-12-16 12:58:42.777389] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.862 [2024-12-16 12:58:42.777397] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.862 [2024-12-16 12:58:42.777403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.862 [2024-12-16 12:58:42.780120] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.862 [2024-12-16 12:58:42.789378] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.862 [2024-12-16 12:58:42.789746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.862 [2024-12-16 12:58:42.789762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:16.862 [2024-12-16 12:58:42.789769] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:16.862 [2024-12-16 12:58:42.789926] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:16.862 [2024-12-16 12:58:42.790083] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.862 [2024-12-16 12:58:42.790091] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.862 [2024-12-16 12:58:42.790097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.862 [2024-12-16 12:58:42.792751] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.862 [2024-12-16 12:58:42.802209] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.862 [2024-12-16 12:58:42.802623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.862 [2024-12-16 12:58:42.802665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:16.862 [2024-12-16 12:58:42.802688] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:16.862 [2024-12-16 12:58:42.803280] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:16.862 [2024-12-16 12:58:42.803881] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.863 [2024-12-16 12:58:42.803917] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.863 [2024-12-16 12:58:42.803924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.863 [2024-12-16 12:58:42.806522] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.863 [2024-12-16 12:58:42.814998] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.863 [2024-12-16 12:58:42.815416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.863 [2024-12-16 12:58:42.815460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:16.863 [2024-12-16 12:58:42.815483] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:16.863 [2024-12-16 12:58:42.815994] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:16.863 [2024-12-16 12:58:42.816170] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.863 [2024-12-16 12:58:42.816179] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.863 [2024-12-16 12:58:42.816185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.863 [2024-12-16 12:58:42.818835] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.863 [2024-12-16 12:58:42.828053] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.863 [2024-12-16 12:58:42.828467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.863 [2024-12-16 12:58:42.828483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420
00:37:16.863 [2024-12-16 12:58:42.828490] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set
00:37:16.863 [2024-12-16 12:58:42.828661] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor
00:37:16.863 [2024-12-16 12:58:42.828832] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.863 [2024-12-16 12:58:42.828840] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.863 [2024-12-16 12:58:42.828846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.863 [2024-12-16 12:58:42.831591] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.863 [2024-12-16 12:58:42.840969] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.863 [2024-12-16 12:58:42.841387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.863 [2024-12-16 12:58:42.841432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:16.863 [2024-12-16 12:58:42.841456] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:16.863 [2024-12-16 12:58:42.841972] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:16.863 [2024-12-16 12:58:42.842144] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.863 [2024-12-16 12:58:42.842153] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.863 [2024-12-16 12:58:42.842159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.863 [2024-12-16 12:58:42.844817] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:16.863 [2024-12-16 12:58:42.853932] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.863 [2024-12-16 12:58:42.854348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.863 [2024-12-16 12:58:42.854364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:16.863 [2024-12-16 12:58:42.854372] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:16.863 [2024-12-16 12:58:42.854543] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:16.863 [2024-12-16 12:58:42.854714] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.863 [2024-12-16 12:58:42.854722] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.863 [2024-12-16 12:58:42.854729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.863 [2024-12-16 12:58:42.857425] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.863 [2024-12-16 12:58:42.866723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.863 [2024-12-16 12:58:42.867112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.863 [2024-12-16 12:58:42.867132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:16.863 [2024-12-16 12:58:42.867139] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:16.863 [2024-12-16 12:58:42.867306] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:16.863 [2024-12-16 12:58:42.867472] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.863 [2024-12-16 12:58:42.867480] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.863 [2024-12-16 12:58:42.867486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.863 [2024-12-16 12:58:42.870125] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:16.863 [2024-12-16 12:58:42.879570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.863 [2024-12-16 12:58:42.879954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.863 [2024-12-16 12:58:42.879969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:16.863 [2024-12-16 12:58:42.879976] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:16.863 [2024-12-16 12:58:42.880141] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:16.863 [2024-12-16 12:58:42.880324] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.863 [2024-12-16 12:58:42.880332] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.863 [2024-12-16 12:58:42.880338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.863 [2024-12-16 12:58:42.882960] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.863 [2024-12-16 12:58:42.892276] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.863 [2024-12-16 12:58:42.892694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.863 [2024-12-16 12:58:42.892736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:16.863 [2024-12-16 12:58:42.892759] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:16.863 [2024-12-16 12:58:42.893258] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:16.863 [2024-12-16 12:58:42.893426] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.863 [2024-12-16 12:58:42.893434] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.863 [2024-12-16 12:58:42.893440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.863 [2024-12-16 12:58:42.896035] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:16.863 [2024-12-16 12:58:42.905116] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.863 [2024-12-16 12:58:42.905545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.863 [2024-12-16 12:58:42.905591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:16.863 [2024-12-16 12:58:42.905623] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:16.863 [2024-12-16 12:58:42.906164] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:16.863 [2024-12-16 12:58:42.906332] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.863 [2024-12-16 12:58:42.906340] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.863 [2024-12-16 12:58:42.906347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.863 [2024-12-16 12:58:42.908950] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.863 [2024-12-16 12:58:42.917953] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.863 [2024-12-16 12:58:42.918367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.863 [2024-12-16 12:58:42.918384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:16.863 [2024-12-16 12:58:42.918391] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:16.863 [2024-12-16 12:58:42.918558] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:16.863 [2024-12-16 12:58:42.918724] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.863 [2024-12-16 12:58:42.918732] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.863 [2024-12-16 12:58:42.918739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.863 [2024-12-16 12:58:42.921410] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:17.125 [2024-12-16 12:58:42.930732] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.125 [2024-12-16 12:58:42.931163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.125 [2024-12-16 12:58:42.931179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.125 [2024-12-16 12:58:42.931187] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.125 [2024-12-16 12:58:42.931353] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.125 [2024-12-16 12:58:42.931521] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.125 [2024-12-16 12:58:42.931529] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.125 [2024-12-16 12:58:42.931536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.125 [2024-12-16 12:58:42.934165] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.125 [2024-12-16 12:58:42.943462] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.125 [2024-12-16 12:58:42.943771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.125 [2024-12-16 12:58:42.943786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.125 [2024-12-16 12:58:42.943793] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.125 [2024-12-16 12:58:42.943950] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.125 [2024-12-16 12:58:42.944108] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.125 [2024-12-16 12:58:42.944126] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.125 [2024-12-16 12:58:42.944132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.125 [2024-12-16 12:58:42.946745] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:17.125 [2024-12-16 12:58:42.956299] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.125 [2024-12-16 12:58:42.956681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.125 [2024-12-16 12:58:42.956696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.125 [2024-12-16 12:58:42.956703] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.125 [2024-12-16 12:58:42.956860] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.125 [2024-12-16 12:58:42.957018] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.125 [2024-12-16 12:58:42.957025] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.125 [2024-12-16 12:58:42.957031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.125 [2024-12-16 12:58:42.959649] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.125 [2024-12-16 12:58:42.969045] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.125 [2024-12-16 12:58:42.969463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.125 [2024-12-16 12:58:42.969479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.125 [2024-12-16 12:58:42.969486] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.125 [2024-12-16 12:58:42.969652] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.125 [2024-12-16 12:58:42.969818] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.125 [2024-12-16 12:58:42.969826] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.125 [2024-12-16 12:58:42.969832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.125 [2024-12-16 12:58:42.972550] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:17.125 [2024-12-16 12:58:42.981824] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.125 [2024-12-16 12:58:42.982245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.125 [2024-12-16 12:58:42.982262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.125 [2024-12-16 12:58:42.982269] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.125 [2024-12-16 12:58:42.982436] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.125 [2024-12-16 12:58:42.982602] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.125 [2024-12-16 12:58:42.982610] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.125 [2024-12-16 12:58:42.982616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.125 [2024-12-16 12:58:42.985281] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.125 [2024-12-16 12:58:42.994693] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.125 [2024-12-16 12:58:42.995142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.125 [2024-12-16 12:58:42.995187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.125 [2024-12-16 12:58:42.995210] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.125 [2024-12-16 12:58:42.995790] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.125 [2024-12-16 12:58:42.996385] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.125 [2024-12-16 12:58:42.996412] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.125 [2024-12-16 12:58:42.996434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.125 [2024-12-16 12:58:42.999052] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:17.125 [2024-12-16 12:58:43.007471] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.125 [2024-12-16 12:58:43.007857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.125 [2024-12-16 12:58:43.007873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.125 [2024-12-16 12:58:43.007880] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.125 [2024-12-16 12:58:43.008038] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.125 [2024-12-16 12:58:43.008221] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.125 [2024-12-16 12:58:43.008230] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.125 [2024-12-16 12:58:43.008236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.125 [2024-12-16 12:58:43.010837] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.125 [2024-12-16 12:58:43.020388] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.125 [2024-12-16 12:58:43.020799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.125 [2024-12-16 12:58:43.020814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.125 [2024-12-16 12:58:43.020821] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.125 [2024-12-16 12:58:43.020987] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.125 [2024-12-16 12:58:43.021160] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.125 [2024-12-16 12:58:43.021169] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.125 [2024-12-16 12:58:43.021176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.125 [2024-12-16 12:58:43.023776] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:17.125 [2024-12-16 12:58:43.033138] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.125 [2024-12-16 12:58:43.033529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.125 [2024-12-16 12:58:43.033573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.125 [2024-12-16 12:58:43.033603] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.125 [2024-12-16 12:58:43.034196] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.125 [2024-12-16 12:58:43.034647] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.125 [2024-12-16 12:58:43.034654] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.125 [2024-12-16 12:58:43.034660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.125 [2024-12-16 12:58:43.037262] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.125 [2024-12-16 12:58:43.045978] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.125 [2024-12-16 12:58:43.046411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.125 [2024-12-16 12:58:43.046427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.125 [2024-12-16 12:58:43.046434] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.125 [2024-12-16 12:58:43.046601] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.125 [2024-12-16 12:58:43.046767] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.126 [2024-12-16 12:58:43.046776] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.126 [2024-12-16 12:58:43.046782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.126 [2024-12-16 12:58:43.049393] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:17.126 [2024-12-16 12:58:43.058726] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.126 [2024-12-16 12:58:43.059162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.126 [2024-12-16 12:58:43.059178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.126 [2024-12-16 12:58:43.059184] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.126 [2024-12-16 12:58:43.059357] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.126 [2024-12-16 12:58:43.059515] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.126 [2024-12-16 12:58:43.059524] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.126 [2024-12-16 12:58:43.059530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.126 [2024-12-16 12:58:43.062176] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.126 [2024-12-16 12:58:43.071607] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.126 [2024-12-16 12:58:43.072002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.126 [2024-12-16 12:58:43.072047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.126 [2024-12-16 12:58:43.072070] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.126 [2024-12-16 12:58:43.072636] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.126 [2024-12-16 12:58:43.073025] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.126 [2024-12-16 12:58:43.073049] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.126 [2024-12-16 12:58:43.073065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.126 [2024-12-16 12:58:43.079325] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:17.126 [2024-12-16 12:58:43.086651] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.126 [2024-12-16 12:58:43.087152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.126 [2024-12-16 12:58:43.087174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.126 [2024-12-16 12:58:43.087184] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.126 [2024-12-16 12:58:43.087437] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.126 [2024-12-16 12:58:43.087693] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.126 [2024-12-16 12:58:43.087705] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.126 [2024-12-16 12:58:43.087714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.126 [2024-12-16 12:58:43.091884] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.126 [2024-12-16 12:58:43.099740] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.126 [2024-12-16 12:58:43.100091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.126 [2024-12-16 12:58:43.100107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.126 [2024-12-16 12:58:43.100120] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.126 [2024-12-16 12:58:43.100292] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.126 [2024-12-16 12:58:43.100470] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.126 [2024-12-16 12:58:43.100479] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.126 [2024-12-16 12:58:43.100485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.126 [2024-12-16 12:58:43.103233] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:17.126 [2024-12-16 12:58:43.112615] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.126 [2024-12-16 12:58:43.113049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.126 [2024-12-16 12:58:43.113066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.126 [2024-12-16 12:58:43.113073] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.126 [2024-12-16 12:58:43.113247] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.126 [2024-12-16 12:58:43.113414] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.126 [2024-12-16 12:58:43.113422] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.126 [2024-12-16 12:58:43.113428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.126 [2024-12-16 12:58:43.116023] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.126 [2024-12-16 12:58:43.125364] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.126 [2024-12-16 12:58:43.125830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.126 [2024-12-16 12:58:43.125873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.126 [2024-12-16 12:58:43.125896] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.126 [2024-12-16 12:58:43.126489] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.126 [2024-12-16 12:58:43.126905] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.126 [2024-12-16 12:58:43.126913] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.126 [2024-12-16 12:58:43.126919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.126 [2024-12-16 12:58:43.129476] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:17.126 [2024-12-16 12:58:43.138214] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.126 [2024-12-16 12:58:43.138505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.126 [2024-12-16 12:58:43.138521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.126 [2024-12-16 12:58:43.138528] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.126 [2024-12-16 12:58:43.138694] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.126 [2024-12-16 12:58:43.138860] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.126 [2024-12-16 12:58:43.138868] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.126 [2024-12-16 12:58:43.138874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.126 [2024-12-16 12:58:43.141478] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.126 [2024-12-16 12:58:43.151044] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.126 [2024-12-16 12:58:43.151414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.126 [2024-12-16 12:58:43.151431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.126 [2024-12-16 12:58:43.151438] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.126 [2024-12-16 12:58:43.151604] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.126 [2024-12-16 12:58:43.151770] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.126 [2024-12-16 12:58:43.151778] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.126 [2024-12-16 12:58:43.151785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.126 [2024-12-16 12:58:43.154395] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:17.126 [2024-12-16 12:58:43.163932] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.126 [2024-12-16 12:58:43.164268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.126 [2024-12-16 12:58:43.164284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.126 [2024-12-16 12:58:43.164291] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.126 [2024-12-16 12:58:43.164464] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.126 [2024-12-16 12:58:43.164630] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.126 [2024-12-16 12:58:43.164638] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.126 [2024-12-16 12:58:43.164645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.126 [2024-12-16 12:58:43.167250] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.126 [2024-12-16 12:58:43.176685] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.126 [2024-12-16 12:58:43.177102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.126 [2024-12-16 12:58:43.177123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.126 [2024-12-16 12:58:43.177131] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.126 [2024-12-16 12:58:43.177298] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.126 [2024-12-16 12:58:43.177464] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.126 [2024-12-16 12:58:43.177472] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.126 [2024-12-16 12:58:43.177478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.126 [2024-12-16 12:58:43.180076] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:17.387 [2024-12-16 12:58:43.189712] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.387 [2024-12-16 12:58:43.190175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.387 [2024-12-16 12:58:43.190221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.387 [2024-12-16 12:58:43.190244] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.387 [2024-12-16 12:58:43.190769] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.387 [2024-12-16 12:58:43.190937] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.387 [2024-12-16 12:58:43.190945] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.387 [2024-12-16 12:58:43.190951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.387 [2024-12-16 12:58:43.193606] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.387 [2024-12-16 12:58:43.202457] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.387 [2024-12-16 12:58:43.202893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.387 [2024-12-16 12:58:43.202937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.387 [2024-12-16 12:58:43.202960] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.387 [2024-12-16 12:58:43.203553] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.387 [2024-12-16 12:58:43.204014] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.387 [2024-12-16 12:58:43.204023] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.387 [2024-12-16 12:58:43.204032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.387 [2024-12-16 12:58:43.206646] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:17.387 [2024-12-16 12:58:43.215237] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.387 [2024-12-16 12:58:43.215588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.387 [2024-12-16 12:58:43.215604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.387 [2024-12-16 12:58:43.215611] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.387 [2024-12-16 12:58:43.215778] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.387 [2024-12-16 12:58:43.215945] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.387 [2024-12-16 12:58:43.215952] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.387 [2024-12-16 12:58:43.215958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.387 [2024-12-16 12:58:43.218569] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.387 [2024-12-16 12:58:43.228085] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.387 [2024-12-16 12:58:43.228380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.387 [2024-12-16 12:58:43.228396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.387 [2024-12-16 12:58:43.228403] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.387 [2024-12-16 12:58:43.228571] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.387 [2024-12-16 12:58:43.228728] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.388 [2024-12-16 12:58:43.228736] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.388 [2024-12-16 12:58:43.228742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.388 [2024-12-16 12:58:43.231346] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:17.388 [2024-12-16 12:58:43.240930] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.388 [2024-12-16 12:58:43.241276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.388 [2024-12-16 12:58:43.241293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.388 [2024-12-16 12:58:43.241300] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.388 [2024-12-16 12:58:43.241466] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.388 [2024-12-16 12:58:43.241633] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.388 [2024-12-16 12:58:43.241642] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.388 [2024-12-16 12:58:43.241648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.388 [2024-12-16 12:58:43.244250] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.388 [2024-12-16 12:58:43.253722] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.388 [2024-12-16 12:58:43.254061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.388 [2024-12-16 12:58:43.254080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.388 [2024-12-16 12:58:43.254087] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.388 [2024-12-16 12:58:43.254261] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.388 [2024-12-16 12:58:43.254428] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.388 [2024-12-16 12:58:43.254436] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.388 [2024-12-16 12:58:43.254442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.388 [2024-12-16 12:58:43.257043] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:17.388 [2024-12-16 12:58:43.266514] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.388 [2024-12-16 12:58:43.266951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.388 [2024-12-16 12:58:43.266967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.388 [2024-12-16 12:58:43.266974] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.388 [2024-12-16 12:58:43.267145] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.388 [2024-12-16 12:58:43.267313] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.388 [2024-12-16 12:58:43.267321] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.388 [2024-12-16 12:58:43.267327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.388 [2024-12-16 12:58:43.269968] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.388 [2024-12-16 12:58:43.279367] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.388 [2024-12-16 12:58:43.279836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.388 [2024-12-16 12:58:43.279851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.388 [2024-12-16 12:58:43.279858] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.388 [2024-12-16 12:58:43.280025] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.388 [2024-12-16 12:58:43.280196] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.388 [2024-12-16 12:58:43.280205] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.388 [2024-12-16 12:58:43.280212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.388 [2024-12-16 12:58:43.282874] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:17.388 [2024-12-16 12:58:43.292309] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.388 [2024-12-16 12:58:43.292639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.388 [2024-12-16 12:58:43.292655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.388 [2024-12-16 12:58:43.292662] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.388 [2024-12-16 12:58:43.292828] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.388 [2024-12-16 12:58:43.293001] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.388 [2024-12-16 12:58:43.293009] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.388 [2024-12-16 12:58:43.293015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.388 [2024-12-16 12:58:43.295635] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.388 [2024-12-16 12:58:43.305143] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.388 [2024-12-16 12:58:43.305508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.388 [2024-12-16 12:58:43.305523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.388 [2024-12-16 12:58:43.305531] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.388 [2024-12-16 12:58:43.305698] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.388 [2024-12-16 12:58:43.305864] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.388 [2024-12-16 12:58:43.305872] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.388 [2024-12-16 12:58:43.305878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.388 [2024-12-16 12:58:43.308491] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:17.388 [2024-12-16 12:58:43.317974] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.388 [2024-12-16 12:58:43.318322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.388 [2024-12-16 12:58:43.318337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.388 [2024-12-16 12:58:43.318344] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.388 [2024-12-16 12:58:43.318511] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.388 [2024-12-16 12:58:43.318678] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.388 [2024-12-16 12:58:43.318686] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.388 [2024-12-16 12:58:43.318692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.388 [2024-12-16 12:58:43.321345] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.388 [2024-12-16 12:58:43.330830] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.388 [2024-12-16 12:58:43.331210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.388 [2024-12-16 12:58:43.331227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.388 [2024-12-16 12:58:43.331235] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.388 [2024-12-16 12:58:43.331406] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.388 [2024-12-16 12:58:43.331583] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.388 [2024-12-16 12:58:43.331591] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.388 [2024-12-16 12:58:43.331598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.388 [2024-12-16 12:58:43.334348] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:17.388 [2024-12-16 12:58:43.343856] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.388 [2024-12-16 12:58:43.344219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.388 [2024-12-16 12:58:43.344236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.388 [2024-12-16 12:58:43.344244] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.388 [2024-12-16 12:58:43.344423] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.388 [2024-12-16 12:58:43.344590] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.388 [2024-12-16 12:58:43.344598] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.388 [2024-12-16 12:58:43.344604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.388 [2024-12-16 12:58:43.347266] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.388 [2024-12-16 12:58:43.356913] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.388 [2024-12-16 12:58:43.357246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.388 [2024-12-16 12:58:43.357262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.388 [2024-12-16 12:58:43.357270] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.388 [2024-12-16 12:58:43.357436] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.388 [2024-12-16 12:58:43.357603] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.388 [2024-12-16 12:58:43.357611] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.389 [2024-12-16 12:58:43.357617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.389 [2024-12-16 12:58:43.360293] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:17.389 [2024-12-16 12:58:43.369632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.389 [2024-12-16 12:58:43.370095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.389 [2024-12-16 12:58:43.370154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.389 [2024-12-16 12:58:43.370178] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.389 [2024-12-16 12:58:43.370756] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.389 [2024-12-16 12:58:43.371306] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.389 [2024-12-16 12:58:43.371315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.389 [2024-12-16 12:58:43.371321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.389 [2024-12-16 12:58:43.373918] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.389 [2024-12-16 12:58:43.382520] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.389 [2024-12-16 12:58:43.382867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.389 [2024-12-16 12:58:43.382883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.389 [2024-12-16 12:58:43.382893] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.389 [2024-12-16 12:58:43.383060] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.389 [2024-12-16 12:58:43.383234] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.389 [2024-12-16 12:58:43.383242] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.389 [2024-12-16 12:58:43.383248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.389 [2024-12-16 12:58:43.385850] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:17.389 [2024-12-16 12:58:43.395421] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.389 [2024-12-16 12:58:43.395874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.389 [2024-12-16 12:58:43.395890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.389 [2024-12-16 12:58:43.395897] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.389 [2024-12-16 12:58:43.396064] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.389 [2024-12-16 12:58:43.396236] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.389 [2024-12-16 12:58:43.396245] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.389 [2024-12-16 12:58:43.396251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.389 [2024-12-16 12:58:43.398851] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.389 [2024-12-16 12:58:43.408196] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.389 [2024-12-16 12:58:43.408491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.389 [2024-12-16 12:58:43.408506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.389 [2024-12-16 12:58:43.408514] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.389 [2024-12-16 12:58:43.408680] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.389 [2024-12-16 12:58:43.408847] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.389 [2024-12-16 12:58:43.408856] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.389 [2024-12-16 12:58:43.408862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.389 [2024-12-16 12:58:43.411506] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:17.389 [2024-12-16 12:58:43.421210] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.389 [2024-12-16 12:58:43.421613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.389 [2024-12-16 12:58:43.421630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.389 [2024-12-16 12:58:43.421637] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.389 [2024-12-16 12:58:43.421809] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.389 [2024-12-16 12:58:43.421980] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.389 [2024-12-16 12:58:43.421992] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.389 [2024-12-16 12:58:43.421998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.389 [2024-12-16 12:58:43.424746] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.389 [2024-12-16 12:58:43.434308] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.389 [2024-12-16 12:58:43.434650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.389 [2024-12-16 12:58:43.434666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.389 [2024-12-16 12:58:43.434673] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.389 [2024-12-16 12:58:43.434845] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.389 [2024-12-16 12:58:43.435017] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.389 [2024-12-16 12:58:43.435025] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.389 [2024-12-16 12:58:43.435032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.389 [2024-12-16 12:58:43.437778] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:17.389 [2024-12-16 12:58:43.447227] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.389 [2024-12-16 12:58:43.447561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.389 [2024-12-16 12:58:43.447577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.389 [2024-12-16 12:58:43.447584] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.389 [2024-12-16 12:58:43.447756] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.389 [2024-12-16 12:58:43.447927] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.389 [2024-12-16 12:58:43.447935] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.389 [2024-12-16 12:58:43.447943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.389 [2024-12-16 12:58:43.450740] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.650 [2024-12-16 12:58:43.460102] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.650 [2024-12-16 12:58:43.460470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.650 [2024-12-16 12:58:43.460513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.650 [2024-12-16 12:58:43.460536] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.650 [2024-12-16 12:58:43.461130] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.650 [2024-12-16 12:58:43.461588] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.650 [2024-12-16 12:58:43.461597] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.650 [2024-12-16 12:58:43.461603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.650 [2024-12-16 12:58:43.464209] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:17.650 [2024-12-16 12:58:43.472988] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.650 [2024-12-16 12:58:43.473337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.650 [2024-12-16 12:58:43.473353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.650 [2024-12-16 12:58:43.473360] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.650 [2024-12-16 12:58:43.473527] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.650 [2024-12-16 12:58:43.473694] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.650 [2024-12-16 12:58:43.473702] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.650 [2024-12-16 12:58:43.473708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.650 [2024-12-16 12:58:43.476316] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.650 [2024-12-16 12:58:43.485771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.650 [2024-12-16 12:58:43.486200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.650 [2024-12-16 12:58:43.486216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.650 [2024-12-16 12:58:43.486223] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.651 [2024-12-16 12:58:43.486390] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.651 [2024-12-16 12:58:43.486556] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.651 [2024-12-16 12:58:43.486565] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.651 [2024-12-16 12:58:43.486571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.651 [2024-12-16 12:58:43.489190] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:17.651 [2024-12-16 12:58:43.498555] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.651 [2024-12-16 12:58:43.498987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.651 [2024-12-16 12:58:43.499002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.651 [2024-12-16 12:58:43.499009] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.651 [2024-12-16 12:58:43.499182] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.651 [2024-12-16 12:58:43.499349] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.651 [2024-12-16 12:58:43.499357] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.651 [2024-12-16 12:58:43.499363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.651 [2024-12-16 12:58:43.501963] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.651 [2024-12-16 12:58:43.511297] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.651 [2024-12-16 12:58:43.511728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.651 [2024-12-16 12:58:43.511744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.651 [2024-12-16 12:58:43.511751] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.651 [2024-12-16 12:58:43.511921] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.651 [2024-12-16 12:58:43.512088] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.651 [2024-12-16 12:58:43.512096] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.651 [2024-12-16 12:58:43.512102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.651 5796.40 IOPS, 22.64 MiB/s [2024-12-16T11:58:43.718Z] [2024-12-16 12:58:43.515866] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:17.651 [2024-12-16 12:58:43.524245] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.651 [2024-12-16 12:58:43.524583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.651 [2024-12-16 12:58:43.524599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.651 [2024-12-16 12:58:43.524606] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.651 [2024-12-16 12:58:43.524773] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.651 [2024-12-16 12:58:43.524939] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.651 [2024-12-16 12:58:43.524947] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.651 [2024-12-16 12:58:43.524954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.651 [2024-12-16 12:58:43.527625] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.651 [2024-12-16 12:58:43.536988] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.651 [2024-12-16 12:58:43.537356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.651 [2024-12-16 12:58:43.537372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.651 [2024-12-16 12:58:43.537379] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.651 [2024-12-16 12:58:43.537546] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.651 [2024-12-16 12:58:43.537712] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.651 [2024-12-16 12:58:43.537720] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.651 [2024-12-16 12:58:43.537726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.651 [2024-12-16 12:58:43.540331] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:17.651 [2024-12-16 12:58:43.549892] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.651 [2024-12-16 12:58:43.550276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.651 [2024-12-16 12:58:43.550292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.651 [2024-12-16 12:58:43.550299] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.651 [2024-12-16 12:58:43.550466] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.651 [2024-12-16 12:58:43.550632] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.651 [2024-12-16 12:58:43.550642] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.651 [2024-12-16 12:58:43.550649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.651 [2024-12-16 12:58:43.553253] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.651 [2024-12-16 12:58:43.562706] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.651 [2024-12-16 12:58:43.563035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.651 [2024-12-16 12:58:43.563051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.651 [2024-12-16 12:58:43.563058] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.651 [2024-12-16 12:58:43.563232] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.651 [2024-12-16 12:58:43.563398] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.651 [2024-12-16 12:58:43.563406] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.651 [2024-12-16 12:58:43.563413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.651 [2024-12-16 12:58:43.566012] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:17.651 [2024-12-16 12:58:43.575554] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.651 [2024-12-16 12:58:43.575973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.651 [2024-12-16 12:58:43.576016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.651 [2024-12-16 12:58:43.576039] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.651 [2024-12-16 12:58:43.576569] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.651 [2024-12-16 12:58:43.576737] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.651 [2024-12-16 12:58:43.576745] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.651 [2024-12-16 12:58:43.576751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.651 [2024-12-16 12:58:43.579371] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.651 [2024-12-16 12:58:43.588387] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.651 [2024-12-16 12:58:43.588824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.651 [2024-12-16 12:58:43.588840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.651 [2024-12-16 12:58:43.588848] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.651 [2024-12-16 12:58:43.589015] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.651 [2024-12-16 12:58:43.589212] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.651 [2024-12-16 12:58:43.589221] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.651 [2024-12-16 12:58:43.589228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.651 [2024-12-16 12:58:43.591967] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:17.651 [2024-12-16 12:58:43.601278] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.651 [2024-12-16 12:58:43.601633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.651 [2024-12-16 12:58:43.601648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.651 [2024-12-16 12:58:43.601656] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.651 [2024-12-16 12:58:43.601827] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.651 [2024-12-16 12:58:43.601999] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.651 [2024-12-16 12:58:43.602007] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.651 [2024-12-16 12:58:43.602013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.651 [2024-12-16 12:58:43.604694] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.651 [2024-12-16 12:58:43.614290] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.651 [2024-12-16 12:58:43.614708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.651 [2024-12-16 12:58:43.614723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.651 [2024-12-16 12:58:43.614730] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.651 [2024-12-16 12:58:43.614896] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.651 [2024-12-16 12:58:43.615063] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.651 [2024-12-16 12:58:43.615072] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.652 [2024-12-16 12:58:43.615078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.652 [2024-12-16 12:58:43.617689] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:17.652 [2024-12-16 12:58:43.627096] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.652 [2024-12-16 12:58:43.627502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.652 [2024-12-16 12:58:43.627517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.652 [2024-12-16 12:58:43.627524] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.652 [2024-12-16 12:58:43.627681] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.652 [2024-12-16 12:58:43.627839] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.652 [2024-12-16 12:58:43.627847] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.652 [2024-12-16 12:58:43.627853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.652 [2024-12-16 12:58:43.630407] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.652 [2024-12-16 12:58:43.639823] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.652 [2024-12-16 12:58:43.640215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.652 [2024-12-16 12:58:43.640232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.652 [2024-12-16 12:58:43.640239] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.652 [2024-12-16 12:58:43.640409] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.652 [2024-12-16 12:58:43.640576] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.652 [2024-12-16 12:58:43.640584] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.652 [2024-12-16 12:58:43.640590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.652 [2024-12-16 12:58:43.643198] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:17.652 [2024-12-16 12:58:43.652636] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.652 [2024-12-16 12:58:43.652960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.652 [2024-12-16 12:58:43.652975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.652 [2024-12-16 12:58:43.652982] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.652 [2024-12-16 12:58:43.653161] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.652 [2024-12-16 12:58:43.653329] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.652 [2024-12-16 12:58:43.653337] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.652 [2024-12-16 12:58:43.653343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.652 [2024-12-16 12:58:43.655942] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.652 [2024-12-16 12:58:43.665399] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.652 [2024-12-16 12:58:43.665810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.652 [2024-12-16 12:58:43.665826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.652 [2024-12-16 12:58:43.665832] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.652 [2024-12-16 12:58:43.665990] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.652 [2024-12-16 12:58:43.666170] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.652 [2024-12-16 12:58:43.666178] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.652 [2024-12-16 12:58:43.666184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.652 [2024-12-16 12:58:43.668783] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:17.652 [2024-12-16 12:58:43.678162] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.652 [2024-12-16 12:58:43.678499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.652 [2024-12-16 12:58:43.678541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.652 [2024-12-16 12:58:43.678564] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.652 [2024-12-16 12:58:43.679084] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.652 [2024-12-16 12:58:43.679483] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.652 [2024-12-16 12:58:43.679502] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.652 [2024-12-16 12:58:43.679522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.652 [2024-12-16 12:58:43.685761] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.652 [2024-12-16 12:58:43.693192] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.652 [2024-12-16 12:58:43.693697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.652 [2024-12-16 12:58:43.693740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.652 [2024-12-16 12:58:43.693763] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.652 [2024-12-16 12:58:43.694226] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.652 [2024-12-16 12:58:43.694482] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.652 [2024-12-16 12:58:43.694493] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.652 [2024-12-16 12:58:43.694503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.652 [2024-12-16 12:58:43.698552] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:17.652 [2024-12-16 12:58:43.706167] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.652 [2024-12-16 12:58:43.706591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.652 [2024-12-16 12:58:43.706635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.652 [2024-12-16 12:58:43.706659] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.652 [2024-12-16 12:58:43.707161] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.652 [2024-12-16 12:58:43.707329] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.652 [2024-12-16 12:58:43.707337] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.652 [2024-12-16 12:58:43.707344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.652 [2024-12-16 12:58:43.710007] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.913 [2024-12-16 12:58:43.719041] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.913 [2024-12-16 12:58:43.719465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.913 [2024-12-16 12:58:43.719481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.913 [2024-12-16 12:58:43.719488] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.913 [2024-12-16 12:58:43.719655] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.913 [2024-12-16 12:58:43.719821] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.913 [2024-12-16 12:58:43.719829] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.913 [2024-12-16 12:58:43.719836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.914 [2024-12-16 12:58:43.722521] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:17.914 [2024-12-16 12:58:43.731838] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.914 [2024-12-16 12:58:43.732290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.914 [2024-12-16 12:58:43.732342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.914 [2024-12-16 12:58:43.732366] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.914 [2024-12-16 12:58:43.732945] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.914 [2024-12-16 12:58:43.733386] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.914 [2024-12-16 12:58:43.733395] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.914 [2024-12-16 12:58:43.733401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.914 [2024-12-16 12:58:43.736000] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.914 [2024-12-16 12:58:43.744573] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.914 [2024-12-16 12:58:43.744992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.914 [2024-12-16 12:58:43.745034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.914 [2024-12-16 12:58:43.745057] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.914 [2024-12-16 12:58:43.745473] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.914 [2024-12-16 12:58:43.745641] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.914 [2024-12-16 12:58:43.745649] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.914 [2024-12-16 12:58:43.745655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.914 [2024-12-16 12:58:43.748277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:17.914 [2024-12-16 12:58:43.757310] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.914 [2024-12-16 12:58:43.757735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.914 [2024-12-16 12:58:43.757779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.914 [2024-12-16 12:58:43.757802] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.914 [2024-12-16 12:58:43.758287] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.914 [2024-12-16 12:58:43.758454] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.914 [2024-12-16 12:58:43.758462] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.914 [2024-12-16 12:58:43.758468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.914 [2024-12-16 12:58:43.761065] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.914 [2024-12-16 12:58:43.770075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.914 [2024-12-16 12:58:43.770500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.914 [2024-12-16 12:58:43.770544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.914 [2024-12-16 12:58:43.770568] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.914 [2024-12-16 12:58:43.771080] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.914 [2024-12-16 12:58:43.771254] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.914 [2024-12-16 12:58:43.771263] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.914 [2024-12-16 12:58:43.771269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.914 [2024-12-16 12:58:43.773870] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:17.914 [2024-12-16 12:58:43.782942] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.914 [2024-12-16 12:58:43.783410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.914 [2024-12-16 12:58:43.783454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.914 [2024-12-16 12:58:43.783477] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.914 [2024-12-16 12:58:43.784009] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.914 [2024-12-16 12:58:43.784182] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.914 [2024-12-16 12:58:43.784191] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.914 [2024-12-16 12:58:43.784197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.914 [2024-12-16 12:58:43.786797] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.914 [2024-12-16 12:58:43.795767] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.914 [2024-12-16 12:58:43.796186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.914 [2024-12-16 12:58:43.796203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.914 [2024-12-16 12:58:43.796210] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.914 [2024-12-16 12:58:43.796377] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.914 [2024-12-16 12:58:43.796543] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.914 [2024-12-16 12:58:43.796551] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.914 [2024-12-16 12:58:43.796557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.914 [2024-12-16 12:58:43.799165] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:17.914 [2024-12-16 12:58:43.808575] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.914 [2024-12-16 12:58:43.808918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.914 [2024-12-16 12:58:43.808961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.914 [2024-12-16 12:58:43.808984] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.914 [2024-12-16 12:58:43.809442] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.914 [2024-12-16 12:58:43.809610] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.914 [2024-12-16 12:58:43.809618] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.914 [2024-12-16 12:58:43.809624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.914 [2024-12-16 12:58:43.812293] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.914 [2024-12-16 12:58:43.821554] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.914 [2024-12-16 12:58:43.821989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.914 [2024-12-16 12:58:43.822005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.914 [2024-12-16 12:58:43.822012] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.914 [2024-12-16 12:58:43.822189] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.914 [2024-12-16 12:58:43.822367] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.914 [2024-12-16 12:58:43.822375] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.914 [2024-12-16 12:58:43.822381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.914 [2024-12-16 12:58:43.825033] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:17.914 [2024-12-16 12:58:43.834382] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.914 [2024-12-16 12:58:43.834803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.914 [2024-12-16 12:58:43.834819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.914 [2024-12-16 12:58:43.834825] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.914 [2024-12-16 12:58:43.834992] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.914 [2024-12-16 12:58:43.835166] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.914 [2024-12-16 12:58:43.835176] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.914 [2024-12-16 12:58:43.835182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.914 [2024-12-16 12:58:43.837781] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.914 [2024-12-16 12:58:43.847108] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.914 [2024-12-16 12:58:43.847541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.914 [2024-12-16 12:58:43.847556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.914 [2024-12-16 12:58:43.847563] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.914 [2024-12-16 12:58:43.847730] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.914 [2024-12-16 12:58:43.847896] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.914 [2024-12-16 12:58:43.847904] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.914 [2024-12-16 12:58:43.847910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.914 [2024-12-16 12:58:43.850665] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:17.914 [2024-12-16 12:58:43.860171] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.915 [2024-12-16 12:58:43.860594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.915 [2024-12-16 12:58:43.860610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.915 [2024-12-16 12:58:43.860621] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.915 [2024-12-16 12:58:43.860793] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.915 [2024-12-16 12:58:43.860964] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.915 [2024-12-16 12:58:43.860973] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.915 [2024-12-16 12:58:43.860979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.915 [2024-12-16 12:58:43.863662] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.915 [2024-12-16 12:58:43.873049] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.915 [2024-12-16 12:58:43.873482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.915 [2024-12-16 12:58:43.873499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.915 [2024-12-16 12:58:43.873506] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.915 [2024-12-16 12:58:43.873672] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.915 [2024-12-16 12:58:43.873838] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.915 [2024-12-16 12:58:43.873846] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.915 [2024-12-16 12:58:43.873853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.915 [2024-12-16 12:58:43.876514] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:17.915 [2024-12-16 12:58:43.885898] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.915 [2024-12-16 12:58:43.886312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.915 [2024-12-16 12:58:43.886327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.915 [2024-12-16 12:58:43.886334] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.915 [2024-12-16 12:58:43.886492] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.915 [2024-12-16 12:58:43.886650] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.915 [2024-12-16 12:58:43.886658] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.915 [2024-12-16 12:58:43.886663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.915 [2024-12-16 12:58:43.889259] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.915 [2024-12-16 12:58:43.898717] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.915 [2024-12-16 12:58:43.899060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.915 [2024-12-16 12:58:43.899104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.915 [2024-12-16 12:58:43.899141] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.915 [2024-12-16 12:58:43.899606] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.915 [2024-12-16 12:58:43.899773] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.915 [2024-12-16 12:58:43.899784] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.915 [2024-12-16 12:58:43.899790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.915 [2024-12-16 12:58:43.902395] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:17.915 [2024-12-16 12:58:43.911473] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.915 [2024-12-16 12:58:43.911889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.915 [2024-12-16 12:58:43.911904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.915 [2024-12-16 12:58:43.911911] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.915 [2024-12-16 12:58:43.912068] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.915 [2024-12-16 12:58:43.912252] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.915 [2024-12-16 12:58:43.912261] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.915 [2024-12-16 12:58:43.912267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.915 [2024-12-16 12:58:43.914868] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.915 [2024-12-16 12:58:43.924281] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.915 [2024-12-16 12:58:43.924705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.915 [2024-12-16 12:58:43.924748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.915 [2024-12-16 12:58:43.924772] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.915 [2024-12-16 12:58:43.925364] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.915 [2024-12-16 12:58:43.925770] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.915 [2024-12-16 12:58:43.925777] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.915 [2024-12-16 12:58:43.925783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.915 [2024-12-16 12:58:43.928304] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:17.915 [2024-12-16 12:58:43.937092] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.915 [2024-12-16 12:58:43.937508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.915 [2024-12-16 12:58:43.937524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.915 [2024-12-16 12:58:43.937531] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.915 [2024-12-16 12:58:43.937688] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.915 [2024-12-16 12:58:43.937846] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.915 [2024-12-16 12:58:43.937854] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.915 [2024-12-16 12:58:43.937860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.915 [2024-12-16 12:58:43.940463] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.915 [2024-12-16 12:58:43.949917] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.915 [2024-12-16 12:58:43.950331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.915 [2024-12-16 12:58:43.950346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.915 [2024-12-16 12:58:43.950353] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.915 [2024-12-16 12:58:43.950511] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.915 [2024-12-16 12:58:43.950668] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.915 [2024-12-16 12:58:43.950676] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.915 [2024-12-16 12:58:43.950682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.915 [2024-12-16 12:58:43.953276] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:17.915 [2024-12-16 12:58:43.962635] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.915 [2024-12-16 12:58:43.962976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.915 [2024-12-16 12:58:43.962991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.915 [2024-12-16 12:58:43.962998] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.915 [2024-12-16 12:58:43.963177] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.915 [2024-12-16 12:58:43.963345] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.915 [2024-12-16 12:58:43.963353] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.915 [2024-12-16 12:58:43.963359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.915 [2024-12-16 12:58:43.965955] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.915 [2024-12-16 12:58:43.975609] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.915 [2024-12-16 12:58:43.976030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.915 [2024-12-16 12:58:43.976046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:17.915 [2024-12-16 12:58:43.976054] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:17.915 [2024-12-16 12:58:43.976226] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:17.915 [2024-12-16 12:58:43.976393] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.915 [2024-12-16 12:58:43.976401] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.915 [2024-12-16 12:58:43.976407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:18.175 [2024-12-16 12:58:43.979107] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:18.175 [2024-12-16 12:58:43.988336] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:18.175 [2024-12-16 12:58:43.988752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.175 [2024-12-16 12:58:43.988767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:18.175 [2024-12-16 12:58:43.988774] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:18.175 [2024-12-16 12:58:43.988935] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:18.175 [2024-12-16 12:58:43.989093] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:18.175 [2024-12-16 12:58:43.989101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:18.175 [2024-12-16 12:58:43.989107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:18.175 [2024-12-16 12:58:43.991726] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:18.175 [2024-12-16 12:58:44.001191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:18.175 [2024-12-16 12:58:44.001613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.175 [2024-12-16 12:58:44.001629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:18.175 [2024-12-16 12:58:44.001636] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:18.175 [2024-12-16 12:58:44.001794] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:18.175 [2024-12-16 12:58:44.001951] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:18.175 [2024-12-16 12:58:44.001959] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:18.175 [2024-12-16 12:58:44.001965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:18.175 [2024-12-16 12:58:44.004636] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:18.175 [2024-12-16 12:58:44.013960] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:18.175 [2024-12-16 12:58:44.014314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.175 [2024-12-16 12:58:44.014330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:18.175 [2024-12-16 12:58:44.014338] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:18.175 [2024-12-16 12:58:44.014504] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:18.176 [2024-12-16 12:58:44.014672] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:18.176 [2024-12-16 12:58:44.014680] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:18.176 [2024-12-16 12:58:44.014686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:18.176 [2024-12-16 12:58:44.017292] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:18.176 [2024-12-16 12:58:44.026796] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:18.176 [2024-12-16 12:58:44.027207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.176 [2024-12-16 12:58:44.027223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:18.176 [2024-12-16 12:58:44.027230] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:18.176 [2024-12-16 12:58:44.027388] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:18.176 [2024-12-16 12:58:44.027546] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:18.176 [2024-12-16 12:58:44.027553] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:18.176 [2024-12-16 12:58:44.027562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:18.176 [2024-12-16 12:58:44.030153] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:18.176 [2024-12-16 12:58:44.039598] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:18.176 [2024-12-16 12:58:44.039990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.176 [2024-12-16 12:58:44.040036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:18.176 [2024-12-16 12:58:44.040060] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:18.176 [2024-12-16 12:58:44.040653] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:18.176 [2024-12-16 12:58:44.040869] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:18.176 [2024-12-16 12:58:44.040877] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:18.176 [2024-12-16 12:58:44.040884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:18.176 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 578692 Killed "${NVMF_APP[@]}" "$@" 00:37:18.176 [2024-12-16 12:58:44.043526] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
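The bdevperf.sh: line 35: 578692 Killed message above is the shell reaping the nvmf_tgt instance the test tore down on purpose, which is what set off the reconnect storm; tgt_init, traced next, brings up a replacement target. A hedged sketch of that kill-and-restart step, with start_target and NVMF_APP_PID as stand-ins for the harness's own bookkeeping, not real test-suite names:

    # Kill-and-restart pattern behind the "Killed" message. start_target is a
    # placeholder (a background sleep) standing in for relaunching nvmf_tgt.
    start_target() {
        sleep 600 &
        NVMF_APP_PID=$!
    }
    restart_target() {
        kill -9 "$NVMF_APP_PID" 2>/dev/null
        wait "$NVMF_APP_PID" 2>/dev/null   # old pid reaped; the shell reports "Killed"
        start_target                       # fresh target for the host to reconnect to
    }
    start_target
    restart_target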
00:37:18.176 12:58:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:37:18.176 12:58:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:37:18.176 12:58:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:37:18.176 12:58:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:18.176 12:58:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:18.176 12:58:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # nvmfpid=579885 00:37:18.176 12:58:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # waitforlisten 579885 00:37:18.176 12:58:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:37:18.176 [2024-12-16 12:58:44.052656] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:18.176 12:58:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 579885 ']' 00:37:18.176 [2024-12-16 12:58:44.053081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.176 [2024-12-16 12:58:44.053097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:18.176 [2024-12-16 12:58:44.053105] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:18.176 12:58:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:18.176 [2024-12-16 12:58:44.053279] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:18.176 [2024-12-16 12:58:44.053451] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:18.176 [2024-12-16 12:58:44.053460] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:18.176 [2024-12-16 12:58:44.053468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:18.176 12:58:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:18.176 12:58:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:18.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:18.176 12:58:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:18.176 12:58:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:18.176 [2024-12-16 12:58:44.056212] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
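nvmfappstart here launches nvmf_tgt inside the cvl_0_0_ns_spdk network namespace, records nvmfpid=579885, and waitforlisten blocks until the new process answers on /var/tmp/spdk.sock (max_retries=100 in the trace). A minimal sketch of that wait loop, assuming the stock scripts/rpc.py helper from the SPDK tree:

    # Poll until the freshly started target (pid $1) is alive and serving RPCs.
    # rpc_get_methods is a core SPDK RPC that works before any configuration;
    # the retry bound mirrors max_retries=100 in the trace above.
    waitforlisten_sketch() {
        local pid=$1 rpc_sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
            ./scripts/rpc.py -s "$rpc_sock" rpc_get_methods >/dev/null 2>&1 && return 0
            sleep 0.1
        done
        return 1   # never started listening
    }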
00:37:18.176 [2024-12-16 12:58:44.065755] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:18.176 [2024-12-16 12:58:44.066188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.176 [2024-12-16 12:58:44.066204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:18.176 [2024-12-16 12:58:44.066211] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:18.176 [2024-12-16 12:58:44.066382] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:18.176 [2024-12-16 12:58:44.066554] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:18.176 [2024-12-16 12:58:44.066563] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:18.176 [2024-12-16 12:58:44.066569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:18.176 [2024-12-16 12:58:44.069312] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:18.176 [2024-12-16 12:58:44.078724] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:18.176 [2024-12-16 12:58:44.079156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.176 [2024-12-16 12:58:44.079174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:18.176 [2024-12-16 12:58:44.079181] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:18.176 [2024-12-16 12:58:44.079357] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:18.176 [2024-12-16 12:58:44.079531] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:18.176 [2024-12-16 12:58:44.079539] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:18.176 [2024-12-16 12:58:44.079546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:18.176 [2024-12-16 12:58:44.082296] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:18.176 [2024-12-16 12:58:44.091695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:18.176 [2024-12-16 12:58:44.092124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.176 [2024-12-16 12:58:44.092142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:18.176 [2024-12-16 12:58:44.092149] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:18.176 [2024-12-16 12:58:44.092322] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:18.176 [2024-12-16 12:58:44.092494] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:18.176 [2024-12-16 12:58:44.092502] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:18.176 [2024-12-16 12:58:44.092509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:18.176 [2024-12-16 12:58:44.095262] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:18.176 [2024-12-16 12:58:44.098023] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:37:18.176 [2024-12-16 12:58:44.098064] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:18.176 [2024-12-16 12:58:44.104795] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:18.176 [2024-12-16 12:58:44.105225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.176 [2024-12-16 12:58:44.105242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:18.176 [2024-12-16 12:58:44.105250] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:18.176 [2024-12-16 12:58:44.105422] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:18.176 [2024-12-16 12:58:44.105594] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:18.176 [2024-12-16 12:58:44.105602] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:18.176 [2024-12-16 12:58:44.105609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:18.176 [2024-12-16 12:58:44.108367] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:18.176 [2024-12-16 12:58:44.117848] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:18.176 [2024-12-16 12:58:44.118258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.176 [2024-12-16 12:58:44.118276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:18.176 [2024-12-16 12:58:44.118283] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:18.176 [2024-12-16 12:58:44.118455] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:18.176 [2024-12-16 12:58:44.118627] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:18.176 [2024-12-16 12:58:44.118635] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:18.176 [2024-12-16 12:58:44.118642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:18.176 [2024-12-16 12:58:44.121388] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:18.176 [2024-12-16 12:58:44.130919] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:18.176 [2024-12-16 12:58:44.131279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.177 [2024-12-16 12:58:44.131297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:18.177 [2024-12-16 12:58:44.131304] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:18.177 [2024-12-16 12:58:44.131476] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:18.177 [2024-12-16 12:58:44.131647] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:18.177 [2024-12-16 12:58:44.131656] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:18.177 [2024-12-16 12:58:44.131662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:18.177 [2024-12-16 12:58:44.134376] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:18.177 [2024-12-16 12:58:44.143871] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:18.177 [2024-12-16 12:58:44.144319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.177 [2024-12-16 12:58:44.144336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:18.177 [2024-12-16 12:58:44.144344] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:18.177 [2024-12-16 12:58:44.144516] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:18.177 [2024-12-16 12:58:44.144687] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:18.177 [2024-12-16 12:58:44.144695] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:18.177 [2024-12-16 12:58:44.144702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:18.177 [2024-12-16 12:58:44.147416] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:18.177 [2024-12-16 12:58:44.153949] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:18.177 [2024-12-16 12:58:44.156821] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:18.177 [2024-12-16 12:58:44.157269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.177 [2024-12-16 12:58:44.157286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:18.177 [2024-12-16 12:58:44.157294] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:18.177 [2024-12-16 12:58:44.157466] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:18.177 [2024-12-16 12:58:44.157638] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:18.177 [2024-12-16 12:58:44.157646] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:18.177 [2024-12-16 12:58:44.157653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:18.177 [2024-12-16 12:58:44.160372] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:18.177 [2024-12-16 12:58:44.169711] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:18.177 [2024-12-16 12:58:44.170072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.177 [2024-12-16 12:58:44.170088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:18.177 [2024-12-16 12:58:44.170096] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:18.177 [2024-12-16 12:58:44.170273] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:18.177 [2024-12-16 12:58:44.170445] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:18.177 [2024-12-16 12:58:44.170454] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:18.177 [2024-12-16 12:58:44.170461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:18.177 [2024-12-16 12:58:44.173232] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:18.177 [2024-12-16 12:58:44.182647] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:18.177 [2024-12-16 12:58:44.183141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.177 [2024-12-16 12:58:44.183165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:18.177 [2024-12-16 12:58:44.183174] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:18.177 [2024-12-16 12:58:44.183354] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:18.177 [2024-12-16 12:58:44.183529] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:18.177 [2024-12-16 12:58:44.183538] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:18.177 [2024-12-16 12:58:44.183545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:18.177 [2024-12-16 12:58:44.186259] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:18.177 [2024-12-16 12:58:44.194257] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:18.177 [2024-12-16 12:58:44.194283] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:18.177 [2024-12-16 12:58:44.194290] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:18.177 [2024-12-16 12:58:44.194296] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:18.177 [2024-12-16 12:58:44.194301] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
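The app_setup_trace notices give two ways to pull the tracepoint data enabled by -e 0xFFFF; both commands below are quoted from the notices, with the binary path and output locations as assumptions for illustration:

    # Live snapshot of the nvmf app's tracepoints (instance id 0):
    ./build/bin/spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace.txt
    # Or grab the shared-memory trace file for offline analysis/debug:
    cp /dev/shm/nvmf_trace.0 /tmp/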
00:37:18.177 [2024-12-16 12:58:44.194358] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:37:18.177 [2024-12-16 12:58:44.194397] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:18.177 [2024-12-16 12:58:44.194399] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:37:18.177 [2024-12-16 12:58:44.195667] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:18.177 [2024-12-16 12:58:44.196125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.177 [2024-12-16 12:58:44.196146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:18.177 [2024-12-16 12:58:44.196155] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:18.177 [2024-12-16 12:58:44.196330] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:18.177 [2024-12-16 12:58:44.196504] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:18.177 [2024-12-16 12:58:44.196513] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:18.177 [2024-12-16 12:58:44.196520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:18.177 [2024-12-16 12:58:44.199264] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:18.177 [2024-12-16 12:58:44.208673] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:18.177 [2024-12-16 12:58:44.209142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.177 [2024-12-16 12:58:44.209164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:18.177 [2024-12-16 12:58:44.209174] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:18.177 [2024-12-16 12:58:44.209348] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:18.177 [2024-12-16 12:58:44.209522] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:18.177 [2024-12-16 12:58:44.209531] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:18.177 [2024-12-16 12:58:44.209539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:18.177 [2024-12-16 12:58:44.212289] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
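The three Reactor started notices at the top of this block match the -m 0xE core mask passed to nvmf_tgt: bits 1, 2 and 3 are set, so reactors land on cores 1-3 while core 0 stays free. A quick decode of the mask:

    # Decode the -m 0xE reactor core mask from the command line above.
    mask=0xE
    for core in {0..7}; do
        if (( (mask >> core) & 1 )); then
            echo "reactor expected on core $core"   # prints cores 1, 2 and 3
        fi
    done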
00:37:18.177 [2024-12-16 12:58:44.221684] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:18.177 [2024-12-16 12:58:44.222139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.177 [2024-12-16 12:58:44.222161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:18.177 [2024-12-16 12:58:44.222171] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:18.177 [2024-12-16 12:58:44.222345] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:18.177 [2024-12-16 12:58:44.222519] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:18.177 [2024-12-16 12:58:44.222528] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:18.177 [2024-12-16 12:58:44.222536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:18.177 [2024-12-16 12:58:44.225280] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:18.177 [2024-12-16 12:58:44.234666] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:18.177 [2024-12-16 12:58:44.235132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.177 [2024-12-16 12:58:44.235153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:18.177 [2024-12-16 12:58:44.235163] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:18.177 [2024-12-16 12:58:44.235337] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:18.177 [2024-12-16 12:58:44.235511] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:18.177 [2024-12-16 12:58:44.235519] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:18.177 [2024-12-16 12:58:44.235527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:18.177 [2024-12-16 12:58:44.238272] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:18.437 [2024-12-16 12:58:44.247684] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:18.437 [2024-12-16 12:58:44.248139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.437 [2024-12-16 12:58:44.248161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:18.437 [2024-12-16 12:58:44.248170] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:18.437 [2024-12-16 12:58:44.248345] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:18.437 [2024-12-16 12:58:44.248519] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:18.437 [2024-12-16 12:58:44.248528] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:18.437 [2024-12-16 12:58:44.248536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:18.437 [2024-12-16 12:58:44.251282] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:18.437 [2024-12-16 12:58:44.260664] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:18.437 [2024-12-16 12:58:44.261096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.437 [2024-12-16 12:58:44.261117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:18.437 [2024-12-16 12:58:44.261126] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:18.437 [2024-12-16 12:58:44.261305] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:18.437 [2024-12-16 12:58:44.261477] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:18.437 [2024-12-16 12:58:44.261486] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:18.437 [2024-12-16 12:58:44.261493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:18.437 [2024-12-16 12:58:44.264230] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:18.437 [2024-12-16 12:58:44.273767] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:18.437 [2024-12-16 12:58:44.274129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.437 [2024-12-16 12:58:44.274145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:18.437 [2024-12-16 12:58:44.274153] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:18.437 [2024-12-16 12:58:44.274326] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:18.437 [2024-12-16 12:58:44.274499] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:18.437 [2024-12-16 12:58:44.274507] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:18.437 [2024-12-16 12:58:44.274514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:18.437 [2024-12-16 12:58:44.277255] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:18.437 12:58:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:18.437 12:58:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:37:18.437 12:58:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:37:18.437 12:58:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:18.437 12:58:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:18.437 [2024-12-16 12:58:44.286804] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:18.437 [2024-12-16 12:58:44.287229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.437 [2024-12-16 12:58:44.287246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:18.437 [2024-12-16 12:58:44.287254] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:18.437 [2024-12-16 12:58:44.287426] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:18.437 [2024-12-16 12:58:44.287600] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:18.437 [2024-12-16 12:58:44.287609] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:18.437 [2024-12-16 12:58:44.287616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:18.437 [2024-12-16 12:58:44.290372] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:18.437 [2024-12-16 12:58:44.299774] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:18.437 [2024-12-16 12:58:44.300057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.437 [2024-12-16 12:58:44.300073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:18.437 [2024-12-16 12:58:44.300081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:18.437 [2024-12-16 12:58:44.300263] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:18.437 [2024-12-16 12:58:44.300437] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:18.438 [2024-12-16 12:58:44.300445] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:18.438 [2024-12-16 12:58:44.300452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:18.438 [2024-12-16 12:58:44.303195] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:18.438 [2024-12-16 12:58:44.312764] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:18.438 [2024-12-16 12:58:44.313190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.438 [2024-12-16 12:58:44.313207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:18.438 [2024-12-16 12:58:44.313218] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:18.438 [2024-12-16 12:58:44.313390] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:18.438 [2024-12-16 12:58:44.313563] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:18.438 [2024-12-16 12:58:44.313572] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:18.438 [2024-12-16 12:58:44.313579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:18.438 [2024-12-16 12:58:44.316327] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:18.438 12:58:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:18.438 12:58:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:18.438 12:58:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.438 12:58:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:18.438 [2024-12-16 12:58:44.325718] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:18.438 [2024-12-16 12:58:44.326053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.438 [2024-12-16 12:58:44.326070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:18.438 [2024-12-16 12:58:44.326078] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:18.438 [2024-12-16 12:58:44.326256] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:18.438 [2024-12-16 12:58:44.326429] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:18.438 [2024-12-16 12:58:44.326437] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:18.438 [2024-12-16 12:58:44.326444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:18.438 [2024-12-16 12:58:44.327961] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:18.438 [2024-12-16 12:58:44.329188] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:18.438 [2024-12-16 12:58:44.338705] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:18.438 [2024-12-16 12:58:44.339031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.438 [2024-12-16 12:58:44.339046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:18.438 [2024-12-16 12:58:44.339057] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:18.438 [2024-12-16 12:58:44.339228] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:18.438 [2024-12-16 12:58:44.339397] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:18.438 [2024-12-16 12:58:44.339404] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:18.438 [2024-12-16 12:58:44.339411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:18.438 [2024-12-16 12:58:44.342121] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:18.438 12:58:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.438 12:58:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:18.438 12:58:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.438 12:58:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:18.438 [2024-12-16 12:58:44.351666] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:18.438 [2024-12-16 12:58:44.352019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.438 [2024-12-16 12:58:44.352036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:18.438 [2024-12-16 12:58:44.352043] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:18.438 [2024-12-16 12:58:44.352220] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:18.438 [2024-12-16 12:58:44.352393] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:18.438 [2024-12-16 12:58:44.352401] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:18.438 [2024-12-16 12:58:44.352408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:18.438 [2024-12-16 12:58:44.355151] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:18.438 [2024-12-16 12:58:44.364701] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:18.438 [2024-12-16 12:58:44.365139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.438 [2024-12-16 12:58:44.365158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:18.438 [2024-12-16 12:58:44.365166] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:18.438 [2024-12-16 12:58:44.365338] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:18.438 [2024-12-16 12:58:44.365511] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:18.438 [2024-12-16 12:58:44.365519] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:18.438 [2024-12-16 12:58:44.365526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:37:18.438 Malloc0 00:37:18.438 12:58:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.438 12:58:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:18.438 12:58:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.438 12:58:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:18.438 [2024-12-16 12:58:44.368270] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:18.438 [2024-12-16 12:58:44.377658] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:18.438 [2024-12-16 12:58:44.378064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.438 [2024-12-16 12:58:44.378080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2e90 with addr=10.0.0.2, port=4420 00:37:18.438 [2024-12-16 12:58:44.378088] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2e90 is same with the state(6) to be set 00:37:18.438 [2024-12-16 12:58:44.378265] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2e90 (9): Bad file descriptor 00:37:18.438 [2024-12-16 12:58:44.378439] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:18.438 [2024-12-16 12:58:44.378447] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:18.438 [2024-12-16 12:58:44.378453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:18.438 12:58:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.438 12:58:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:18.438 12:58:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.438 12:58:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:18.438 [2024-12-16 12:58:44.381195] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:18.438 12:58:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.438 12:58:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:18.438 12:58:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.438 12:58:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:18.438 [2024-12-16 12:58:44.390297] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:18.438 [2024-12-16 12:58:44.390742] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:18.438 12:58:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.438 12:58:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 578988 00:37:18.438 [2024-12-16 12:58:44.432722] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
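With the reset finally reported successful, the rpc_cmd calls traced across this stretch (host/bdevperf.sh@17 through @21) have rebuilt the target end to end: TCP transport, Malloc0 bdev, subsystem cnode1, namespace, and the 10.0.0.2:4420 listener. Consolidated below, with the arguments taken verbatim from the trace; rpc.py and the default RPC socket are assumptions:

    # tgt_init RPC sequence as traced above.
    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420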
00:37:19.641 4989.50 IOPS, 19.49 MiB/s [2024-12-16T11:58:46.645Z] 5909.57 IOPS, 23.08 MiB/s [2024-12-16T11:58:47.583Z] 6614.50 IOPS, 25.84 MiB/s [2024-12-16T11:58:48.963Z] 7155.00 IOPS, 27.95 MiB/s [2024-12-16T11:58:49.532Z] 7575.00 IOPS, 29.59 MiB/s [2024-12-16T11:58:50.912Z] 7926.09 IOPS, 30.96 MiB/s [2024-12-16T11:58:51.850Z] 8196.83 IOPS, 32.02 MiB/s [2024-12-16T11:58:52.788Z] 8442.31 IOPS, 32.98 MiB/s [2024-12-16T11:58:53.754Z] 8659.71 IOPS, 33.83 MiB/s [2024-12-16T11:58:53.754Z] 8837.00 IOPS, 34.52 MiB/s
00:37:27.687 Latency(us)
00:37:27.687 [2024-12-16T11:58:53.754Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:27.687 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:37:27.687 Verification LBA range: start 0x0 length 0x4000
00:37:27.687 Nvme1n1 : 15.04 8811.91 34.42 11122.61 0.00 6385.20 417.40 43441.01
00:37:27.687 [2024-12-16T11:58:53.754Z] ===================================================================================================================
00:37:27.687 [2024-12-16T11:58:53.754Z] Total : 8811.91 34.42 11122.61 0.00 6385.20 417.40 43441.01
00:37:27.947 12:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:37:27.947 12:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:27.947 12:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.947 12:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:27.947 12:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:27.947 12:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:37:27.947 12:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:37:27.947 12:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # nvmfcleanup 00:37:27.947 12:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:37:27.947 12:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:27.947 12:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:37:27.947 12:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:27.947 12:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:27.947 rmmod nvme_tcp 00:37:27.947 rmmod nvme_fabrics 00:37:27.947 rmmod nvme_keyring 00:37:27.947 12:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:27.947 12:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:37:27.947 12:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:37:27.947 12:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@513 -- # '[' -n 579885 ']' 00:37:27.947 12:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@514 -- # killprocess 579885 00:37:27.948 12:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 579885 ']' 00:37:27.948 12:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 579885 00:37:27.948 12:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:37:27.948 12:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:27.948 12:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf --
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 579885 00:37:27.948 12:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:27.948 12:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:27.948 12:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 579885' 00:37:27.948 killing process with pid 579885 00:37:27.948 12:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 579885 00:37:27.948 12:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 579885 00:37:28.207 12:58:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:37:28.208 12:58:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:37:28.208 12:58:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:37:28.208 12:58:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:37:28.208 12:58:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@787 -- # iptables-save 00:37:28.208 12:58:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:37:28.208 12:58:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@787 -- # iptables-restore 00:37:28.208 12:58:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:28.208 12:58:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:28.208 12:58:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:28.208 12:58:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:28.208 12:58:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:30.117 12:58:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:30.117 00:37:30.117 real 0m26.062s 00:37:30.117 user 1m1.007s 00:37:30.117 sys 0m6.637s 00:37:30.117 12:58:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:30.117 12:58:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:30.117 ************************************ 00:37:30.117 END TEST nvmf_bdevperf 00:37:30.117 ************************************ 00:37:30.377 12:58:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:37:30.377 12:58:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:37:30.377 12:58:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:30.377 12:58:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:30.377 ************************************ 00:37:30.377 START TEST nvmf_target_disconnect 00:37:30.377 ************************************ 00:37:30.377 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:37:30.377 * Looking for test storage... 
00:37:30.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:30.377 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:30.377 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:37:30.377 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:30.377 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:30.377 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:30.377 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:30.377 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:30.377 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:37:30.377 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:37:30.377 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:37:30.377 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:37:30.377 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:37:30.377 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:37:30.377 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:37:30.377 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:30.377 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:30.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:30.378 --rc genhtml_branch_coverage=1 00:37:30.378 --rc genhtml_function_coverage=1 00:37:30.378 --rc genhtml_legend=1 00:37:30.378 --rc geninfo_all_blocks=1 00:37:30.378 --rc geninfo_unexecuted_blocks=1 00:37:30.378 00:37:30.378 ' 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:30.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:30.378 --rc genhtml_branch_coverage=1 00:37:30.378 --rc genhtml_function_coverage=1 00:37:30.378 --rc genhtml_legend=1 00:37:30.378 --rc geninfo_all_blocks=1 00:37:30.378 --rc geninfo_unexecuted_blocks=1 00:37:30.378 00:37:30.378 ' 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:30.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:30.378 --rc genhtml_branch_coverage=1 00:37:30.378 --rc genhtml_function_coverage=1 00:37:30.378 --rc genhtml_legend=1 00:37:30.378 --rc geninfo_all_blocks=1 00:37:30.378 --rc geninfo_unexecuted_blocks=1 00:37:30.378 00:37:30.378 ' 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:30.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:30.378 --rc genhtml_branch_coverage=1 00:37:30.378 --rc genhtml_function_coverage=1 00:37:30.378 --rc genhtml_legend=1 00:37:30.378 --rc geninfo_all_blocks=1 00:37:30.378 --rc geninfo_unexecuted_blocks=1 00:37:30.378 00:37:30.378 ' 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:30.378 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@472 -- # prepare_net_devs 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@434 -- # local -g is_hw=no 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@436 -- # remove_spdk_ns 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:37:30.378 12:58:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:37:36.956 Found 0000:af:00.0 (0x8086 - 0x159b) 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:37:36.956 Found 0000:af:00.1 (0x8086 - 0x159b) 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@375 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:37:36.956 Found net devices under 0000:af:00.0: cvl_0_0 00:37:36.956 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:37:36.957 Found net devices under 0000:af:00.1: cvl_0_1 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # is_hw=yes 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
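The nvmf_tcp_init trace that follows splits the two E810 ports into a loopback topology: the target port cvl_0_0 is moved into a network namespace (cvl_0_0_ns_spdk) and given 10.0.0.2, while the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1, so host and target traffic crosses the physical link rather than the kernel loopback. Condensed from the ip/iptables commands in the trace, with interface and namespace names exactly as they appear there:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk       # target NIC lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# the ACCEPT rule is tagged with an SPDK_NVMF comment so teardown can strip it
# again via: iptables-save | grep -v SPDK_NVMF | iptables-restore (the iptr
# step visible in the nvmf_tcp_fini cleanup at the top of this section)
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                              # sanity checks, as in the trace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1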
00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:36.957 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:36.957 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:37:36.957 00:37:36.957 --- 10.0.0.2 ping statistics --- 00:37:36.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:36.957 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:36.957 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:36.957 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:37:36.957 00:37:36.957 --- 10.0.0.1 ping statistics --- 00:37:36.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:36.957 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # return 0 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:36.957 ************************************ 00:37:36.957 START TEST nvmf_target_disconnect_tc1 00:37:36.957 ************************************ 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:36.957 12:59:02 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:36.957 [2024-12-16 12:59:02.419596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.957 [2024-12-16 12:59:02.419641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e5090 with addr=10.0.0.2, port=4420 00:37:36.957 [2024-12-16 12:59:02.419661] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:37:36.957 [2024-12-16 12:59:02.419674] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:36.957 [2024-12-16 12:59:02.419680] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:37:36.957 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:37:36.957 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:37:36.957 Initializing NVMe Controllers 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:36.957 00:37:36.957 real 0m0.114s 00:37:36.957 user 0m0.036s 00:37:36.957 sys 0m0.078s 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:37:36.957 ************************************ 00:37:36.957 END TEST nvmf_target_disconnect_tc1 00:37:36.957 ************************************ 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:36.957 ************************************ 00:37:36.957 START TEST nvmf_target_disconnect_tc2 00:37:36.957 ************************************ 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:37:36.957 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:36.958 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:36.958 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # nvmfpid=584931 00:37:36.958 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # waitforlisten 584931 00:37:36.958 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:37:36.958 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 584931 ']' 00:37:36.958 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:36.958 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:36.958 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:36.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:36.958 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:36.958 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:36.958 [2024-12-16 12:59:02.529433] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:37:36.958 [2024-12-16 12:59:02.529479] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:36.958 [2024-12-16 12:59:02.603509] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:36.958 [2024-12-16 12:59:02.643452] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:36.958 [2024-12-16 12:59:02.643492] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:37:36.958 [2024-12-16 12:59:02.643499] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:36.958 [2024-12-16 12:59:02.643505] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:36.958 [2024-12-16 12:59:02.643510] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:36.958 [2024-12-16 12:59:02.643636] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:37:36.958 [2024-12-16 12:59:02.643832] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:37:36.958 [2024-12-16 12:59:02.643742] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:37:36.958 [2024-12-16 12:59:02.643834] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:37:36.958 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:36.958 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:37:36.958 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:37:36.958 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:36.958 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:36.958 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:36.958 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:36.958 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.958 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:36.958 Malloc0 00:37:36.958 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.958 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:37:36.958 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.958 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:36.958 [2024-12-16 12:59:02.804621] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:36.958 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.958 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:36.958 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.958 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:36.958 12:59:02 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.958 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:36.958 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.958 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:36.958 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.958 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:36.958 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.958 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:36.958 [2024-12-16 12:59:02.836874] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:36.958 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.958 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:36.958 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.958 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:36.958 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.958 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=584953 00:37:36.958 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:37:36.958 12:59:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:38.872 12:59:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 584931 00:37:38.872 12:59:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:37:38.872 Read completed with error (sct=0, sc=8) 00:37:38.872 starting I/O failed 00:37:38.872 Read completed with error (sct=0, sc=8) 00:37:38.872 starting I/O failed 00:37:38.872 Read completed with error (sct=0, sc=8) 00:37:38.872 starting I/O failed 00:37:38.872 Read completed with error (sct=0, sc=8) 00:37:38.872 starting I/O failed 00:37:38.872 Read completed with error (sct=0, sc=8) 00:37:38.872 starting I/O failed 00:37:38.872 Read completed with error (sct=0, sc=8) 00:37:38.872 starting I/O failed 00:37:38.872 Read completed with error 
(sct=0, sc=8) 00:37:38.872 starting I/O failed 00:37:38.872 Read completed with error (sct=0, sc=8) 00:37:38.872 starting I/O failed 00:37:38.872 Write completed with error (sct=0, sc=8) 00:37:38.872 starting I/O failed 00:37:38.872 Write completed with error (sct=0, sc=8) 00:37:38.872 starting I/O failed 00:37:38.872 Read completed with error (sct=0, sc=8) 00:37:38.872 starting I/O failed 00:37:38.872 Read completed with error (sct=0, sc=8) 00:37:38.872 starting I/O failed 00:37:38.872 Write completed with error (sct=0, sc=8) 00:37:38.872 starting I/O failed 00:37:38.872 Read completed with error (sct=0, sc=8) 00:37:38.872 starting I/O failed 00:37:38.872 Write completed with error (sct=0, sc=8) 00:37:38.872 starting I/O failed 00:37:38.872 Read completed with error (sct=0, sc=8) 00:37:38.872 starting I/O failed 00:37:38.872 Read completed with error (sct=0, sc=8) 00:37:38.872 starting I/O failed 00:37:38.872 Read completed with error (sct=0, sc=8) 00:37:38.872 starting I/O failed 00:37:38.872 Write completed with error (sct=0, sc=8) 00:37:38.872 starting I/O failed 00:37:38.872 Write completed with error (sct=0, sc=8) 00:37:38.872 starting I/O failed 00:37:38.872 Read completed with error (sct=0, sc=8) 00:37:38.872 starting I/O failed 00:37:38.872 Write completed with error (sct=0, sc=8) 00:37:38.872 starting I/O failed 00:37:38.872 Read completed with error (sct=0, sc=8) 00:37:38.872 starting I/O failed 00:37:38.872 Read completed with error (sct=0, sc=8) 00:37:38.872 starting I/O failed 00:37:38.872 Write completed with error (sct=0, sc=8) 00:37:38.872 starting I/O failed 00:37:38.872 Read completed with error (sct=0, sc=8) 00:37:38.872 starting I/O failed 00:37:38.872 Read completed with error (sct=0, sc=8) 00:37:38.872 starting I/O failed 00:37:38.872 Write completed with error (sct=0, sc=8) 00:37:38.872 starting I/O failed 00:37:38.872 Read completed with error (sct=0, sc=8) 00:37:38.872 starting I/O failed 00:37:38.872 Write completed with error (sct=0, sc=8) 00:37:38.872 starting I/O failed 00:37:38.872 Read completed with error (sct=0, sc=8) 00:37:38.872 starting I/O failed 00:37:38.872 Read completed with error (sct=0, sc=8) 00:37:38.872 starting I/O failed 00:37:38.872 Read completed with error (sct=0, sc=8) 00:37:38.872 starting I/O failed 00:37:38.872 Read completed with error (sct=0, sc=8) 00:37:38.872 starting I/O failed 00:37:38.872 Read completed with error (sct=0, sc=8) 00:37:38.872 starting I/O failed 00:37:38.872 [2024-12-16 12:59:04.864636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:37:38.872 Read completed with error (sct=0, sc=8) 00:37:38.872 starting I/O failed 00:37:38.872 Read completed with error (sct=0, sc=8) 00:37:38.872 starting I/O failed 00:37:38.872 Read completed with error (sct=0, sc=8) 00:37:38.872 starting I/O failed 00:37:38.872 Write completed with error (sct=0, sc=8) 00:37:38.872 starting I/O failed 00:37:38.872 Read completed with error (sct=0, sc=8) 00:37:38.872 starting I/O failed 00:37:38.872 Write completed with error (sct=0, sc=8) 00:37:38.872 starting I/O failed 00:37:38.872 Write completed with error (sct=0, sc=8) 00:37:38.872 starting I/O failed 00:37:38.872 Write completed with error (sct=0, sc=8) 00:37:38.872 starting I/O failed 00:37:38.872 Read completed with error (sct=0, sc=8) 00:37:38.872 starting I/O failed 00:37:38.872 Write completed with error (sct=0, sc=8) 00:37:38.872 starting I/O failed 00:37:38.872 Write completed with error (sct=0, sc=8) 
00:37:38.872 starting I/O failed 00:37:38.872 Read completed with error (sct=0, sc=8) 00:37:38.872 starting I/O failed 00:37:38.872 Read completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Write completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Read completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Write completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Write completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Write completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Write completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Read completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Write completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Read completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Write completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Read completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Write completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Read completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Read completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Write completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Read completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 [2024-12-16 12:59:04.864831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:38.873 Read completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Read completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Read completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Read completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Read completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Read completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Write completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Read completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Write completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Write completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Write completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Write completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Read completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Write completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Read completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Write completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Read completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Write completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Read completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Read completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Write completed with error (sct=0, sc=8) 00:37:38.873 
starting I/O failed 00:37:38.873 Read completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Write completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Write completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Write completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Read completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Write completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Read completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Write completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Read completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Read completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Write completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 [2024-12-16 12:59:04.865034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:38.873 Read completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Read completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Read completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Read completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Read completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Write completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Read completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Read completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Read completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Read completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Read completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Read completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Write completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Write completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Write completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Read completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Read completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Read completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Read completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Read completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Write completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Write completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Write completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Read completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Write completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Read completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Read completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Write completed with error (sct=0, sc=8) 00:37:38.873 starting I/O 
failed 00:37:38.873 Read completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Read completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Write completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 Write completed with error (sct=0, sc=8) 00:37:38.873 starting I/O failed 00:37:38.873 [2024-12-16 12:59:04.865225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:38.873 [2024-12-16 12:59:04.865435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.873 [2024-12-16 12:59:04.865507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:38.873 qpair failed and we were unable to recover it. 00:37:38.873 [2024-12-16 12:59:04.865784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.873 [2024-12-16 12:59:04.865833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.873 qpair failed and we were unable to recover it. 00:37:38.873 [2024-12-16 12:59:04.866034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.873 [2024-12-16 12:59:04.866068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.873 qpair failed and we were unable to recover it. 00:37:38.873 [2024-12-16 12:59:04.866284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.873 [2024-12-16 12:59:04.866320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.873 qpair failed and we were unable to recover it. 00:37:38.873 [2024-12-16 12:59:04.866536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.873 [2024-12-16 12:59:04.866546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.873 qpair failed and we were unable to recover it. 00:37:38.873 [2024-12-16 12:59:04.866688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.873 [2024-12-16 12:59:04.866717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.873 qpair failed and we were unable to recover it. 00:37:38.873 [2024-12-16 12:59:04.866955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.873 [2024-12-16 12:59:04.866987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.873 qpair failed and we were unable to recover it. 00:37:38.873 [2024-12-16 12:59:04.867268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.873 [2024-12-16 12:59:04.867300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.873 qpair failed and we were unable to recover it. 
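The wall of "Read/Write completed with error (sct=0, sc=8) ... starting I/O failed" lines above is the reconnect example draining its 32-deep queues after the target was killed: in the NVMe generic status code set (sct=0), sc=0x8 is Command Aborted due to SQ Deletion, the status SPDK's host stack uses to complete outstanding requests once a qpair's TCP connection drops, and the four "CQ transport error -6 (No such device or address)" messages mark qpair ids 1 through 4 (one per core in the -c 0xF mask) going down. Condensed from the trace, the tc2 sequence that produced this looks roughly like the sketch below; rpc_cmd is the harness wrapper around scripts/rpc.py, $SPDK stands in for the jenkins workspace checkout, and the pids match the log:

ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
nvmfpid=$!                                      # 584931 in the log
# (the harness also waits for the RPC socket: waitforlisten $nvmfpid)
rpc_cmd bdev_malloc_create 64 512 -b Malloc0    # MALLOC_BDEV_SIZE=64, block 512
rpc_cmd nvmf_create_transport -t tcp -o
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$SPDK/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
reconnectpid=$!                                 # 584953 in the log
sleep 2
kill -9 "$nvmfpid"      # hard-kill the target mid-I/O; queued commands complete
                        # with sct=0 sc=8 and the host enters its reconnect loop
sleep 2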
[... the identical three-line sequence (posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every further reconnect attempt from 12:59:04.867499 through 12:59:04.913512, elapsed 00:37:38.873 to 00:37:38.879 ...]
00:37:38.879 [2024-12-16 12:59:04.913710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.879 [2024-12-16 12:59:04.913742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.879 qpair failed and we were unable to recover it. 00:37:38.879 [2024-12-16 12:59:04.914023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.879 [2024-12-16 12:59:04.914055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.879 qpair failed and we were unable to recover it. 00:37:38.879 [2024-12-16 12:59:04.914264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.879 [2024-12-16 12:59:04.914297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.879 qpair failed and we were unable to recover it. 00:37:38.879 [2024-12-16 12:59:04.914438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.879 [2024-12-16 12:59:04.914470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.879 qpair failed and we were unable to recover it. 00:37:38.879 [2024-12-16 12:59:04.914642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.879 [2024-12-16 12:59:04.914673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.879 qpair failed and we were unable to recover it. 00:37:38.879 [2024-12-16 12:59:04.914936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.879 [2024-12-16 12:59:04.914968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.879 qpair failed and we were unable to recover it. 00:37:38.879 [2024-12-16 12:59:04.915147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.879 [2024-12-16 12:59:04.915180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.879 qpair failed and we were unable to recover it. 00:37:38.879 [2024-12-16 12:59:04.915420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.879 [2024-12-16 12:59:04.915452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.879 qpair failed and we were unable to recover it. 00:37:38.879 [2024-12-16 12:59:04.915559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.879 [2024-12-16 12:59:04.915591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.879 qpair failed and we were unable to recover it. 00:37:38.879 [2024-12-16 12:59:04.915782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.879 [2024-12-16 12:59:04.915814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.879 qpair failed and we were unable to recover it. 
00:37:38.879 [2024-12-16 12:59:04.916032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.879 [2024-12-16 12:59:04.916064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.879 qpair failed and we were unable to recover it. 00:37:38.879 [2024-12-16 12:59:04.916333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.879 [2024-12-16 12:59:04.916366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.879 qpair failed and we were unable to recover it. 00:37:38.879 [2024-12-16 12:59:04.916567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.879 [2024-12-16 12:59:04.916599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.879 qpair failed and we were unable to recover it. 00:37:38.879 [2024-12-16 12:59:04.916835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.879 [2024-12-16 12:59:04.916867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.879 qpair failed and we were unable to recover it. 00:37:38.879 [2024-12-16 12:59:04.917079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.879 [2024-12-16 12:59:04.917110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.879 qpair failed and we were unable to recover it. 00:37:38.879 [2024-12-16 12:59:04.917265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.879 [2024-12-16 12:59:04.917298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.879 qpair failed and we were unable to recover it. 00:37:38.879 [2024-12-16 12:59:04.917543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.879 [2024-12-16 12:59:04.917575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.879 qpair failed and we were unable to recover it. 00:37:38.879 [2024-12-16 12:59:04.917933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.879 [2024-12-16 12:59:04.917964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.879 qpair failed and we were unable to recover it. 00:37:38.879 [2024-12-16 12:59:04.918249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.879 [2024-12-16 12:59:04.918282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.879 qpair failed and we were unable to recover it. 00:37:38.879 [2024-12-16 12:59:04.918521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.879 [2024-12-16 12:59:04.918553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.879 qpair failed and we were unable to recover it. 
00:37:38.879 [2024-12-16 12:59:04.918746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.879 [2024-12-16 12:59:04.918777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.879 qpair failed and we were unable to recover it. 00:37:38.879 [2024-12-16 12:59:04.918961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.879 [2024-12-16 12:59:04.918992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.879 qpair failed and we were unable to recover it. 00:37:38.879 [2024-12-16 12:59:04.919240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.879 [2024-12-16 12:59:04.919274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.879 qpair failed and we were unable to recover it. 00:37:38.879 [2024-12-16 12:59:04.919412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.879 [2024-12-16 12:59:04.919443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.879 qpair failed and we were unable to recover it. 00:37:38.879 [2024-12-16 12:59:04.919616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.879 [2024-12-16 12:59:04.919648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.879 qpair failed and we were unable to recover it. 00:37:38.879 [2024-12-16 12:59:04.919853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.879 [2024-12-16 12:59:04.919892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.879 qpair failed and we were unable to recover it. 00:37:38.879 [2024-12-16 12:59:04.920143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.879 [2024-12-16 12:59:04.920176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.879 qpair failed and we were unable to recover it. 00:37:38.879 [2024-12-16 12:59:04.920350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.879 [2024-12-16 12:59:04.920382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.879 qpair failed and we were unable to recover it. 00:37:38.879 [2024-12-16 12:59:04.920559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.879 [2024-12-16 12:59:04.920590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.879 qpair failed and we were unable to recover it. 00:37:38.879 [2024-12-16 12:59:04.920851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.879 [2024-12-16 12:59:04.920883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.879 qpair failed and we were unable to recover it. 
00:37:38.879 [2024-12-16 12:59:04.921176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.879 [2024-12-16 12:59:04.921209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.879 qpair failed and we were unable to recover it. 00:37:38.879 [2024-12-16 12:59:04.921388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.879 [2024-12-16 12:59:04.921420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.879 qpair failed and we were unable to recover it. 00:37:38.879 [2024-12-16 12:59:04.921615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.879 [2024-12-16 12:59:04.921647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.879 qpair failed and we were unable to recover it. 00:37:38.880 [2024-12-16 12:59:04.921856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.880 [2024-12-16 12:59:04.921888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.880 qpair failed and we were unable to recover it. 00:37:38.880 [2024-12-16 12:59:04.922083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.880 [2024-12-16 12:59:04.922138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.880 qpair failed and we were unable to recover it. 00:37:38.880 [2024-12-16 12:59:04.922380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.880 [2024-12-16 12:59:04.922414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.880 qpair failed and we were unable to recover it. 00:37:38.880 [2024-12-16 12:59:04.922562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.880 [2024-12-16 12:59:04.922592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.880 qpair failed and we were unable to recover it. 00:37:38.880 [2024-12-16 12:59:04.922729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.880 [2024-12-16 12:59:04.922761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.880 qpair failed and we were unable to recover it. 00:37:38.880 [2024-12-16 12:59:04.923025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.880 [2024-12-16 12:59:04.923057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.880 qpair failed and we were unable to recover it. 00:37:38.880 [2024-12-16 12:59:04.923260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.880 [2024-12-16 12:59:04.923295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.880 qpair failed and we were unable to recover it. 
00:37:38.880 [2024-12-16 12:59:04.923473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.880 [2024-12-16 12:59:04.923505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.880 qpair failed and we were unable to recover it. 00:37:38.880 [2024-12-16 12:59:04.923708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.880 [2024-12-16 12:59:04.923740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.880 qpair failed and we were unable to recover it. 00:37:38.880 [2024-12-16 12:59:04.924004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.880 [2024-12-16 12:59:04.924035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.880 qpair failed and we were unable to recover it. 00:37:38.880 [2024-12-16 12:59:04.924151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.880 [2024-12-16 12:59:04.924185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.880 qpair failed and we were unable to recover it. 00:37:38.880 [2024-12-16 12:59:04.924428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.880 [2024-12-16 12:59:04.924460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.880 qpair failed and we were unable to recover it. 00:37:38.880 [2024-12-16 12:59:04.924637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.880 [2024-12-16 12:59:04.924669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.880 qpair failed and we were unable to recover it. 00:37:38.880 [2024-12-16 12:59:04.924957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.880 [2024-12-16 12:59:04.924989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.880 qpair failed and we were unable to recover it. 00:37:38.880 [2024-12-16 12:59:04.925236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.880 [2024-12-16 12:59:04.925269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.880 qpair failed and we were unable to recover it. 00:37:38.880 [2024-12-16 12:59:04.925505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.880 [2024-12-16 12:59:04.925537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.880 qpair failed and we were unable to recover it. 00:37:38.880 [2024-12-16 12:59:04.925785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.880 [2024-12-16 12:59:04.925816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.880 qpair failed and we were unable to recover it. 
00:37:38.880 [2024-12-16 12:59:04.926015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.880 [2024-12-16 12:59:04.926047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.880 qpair failed and we were unable to recover it. 00:37:38.880 [2024-12-16 12:59:04.926363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.880 [2024-12-16 12:59:04.926395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.880 qpair failed and we were unable to recover it. 00:37:38.880 [2024-12-16 12:59:04.926546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.880 [2024-12-16 12:59:04.926578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.880 qpair failed and we were unable to recover it. 00:37:38.880 [2024-12-16 12:59:04.926701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.880 [2024-12-16 12:59:04.926732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.880 qpair failed and we were unable to recover it. 00:37:38.880 [2024-12-16 12:59:04.926925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.880 [2024-12-16 12:59:04.926957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.880 qpair failed and we were unable to recover it. 00:37:38.880 [2024-12-16 12:59:04.927255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.880 [2024-12-16 12:59:04.927288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.880 qpair failed and we were unable to recover it. 00:37:38.880 [2024-12-16 12:59:04.927424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.880 [2024-12-16 12:59:04.927454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.880 qpair failed and we were unable to recover it. 00:37:38.880 [2024-12-16 12:59:04.927652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.880 [2024-12-16 12:59:04.927684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.880 qpair failed and we were unable to recover it. 00:37:38.880 [2024-12-16 12:59:04.927884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.880 [2024-12-16 12:59:04.927915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.880 qpair failed and we were unable to recover it. 00:37:38.880 [2024-12-16 12:59:04.928106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.880 [2024-12-16 12:59:04.928219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.880 qpair failed and we were unable to recover it. 
00:37:38.880 [2024-12-16 12:59:04.928353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.880 [2024-12-16 12:59:04.928384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.880 qpair failed and we were unable to recover it. 00:37:38.880 [2024-12-16 12:59:04.928652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.880 [2024-12-16 12:59:04.928683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.880 qpair failed and we were unable to recover it. 00:37:38.880 [2024-12-16 12:59:04.928923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.880 [2024-12-16 12:59:04.928954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.880 qpair failed and we were unable to recover it. 00:37:38.880 [2024-12-16 12:59:04.929219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.880 [2024-12-16 12:59:04.929253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.880 qpair failed and we were unable to recover it. 00:37:38.880 [2024-12-16 12:59:04.929448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.880 [2024-12-16 12:59:04.929480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.880 qpair failed and we were unable to recover it. 00:37:38.880 [2024-12-16 12:59:04.929699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.880 [2024-12-16 12:59:04.929732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.880 qpair failed and we were unable to recover it. 00:37:38.880 [2024-12-16 12:59:04.929935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.880 [2024-12-16 12:59:04.929967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.880 qpair failed and we were unable to recover it. 00:37:38.880 [2024-12-16 12:59:04.930194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.880 [2024-12-16 12:59:04.930228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.880 qpair failed and we were unable to recover it. 00:37:38.880 [2024-12-16 12:59:04.930472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.880 [2024-12-16 12:59:04.930505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.880 qpair failed and we were unable to recover it. 00:37:38.880 [2024-12-16 12:59:04.930758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.880 [2024-12-16 12:59:04.930790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.880 qpair failed and we were unable to recover it. 
00:37:38.880 [2024-12-16 12:59:04.931053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.880 [2024-12-16 12:59:04.931085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.880 qpair failed and we were unable to recover it. 00:37:38.880 [2024-12-16 12:59:04.931250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.880 [2024-12-16 12:59:04.931283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.880 qpair failed and we were unable to recover it. 00:37:38.880 [2024-12-16 12:59:04.931535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.880 [2024-12-16 12:59:04.931567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.881 qpair failed and we were unable to recover it. 00:37:38.881 [2024-12-16 12:59:04.931689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.881 [2024-12-16 12:59:04.931721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.881 qpair failed and we were unable to recover it. 00:37:38.881 [2024-12-16 12:59:04.931825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.881 [2024-12-16 12:59:04.931857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:38.881 qpair failed and we were unable to recover it. 00:37:38.881 [2024-12-16 12:59:04.932057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.157 [2024-12-16 12:59:04.932091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.157 qpair failed and we were unable to recover it. 00:37:39.157 [2024-12-16 12:59:04.932257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.157 [2024-12-16 12:59:04.932290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.157 qpair failed and we were unable to recover it. 00:37:39.157 [2024-12-16 12:59:04.932438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.157 [2024-12-16 12:59:04.932471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.157 qpair failed and we were unable to recover it. 00:37:39.157 [2024-12-16 12:59:04.932721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.157 [2024-12-16 12:59:04.932771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.157 qpair failed and we were unable to recover it. 00:37:39.157 [2024-12-16 12:59:04.933065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.157 [2024-12-16 12:59:04.933098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.157 qpair failed and we were unable to recover it. 
00:37:39.157 [2024-12-16 12:59:04.933247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.157 [2024-12-16 12:59:04.933280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.157 qpair failed and we were unable to recover it. 00:37:39.157 [2024-12-16 12:59:04.933476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.157 [2024-12-16 12:59:04.933508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.157 qpair failed and we were unable to recover it. 00:37:39.157 [2024-12-16 12:59:04.933724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.157 [2024-12-16 12:59:04.933756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.157 qpair failed and we were unable to recover it. 00:37:39.157 [2024-12-16 12:59:04.934023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.157 [2024-12-16 12:59:04.934054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.157 qpair failed and we were unable to recover it. 00:37:39.157 [2024-12-16 12:59:04.934232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.157 [2024-12-16 12:59:04.934266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.157 qpair failed and we were unable to recover it. 00:37:39.157 [2024-12-16 12:59:04.934464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.157 [2024-12-16 12:59:04.934495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.157 qpair failed and we were unable to recover it. 00:37:39.157 [2024-12-16 12:59:04.934683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.157 [2024-12-16 12:59:04.934714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.157 qpair failed and we were unable to recover it. 00:37:39.157 [2024-12-16 12:59:04.934978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.157 [2024-12-16 12:59:04.935010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.157 qpair failed and we were unable to recover it. 00:37:39.157 [2024-12-16 12:59:04.935281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.157 [2024-12-16 12:59:04.935315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.157 qpair failed and we were unable to recover it. 00:37:39.157 [2024-12-16 12:59:04.935511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.157 [2024-12-16 12:59:04.935543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.157 qpair failed and we were unable to recover it. 
00:37:39.157 [2024-12-16 12:59:04.935688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.157 [2024-12-16 12:59:04.935720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.157 qpair failed and we were unable to recover it. 00:37:39.157 [2024-12-16 12:59:04.935858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.157 [2024-12-16 12:59:04.935890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.157 qpair failed and we were unable to recover it. 00:37:39.157 [2024-12-16 12:59:04.936096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.157 [2024-12-16 12:59:04.936139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.157 qpair failed and we were unable to recover it. 00:37:39.157 [2024-12-16 12:59:04.936405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.157 [2024-12-16 12:59:04.936442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.157 qpair failed and we were unable to recover it. 00:37:39.157 [2024-12-16 12:59:04.936640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.157 [2024-12-16 12:59:04.936673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.157 qpair failed and we were unable to recover it. 00:37:39.157 [2024-12-16 12:59:04.936910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.157 [2024-12-16 12:59:04.936942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.157 qpair failed and we were unable to recover it. 00:37:39.157 [2024-12-16 12:59:04.937150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.157 [2024-12-16 12:59:04.937183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.157 qpair failed and we were unable to recover it. 00:37:39.157 [2024-12-16 12:59:04.937402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.157 [2024-12-16 12:59:04.937435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.157 qpair failed and we were unable to recover it. 00:37:39.157 [2024-12-16 12:59:04.937678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.157 [2024-12-16 12:59:04.937710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.157 qpair failed and we were unable to recover it. 00:37:39.157 [2024-12-16 12:59:04.937917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.157 [2024-12-16 12:59:04.937949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.157 qpair failed and we were unable to recover it. 
00:37:39.157 [2024-12-16 12:59:04.938135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.157 [2024-12-16 12:59:04.938168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.157 qpair failed and we were unable to recover it. 00:37:39.157 [2024-12-16 12:59:04.938295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.157 [2024-12-16 12:59:04.938327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.157 qpair failed and we were unable to recover it. 00:37:39.157 [2024-12-16 12:59:04.938519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.157 [2024-12-16 12:59:04.938550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.157 qpair failed and we were unable to recover it. 00:37:39.157 [2024-12-16 12:59:04.938864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.157 [2024-12-16 12:59:04.938896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.157 qpair failed and we were unable to recover it. 00:37:39.157 [2024-12-16 12:59:04.939018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.157 [2024-12-16 12:59:04.939050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.157 qpair failed and we were unable to recover it. 00:37:39.157 [2024-12-16 12:59:04.939297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.157 [2024-12-16 12:59:04.939331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.157 qpair failed and we were unable to recover it. 00:37:39.157 [2024-12-16 12:59:04.939468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.157 [2024-12-16 12:59:04.939500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.157 qpair failed and we were unable to recover it. 00:37:39.157 [2024-12-16 12:59:04.939712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.157 [2024-12-16 12:59:04.939744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.157 qpair failed and we were unable to recover it. 00:37:39.157 [2024-12-16 12:59:04.939997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.157 [2024-12-16 12:59:04.940028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.157 qpair failed and we were unable to recover it. 00:37:39.158 [2024-12-16 12:59:04.940233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.158 [2024-12-16 12:59:04.940267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.158 qpair failed and we were unable to recover it. 
00:37:39.158 [2024-12-16 12:59:04.940505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.158 [2024-12-16 12:59:04.940537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.158 qpair failed and we were unable to recover it. 00:37:39.158 [2024-12-16 12:59:04.940726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.158 [2024-12-16 12:59:04.940757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.158 qpair failed and we were unable to recover it. 00:37:39.158 [2024-12-16 12:59:04.941045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.158 [2024-12-16 12:59:04.941077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.158 qpair failed and we were unable to recover it. 00:37:39.158 [2024-12-16 12:59:04.941311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.158 [2024-12-16 12:59:04.941345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.158 qpair failed and we were unable to recover it. 00:37:39.158 [2024-12-16 12:59:04.941536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.158 [2024-12-16 12:59:04.941566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.158 qpair failed and we were unable to recover it. 00:37:39.158 [2024-12-16 12:59:04.941790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.158 [2024-12-16 12:59:04.941821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.158 qpair failed and we were unable to recover it. 00:37:39.158 [2024-12-16 12:59:04.941951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.158 [2024-12-16 12:59:04.941983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.158 qpair failed and we were unable to recover it. 00:37:39.158 [2024-12-16 12:59:04.942236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.158 [2024-12-16 12:59:04.942269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.158 qpair failed and we were unable to recover it. 00:37:39.158 [2024-12-16 12:59:04.942414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.158 [2024-12-16 12:59:04.942446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.158 qpair failed and we were unable to recover it. 00:37:39.158 [2024-12-16 12:59:04.942753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.158 [2024-12-16 12:59:04.942784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.158 qpair failed and we were unable to recover it. 
00:37:39.158 [2024-12-16 12:59:04.943050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.158 [2024-12-16 12:59:04.943088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.158 qpair failed and we were unable to recover it. 00:37:39.158 [2024-12-16 12:59:04.943299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.158 [2024-12-16 12:59:04.943332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.158 qpair failed and we were unable to recover it. 00:37:39.158 [2024-12-16 12:59:04.943628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.158 [2024-12-16 12:59:04.943670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.158 qpair failed and we were unable to recover it. 00:37:39.158 [2024-12-16 12:59:04.943924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.158 [2024-12-16 12:59:04.943957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.158 qpair failed and we were unable to recover it. 00:37:39.158 [2024-12-16 12:59:04.944154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.158 [2024-12-16 12:59:04.944187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.158 qpair failed and we were unable to recover it. 00:37:39.158 [2024-12-16 12:59:04.944459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.158 [2024-12-16 12:59:04.944491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.158 qpair failed and we were unable to recover it. 00:37:39.158 [2024-12-16 12:59:04.944692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.158 [2024-12-16 12:59:04.944723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.158 qpair failed and we were unable to recover it. 00:37:39.158 [2024-12-16 12:59:04.944927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.158 [2024-12-16 12:59:04.944958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.158 qpair failed and we were unable to recover it. 00:37:39.158 [2024-12-16 12:59:04.945206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.158 [2024-12-16 12:59:04.945240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.158 qpair failed and we were unable to recover it. 00:37:39.158 [2024-12-16 12:59:04.945415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.158 [2024-12-16 12:59:04.945446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.158 qpair failed and we were unable to recover it. 
00:37:39.158 [2024-12-16 12:59:04.945638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.158 [2024-12-16 12:59:04.945670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.158 qpair failed and we were unable to recover it.
[... the identical connect() failed (errno = 111) / "qpair failed and we were unable to recover it" pair repeats for every retry of tqpair=0x1588110 against 10.0.0.2:4420 between 12:59:04.945 and 12:59:04.984; only the timestamps differ ...]
00:37:39.206 [2024-12-16 12:59:04.984355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.206 [2024-12-16 12:59:04.984438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.206 qpair failed and we were unable to recover it.
[... the same pair then repeats for every retry of tqpair=0x7f84e4000b90 against 10.0.0.2:4420 between 12:59:04.984 and 12:59:04.999 ...]
00:37:39.208 [2024-12-16 12:59:04.999608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.208 [2024-12-16 12:59:04.999649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.208 qpair failed and we were unable to recover it. 00:37:39.208 [2024-12-16 12:59:05.000025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.208 [2024-12-16 12:59:05.000065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.208 qpair failed and we were unable to recover it. 00:37:39.208 [2024-12-16 12:59:05.000319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.208 [2024-12-16 12:59:05.000360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.208 qpair failed and we were unable to recover it. 00:37:39.208 [2024-12-16 12:59:05.000640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.208 [2024-12-16 12:59:05.000680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.208 qpair failed and we were unable to recover it. 00:37:39.208 [2024-12-16 12:59:05.001012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.208 [2024-12-16 12:59:05.001052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.208 qpair failed and we were unable to recover it. 00:37:39.208 [2024-12-16 12:59:05.001393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.208 [2024-12-16 12:59:05.001433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.208 qpair failed and we were unable to recover it. 00:37:39.208 [2024-12-16 12:59:05.001633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.208 [2024-12-16 12:59:05.001674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.208 qpair failed and we were unable to recover it. 00:37:39.208 [2024-12-16 12:59:05.001913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.208 [2024-12-16 12:59:05.001951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.208 qpair failed and we were unable to recover it. 00:37:39.208 [2024-12-16 12:59:05.002236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.208 [2024-12-16 12:59:05.002278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.208 qpair failed and we were unable to recover it. 00:37:39.208 [2024-12-16 12:59:05.002457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.208 [2024-12-16 12:59:05.002496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.208 qpair failed and we were unable to recover it. 
00:37:39.208 [2024-12-16 12:59:05.002840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.208 [2024-12-16 12:59:05.002879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.208 qpair failed and we were unable to recover it. 00:37:39.208 [2024-12-16 12:59:05.003175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.208 [2024-12-16 12:59:05.003217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.208 qpair failed and we were unable to recover it. 00:37:39.208 [2024-12-16 12:59:05.003527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.208 [2024-12-16 12:59:05.003567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.208 qpair failed and we were unable to recover it. 00:37:39.208 [2024-12-16 12:59:05.003834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.208 [2024-12-16 12:59:05.003874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.208 qpair failed and we were unable to recover it. 00:37:39.208 [2024-12-16 12:59:05.004244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.208 [2024-12-16 12:59:05.004286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.208 qpair failed and we were unable to recover it. 00:37:39.208 [2024-12-16 12:59:05.004585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.208 [2024-12-16 12:59:05.004624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.208 qpair failed and we were unable to recover it. 00:37:39.208 [2024-12-16 12:59:05.004904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.208 [2024-12-16 12:59:05.004944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.208 qpair failed and we were unable to recover it. 00:37:39.208 [2024-12-16 12:59:05.005225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.208 [2024-12-16 12:59:05.005267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.208 qpair failed and we were unable to recover it. 00:37:39.208 [2024-12-16 12:59:05.005486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.208 [2024-12-16 12:59:05.005526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.208 qpair failed and we were unable to recover it. 00:37:39.208 [2024-12-16 12:59:05.005688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.208 [2024-12-16 12:59:05.005736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.208 qpair failed and we were unable to recover it. 
00:37:39.208 [2024-12-16 12:59:05.006044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.208 [2024-12-16 12:59:05.006085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.208 qpair failed and we were unable to recover it. 00:37:39.208 [2024-12-16 12:59:05.006267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.208 [2024-12-16 12:59:05.006313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.208 qpair failed and we were unable to recover it. 00:37:39.208 [2024-12-16 12:59:05.006624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.208 [2024-12-16 12:59:05.006664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.208 qpair failed and we were unable to recover it. 00:37:39.208 [2024-12-16 12:59:05.006956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.208 [2024-12-16 12:59:05.007004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.208 qpair failed and we were unable to recover it. 00:37:39.208 [2024-12-16 12:59:05.007288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.208 [2024-12-16 12:59:05.007330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.208 qpair failed and we were unable to recover it. 00:37:39.208 [2024-12-16 12:59:05.007575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.208 [2024-12-16 12:59:05.007617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.208 qpair failed and we were unable to recover it. 00:37:39.208 [2024-12-16 12:59:05.007944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.208 [2024-12-16 12:59:05.007984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.208 qpair failed and we were unable to recover it. 00:37:39.208 [2024-12-16 12:59:05.008169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.208 [2024-12-16 12:59:05.008215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.208 qpair failed and we were unable to recover it. 00:37:39.208 [2024-12-16 12:59:05.008411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.208 [2024-12-16 12:59:05.008451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.208 qpair failed and we were unable to recover it. 00:37:39.208 [2024-12-16 12:59:05.008780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.208 [2024-12-16 12:59:05.008821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.208 qpair failed and we were unable to recover it. 
00:37:39.208 [2024-12-16 12:59:05.009139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.208 [2024-12-16 12:59:05.009180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.208 qpair failed and we were unable to recover it. 00:37:39.208 [2024-12-16 12:59:05.009487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.208 [2024-12-16 12:59:05.009527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.208 qpair failed and we were unable to recover it. 00:37:39.208 [2024-12-16 12:59:05.009781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.208 [2024-12-16 12:59:05.009822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.208 qpair failed and we were unable to recover it. 00:37:39.208 [2024-12-16 12:59:05.010142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.208 [2024-12-16 12:59:05.010183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.208 qpair failed and we were unable to recover it. 00:37:39.208 [2024-12-16 12:59:05.010371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.208 [2024-12-16 12:59:05.010412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.208 qpair failed and we were unable to recover it. 00:37:39.208 [2024-12-16 12:59:05.010594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.208 [2024-12-16 12:59:05.010635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.208 qpair failed and we were unable to recover it. 00:37:39.208 [2024-12-16 12:59:05.010956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.208 [2024-12-16 12:59:05.010996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.208 qpair failed and we were unable to recover it. 00:37:39.208 [2024-12-16 12:59:05.011307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.208 [2024-12-16 12:59:05.011351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.208 qpair failed and we were unable to recover it. 00:37:39.208 [2024-12-16 12:59:05.011584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.209 [2024-12-16 12:59:05.011624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.209 qpair failed and we were unable to recover it. 00:37:39.209 [2024-12-16 12:59:05.011935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.209 [2024-12-16 12:59:05.011975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.209 qpair failed and we were unable to recover it. 
00:37:39.209 [2024-12-16 12:59:05.012216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.209 [2024-12-16 12:59:05.012258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.209 qpair failed and we were unable to recover it. 00:37:39.209 [2024-12-16 12:59:05.012418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.209 [2024-12-16 12:59:05.012464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.209 qpair failed and we were unable to recover it. 00:37:39.209 [2024-12-16 12:59:05.012745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.209 [2024-12-16 12:59:05.012786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.209 qpair failed and we were unable to recover it. 00:37:39.209 [2024-12-16 12:59:05.013010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.209 [2024-12-16 12:59:05.013049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.209 qpair failed and we were unable to recover it. 00:37:39.209 [2024-12-16 12:59:05.013302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.209 [2024-12-16 12:59:05.013343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.209 qpair failed and we were unable to recover it. 00:37:39.209 [2024-12-16 12:59:05.013639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.209 [2024-12-16 12:59:05.013679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.209 qpair failed and we were unable to recover it. 00:37:39.209 [2024-12-16 12:59:05.013929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.209 [2024-12-16 12:59:05.013969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.209 qpair failed and we were unable to recover it. 00:37:39.209 [2024-12-16 12:59:05.014162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.209 [2024-12-16 12:59:05.014204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.209 qpair failed and we were unable to recover it. 00:37:39.209 [2024-12-16 12:59:05.014433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.209 [2024-12-16 12:59:05.014473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.209 qpair failed and we were unable to recover it. 00:37:39.209 [2024-12-16 12:59:05.014778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.209 [2024-12-16 12:59:05.014819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.209 qpair failed and we were unable to recover it. 
00:37:39.209 [2024-12-16 12:59:05.015091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.209 [2024-12-16 12:59:05.015143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.209 qpair failed and we were unable to recover it. 00:37:39.209 [2024-12-16 12:59:05.015395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.209 [2024-12-16 12:59:05.015435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.209 qpair failed and we were unable to recover it. 00:37:39.209 [2024-12-16 12:59:05.015721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.209 [2024-12-16 12:59:05.015761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.209 qpair failed and we were unable to recover it. 00:37:39.209 [2024-12-16 12:59:05.016057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.209 [2024-12-16 12:59:05.016097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.209 qpair failed and we were unable to recover it. 00:37:39.209 [2024-12-16 12:59:05.016406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.209 [2024-12-16 12:59:05.016447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.209 qpair failed and we were unable to recover it. 00:37:39.209 [2024-12-16 12:59:05.016749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.209 [2024-12-16 12:59:05.016790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.209 qpair failed and we were unable to recover it. 00:37:39.209 [2024-12-16 12:59:05.017103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.209 [2024-12-16 12:59:05.017155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.209 qpair failed and we were unable to recover it. 00:37:39.209 [2024-12-16 12:59:05.017393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.209 [2024-12-16 12:59:05.017433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.209 qpair failed and we were unable to recover it. 00:37:39.209 [2024-12-16 12:59:05.017760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.209 [2024-12-16 12:59:05.017800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.209 qpair failed and we were unable to recover it. 00:37:39.209 [2024-12-16 12:59:05.018018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.209 [2024-12-16 12:59:05.018057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.209 qpair failed and we were unable to recover it. 
00:37:39.209 [2024-12-16 12:59:05.018346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.209 [2024-12-16 12:59:05.018388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.209 qpair failed and we were unable to recover it. 00:37:39.209 [2024-12-16 12:59:05.018712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.209 [2024-12-16 12:59:05.018753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.209 qpair failed and we were unable to recover it. 00:37:39.209 [2024-12-16 12:59:05.019090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.209 [2024-12-16 12:59:05.019151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.209 qpair failed and we were unable to recover it. 00:37:39.209 [2024-12-16 12:59:05.019382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.209 [2024-12-16 12:59:05.019431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.209 qpair failed and we were unable to recover it. 00:37:39.209 [2024-12-16 12:59:05.019674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.209 [2024-12-16 12:59:05.019712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.209 qpair failed and we were unable to recover it. 00:37:39.209 [2024-12-16 12:59:05.020032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.209 [2024-12-16 12:59:05.020072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.209 qpair failed and we were unable to recover it. 00:37:39.209 [2024-12-16 12:59:05.020324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.209 [2024-12-16 12:59:05.020365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.209 qpair failed and we were unable to recover it. 00:37:39.209 [2024-12-16 12:59:05.020684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.209 [2024-12-16 12:59:05.020724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.209 qpair failed and we were unable to recover it. 00:37:39.209 [2024-12-16 12:59:05.021045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.209 [2024-12-16 12:59:05.021084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.209 qpair failed and we were unable to recover it. 00:37:39.209 [2024-12-16 12:59:05.021406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.209 [2024-12-16 12:59:05.021447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.209 qpair failed and we were unable to recover it. 
00:37:39.209 [2024-12-16 12:59:05.021731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.209 [2024-12-16 12:59:05.021771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.209 qpair failed and we were unable to recover it. 00:37:39.209 [2024-12-16 12:59:05.022034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.209 [2024-12-16 12:59:05.022074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.209 qpair failed and we were unable to recover it. 00:37:39.209 [2024-12-16 12:59:05.022311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.209 [2024-12-16 12:59:05.022351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.209 qpair failed and we were unable to recover it. 00:37:39.209 [2024-12-16 12:59:05.022566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.209 [2024-12-16 12:59:05.022606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.209 qpair failed and we were unable to recover it. 00:37:39.209 [2024-12-16 12:59:05.022836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.209 [2024-12-16 12:59:05.022876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.209 qpair failed and we were unable to recover it. 00:37:39.209 [2024-12-16 12:59:05.023202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.209 [2024-12-16 12:59:05.023244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.210 qpair failed and we were unable to recover it. 00:37:39.210 [2024-12-16 12:59:05.023492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.210 [2024-12-16 12:59:05.023532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.210 qpair failed and we were unable to recover it. 00:37:39.210 [2024-12-16 12:59:05.023894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.210 [2024-12-16 12:59:05.023934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.210 qpair failed and we were unable to recover it. 00:37:39.210 [2024-12-16 12:59:05.024181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.210 [2024-12-16 12:59:05.024223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.210 qpair failed and we were unable to recover it. 00:37:39.210 [2024-12-16 12:59:05.024531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.210 [2024-12-16 12:59:05.024572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.210 qpair failed and we were unable to recover it. 
00:37:39.210 [2024-12-16 12:59:05.024816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.210 [2024-12-16 12:59:05.024856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.210 qpair failed and we were unable to recover it. 00:37:39.210 [2024-12-16 12:59:05.025101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.210 [2024-12-16 12:59:05.025151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.210 qpair failed and we were unable to recover it. 00:37:39.210 [2024-12-16 12:59:05.025481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.210 [2024-12-16 12:59:05.025521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.210 qpair failed and we were unable to recover it. 00:37:39.210 [2024-12-16 12:59:05.025844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.210 [2024-12-16 12:59:05.025883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.210 qpair failed and we were unable to recover it. 00:37:39.210 [2024-12-16 12:59:05.026159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.210 [2024-12-16 12:59:05.026200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.210 qpair failed and we were unable to recover it. 00:37:39.210 [2024-12-16 12:59:05.026506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.210 [2024-12-16 12:59:05.026546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.210 qpair failed and we were unable to recover it. 00:37:39.210 [2024-12-16 12:59:05.026844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.210 [2024-12-16 12:59:05.026883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.210 qpair failed and we were unable to recover it. 00:37:39.210 [2024-12-16 12:59:05.027157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.210 [2024-12-16 12:59:05.027198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.210 qpair failed and we were unable to recover it. 00:37:39.210 [2024-12-16 12:59:05.027436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.210 [2024-12-16 12:59:05.027476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.210 qpair failed and we were unable to recover it. 00:37:39.210 [2024-12-16 12:59:05.027689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.210 [2024-12-16 12:59:05.027730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.210 qpair failed and we were unable to recover it. 
00:37:39.210 [2024-12-16 12:59:05.028068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.210 [2024-12-16 12:59:05.028108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.210 qpair failed and we were unable to recover it. 00:37:39.210 [2024-12-16 12:59:05.028379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.210 [2024-12-16 12:59:05.028419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.210 qpair failed and we were unable to recover it. 00:37:39.210 [2024-12-16 12:59:05.028723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.210 [2024-12-16 12:59:05.028763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.210 qpair failed and we were unable to recover it. 00:37:39.210 [2024-12-16 12:59:05.028983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.210 [2024-12-16 12:59:05.029023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.210 qpair failed and we were unable to recover it. 00:37:39.210 [2024-12-16 12:59:05.029207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.210 [2024-12-16 12:59:05.029255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.210 qpair failed and we were unable to recover it. 00:37:39.210 [2024-12-16 12:59:05.029488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.210 [2024-12-16 12:59:05.029528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.210 qpair failed and we were unable to recover it. 00:37:39.210 [2024-12-16 12:59:05.029826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.210 [2024-12-16 12:59:05.029864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.210 qpair failed and we were unable to recover it. 00:37:39.210 [2024-12-16 12:59:05.030108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.210 [2024-12-16 12:59:05.030157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.210 qpair failed and we were unable to recover it. 00:37:39.210 [2024-12-16 12:59:05.030325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.210 [2024-12-16 12:59:05.030372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.210 qpair failed and we were unable to recover it. 00:37:39.210 [2024-12-16 12:59:05.030605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.210 [2024-12-16 12:59:05.030645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.210 qpair failed and we were unable to recover it. 
00:37:39.210 [2024-12-16 12:59:05.030950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.210 [2024-12-16 12:59:05.030991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.210 qpair failed and we were unable to recover it. 00:37:39.210 [2024-12-16 12:59:05.031245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.210 [2024-12-16 12:59:05.031287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.210 qpair failed and we were unable to recover it. 00:37:39.210 [2024-12-16 12:59:05.031514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.210 [2024-12-16 12:59:05.031555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.210 qpair failed and we were unable to recover it. 00:37:39.210 [2024-12-16 12:59:05.031856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.210 [2024-12-16 12:59:05.031904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.210 qpair failed and we were unable to recover it. 00:37:39.210 [2024-12-16 12:59:05.032212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.210 [2024-12-16 12:59:05.032254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.210 qpair failed and we were unable to recover it. 00:37:39.210 [2024-12-16 12:59:05.032477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.210 [2024-12-16 12:59:05.032517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.210 qpair failed and we were unable to recover it. 00:37:39.210 [2024-12-16 12:59:05.032866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.210 [2024-12-16 12:59:05.032906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.210 qpair failed and we were unable to recover it. 00:37:39.210 [2024-12-16 12:59:05.033142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.210 [2024-12-16 12:59:05.033185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.210 qpair failed and we were unable to recover it. 00:37:39.210 [2024-12-16 12:59:05.033477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.210 [2024-12-16 12:59:05.033517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.210 qpair failed and we were unable to recover it. 00:37:39.210 [2024-12-16 12:59:05.033703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.210 [2024-12-16 12:59:05.033744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.210 qpair failed and we were unable to recover it. 
00:37:39.210 [2024-12-16 12:59:05.034077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.210 [2024-12-16 12:59:05.034126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.210 qpair failed and we were unable to recover it. 00:37:39.210 [2024-12-16 12:59:05.034358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.210 [2024-12-16 12:59:05.034398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.210 qpair failed and we were unable to recover it. 00:37:39.210 [2024-12-16 12:59:05.034636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.210 [2024-12-16 12:59:05.034677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.210 qpair failed and we were unable to recover it. 00:37:39.210 [2024-12-16 12:59:05.034985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.211 [2024-12-16 12:59:05.035025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.211 qpair failed and we were unable to recover it. 00:37:39.211 [2024-12-16 12:59:05.035330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.211 [2024-12-16 12:59:05.035371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.211 qpair failed and we were unable to recover it. 00:37:39.211 [2024-12-16 12:59:05.035561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.211 [2024-12-16 12:59:05.035600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.211 qpair failed and we were unable to recover it. 00:37:39.211 [2024-12-16 12:59:05.035910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.211 [2024-12-16 12:59:05.035951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.211 qpair failed and we were unable to recover it. 00:37:39.211 [2024-12-16 12:59:05.036255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.211 [2024-12-16 12:59:05.036298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.211 qpair failed and we were unable to recover it. 00:37:39.211 [2024-12-16 12:59:05.036551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.211 [2024-12-16 12:59:05.036592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.211 qpair failed and we were unable to recover it. 00:37:39.211 [2024-12-16 12:59:05.036822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.211 [2024-12-16 12:59:05.036862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.211 qpair failed and we were unable to recover it. 
00:37:39.211 [2024-12-16 12:59:05.037145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.211 [2024-12-16 12:59:05.037188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.211 qpair failed and we were unable to recover it. 00:37:39.211 [2024-12-16 12:59:05.037479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.211 [2024-12-16 12:59:05.037519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.211 qpair failed and we were unable to recover it. 00:37:39.211 [2024-12-16 12:59:05.037815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.211 [2024-12-16 12:59:05.037855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.211 qpair failed and we were unable to recover it. 00:37:39.211 [2024-12-16 12:59:05.038156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.211 [2024-12-16 12:59:05.038199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.211 qpair failed and we were unable to recover it. 00:37:39.211 [2024-12-16 12:59:05.038436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.211 [2024-12-16 12:59:05.038476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.211 qpair failed and we were unable to recover it. 00:37:39.211 [2024-12-16 12:59:05.038768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.211 [2024-12-16 12:59:05.038808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.211 qpair failed and we were unable to recover it. 00:37:39.211 [2024-12-16 12:59:05.039084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.211 [2024-12-16 12:59:05.039133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.211 qpair failed and we were unable to recover it. 00:37:39.211 [2024-12-16 12:59:05.039418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.211 [2024-12-16 12:59:05.039458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.211 qpair failed and we were unable to recover it. 00:37:39.211 [2024-12-16 12:59:05.039690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.211 [2024-12-16 12:59:05.039731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.211 qpair failed and we were unable to recover it. 00:37:39.211 [2024-12-16 12:59:05.039981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.211 [2024-12-16 12:59:05.040021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.211 qpair failed and we were unable to recover it. 
00:37:39.211 [2024-12-16 12:59:05.040356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.211 [2024-12-16 12:59:05.040398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:39.211 qpair failed and we were unable to recover it.
[... the same three-line record (posix.c:1055 connect() failed, errno = 111; nvme_tcp.c:2399 sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats identically for every retry from 12:59:05.040684 through 12:59:05.090403 ...]
00:37:39.215 [2024-12-16 12:59:05.090686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.215 [2024-12-16 12:59:05.090726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:39.215 qpair failed and we were unable to recover it.
00:37:39.215 [2024-12-16 12:59:05.091139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.215 [2024-12-16 12:59:05.091216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:39.215 qpair failed and we were unable to recover it.
[... the same three-line record repeats identically for tqpair=0x1588110 from 12:59:05.091500 through 12:59:05.103600 ...]
00:37:39.216 [2024-12-16 12:59:05.103892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.216 [2024-12-16 12:59:05.103924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:39.216 qpair failed and we were unable to recover it.
00:37:39.216 [2024-12-16 12:59:05.104131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.216 [2024-12-16 12:59:05.104165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.216 qpair failed and we were unable to recover it. 00:37:39.216 [2024-12-16 12:59:05.104442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.216 [2024-12-16 12:59:05.104475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.216 qpair failed and we were unable to recover it. 00:37:39.216 [2024-12-16 12:59:05.104693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.217 [2024-12-16 12:59:05.104726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.217 qpair failed and we were unable to recover it. 00:37:39.217 [2024-12-16 12:59:05.104906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.217 [2024-12-16 12:59:05.104939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.217 qpair failed and we were unable to recover it. 00:37:39.217 [2024-12-16 12:59:05.105214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.217 [2024-12-16 12:59:05.105255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.217 qpair failed and we were unable to recover it. 00:37:39.217 [2024-12-16 12:59:05.105479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.217 [2024-12-16 12:59:05.105511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.217 qpair failed and we were unable to recover it. 00:37:39.217 [2024-12-16 12:59:05.105644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.217 [2024-12-16 12:59:05.105676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.217 qpair failed and we were unable to recover it. 00:37:39.217 [2024-12-16 12:59:05.105886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.217 [2024-12-16 12:59:05.105919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.217 qpair failed and we were unable to recover it. 00:37:39.217 [2024-12-16 12:59:05.106250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.217 [2024-12-16 12:59:05.106284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.217 qpair failed and we were unable to recover it. 00:37:39.217 [2024-12-16 12:59:05.106514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.217 [2024-12-16 12:59:05.106547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.217 qpair failed and we were unable to recover it. 
00:37:39.217 [2024-12-16 12:59:05.106759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.217 [2024-12-16 12:59:05.106792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.217 qpair failed and we were unable to recover it. 00:37:39.217 [2024-12-16 12:59:05.106996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.217 [2024-12-16 12:59:05.107029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.217 qpair failed and we were unable to recover it. 00:37:39.217 [2024-12-16 12:59:05.107309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.217 [2024-12-16 12:59:05.107343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.217 qpair failed and we were unable to recover it. 00:37:39.217 [2024-12-16 12:59:05.107492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.217 [2024-12-16 12:59:05.107524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.217 qpair failed and we were unable to recover it. 00:37:39.217 [2024-12-16 12:59:05.107656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.217 [2024-12-16 12:59:05.107690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.217 qpair failed and we were unable to recover it. 00:37:39.217 [2024-12-16 12:59:05.107892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.217 [2024-12-16 12:59:05.107924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.217 qpair failed and we were unable to recover it. 00:37:39.217 [2024-12-16 12:59:05.108232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.217 [2024-12-16 12:59:05.108265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.217 qpair failed and we were unable to recover it. 00:37:39.217 [2024-12-16 12:59:05.108534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.217 [2024-12-16 12:59:05.108567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.217 qpair failed and we were unable to recover it. 00:37:39.217 [2024-12-16 12:59:05.108851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.217 [2024-12-16 12:59:05.108884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.217 qpair failed and we were unable to recover it. 00:37:39.217 [2024-12-16 12:59:05.109152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.217 [2024-12-16 12:59:05.109186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.217 qpair failed and we were unable to recover it. 
00:37:39.217 [2024-12-16 12:59:05.109485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.217 [2024-12-16 12:59:05.109520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.217 qpair failed and we were unable to recover it. 00:37:39.217 [2024-12-16 12:59:05.109809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.217 [2024-12-16 12:59:05.109841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.217 qpair failed and we were unable to recover it. 00:37:39.217 [2024-12-16 12:59:05.110030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.217 [2024-12-16 12:59:05.110063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.217 qpair failed and we were unable to recover it. 00:37:39.217 [2024-12-16 12:59:05.110369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.217 [2024-12-16 12:59:05.110405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.217 qpair failed and we were unable to recover it. 00:37:39.217 [2024-12-16 12:59:05.110569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.217 [2024-12-16 12:59:05.110602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.217 qpair failed and we were unable to recover it. 00:37:39.217 [2024-12-16 12:59:05.110797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.217 [2024-12-16 12:59:05.110829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.217 qpair failed and we were unable to recover it. 00:37:39.217 [2024-12-16 12:59:05.110960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.217 [2024-12-16 12:59:05.110992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.217 qpair failed and we were unable to recover it. 00:37:39.217 [2024-12-16 12:59:05.111223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.217 [2024-12-16 12:59:05.111257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.217 qpair failed and we were unable to recover it. 00:37:39.217 [2024-12-16 12:59:05.111472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.217 [2024-12-16 12:59:05.111505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.217 qpair failed and we were unable to recover it. 00:37:39.217 [2024-12-16 12:59:05.111641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.217 [2024-12-16 12:59:05.111674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.217 qpair failed and we were unable to recover it. 
00:37:39.217 [2024-12-16 12:59:05.111959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.217 [2024-12-16 12:59:05.111992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.217 qpair failed and we were unable to recover it. 00:37:39.217 [2024-12-16 12:59:05.112265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.217 [2024-12-16 12:59:05.112306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.217 qpair failed and we were unable to recover it. 00:37:39.217 [2024-12-16 12:59:05.112536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.217 [2024-12-16 12:59:05.112570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.217 qpair failed and we were unable to recover it. 00:37:39.217 [2024-12-16 12:59:05.112717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.217 [2024-12-16 12:59:05.112749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.217 qpair failed and we were unable to recover it. 00:37:39.217 [2024-12-16 12:59:05.112965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.217 [2024-12-16 12:59:05.112998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.217 qpair failed and we were unable to recover it. 00:37:39.217 [2024-12-16 12:59:05.113263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.217 [2024-12-16 12:59:05.113297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.217 qpair failed and we were unable to recover it. 00:37:39.217 [2024-12-16 12:59:05.113519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.217 [2024-12-16 12:59:05.113553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.217 qpair failed and we were unable to recover it. 00:37:39.217 [2024-12-16 12:59:05.113736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.217 [2024-12-16 12:59:05.113768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.217 qpair failed and we were unable to recover it. 00:37:39.217 [2024-12-16 12:59:05.113971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.217 [2024-12-16 12:59:05.114003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.217 qpair failed and we were unable to recover it. 00:37:39.217 [2024-12-16 12:59:05.114207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.217 [2024-12-16 12:59:05.114242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.217 qpair failed and we were unable to recover it. 
00:37:39.217 [2024-12-16 12:59:05.114465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.217 [2024-12-16 12:59:05.114499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.217 qpair failed and we were unable to recover it. 00:37:39.217 [2024-12-16 12:59:05.114759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.217 [2024-12-16 12:59:05.114792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.217 qpair failed and we were unable to recover it. 00:37:39.217 [2024-12-16 12:59:05.114980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.218 [2024-12-16 12:59:05.115013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.218 qpair failed and we were unable to recover it. 00:37:39.218 [2024-12-16 12:59:05.115238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.218 [2024-12-16 12:59:05.115274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.218 qpair failed and we were unable to recover it. 00:37:39.218 [2024-12-16 12:59:05.115531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.218 [2024-12-16 12:59:05.115565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.218 qpair failed and we were unable to recover it. 00:37:39.218 [2024-12-16 12:59:05.115773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.218 [2024-12-16 12:59:05.115807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.218 qpair failed and we were unable to recover it. 00:37:39.218 [2024-12-16 12:59:05.116081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.218 [2024-12-16 12:59:05.116122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.218 qpair failed and we were unable to recover it. 00:37:39.218 [2024-12-16 12:59:05.116350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.218 [2024-12-16 12:59:05.116383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.218 qpair failed and we were unable to recover it. 00:37:39.218 [2024-12-16 12:59:05.116590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.218 [2024-12-16 12:59:05.116623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.218 qpair failed and we were unable to recover it. 00:37:39.218 [2024-12-16 12:59:05.116942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.218 [2024-12-16 12:59:05.116975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.218 qpair failed and we were unable to recover it. 
00:37:39.218 [2024-12-16 12:59:05.117240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.218 [2024-12-16 12:59:05.117273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.218 qpair failed and we were unable to recover it. 00:37:39.218 [2024-12-16 12:59:05.117415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.218 [2024-12-16 12:59:05.117448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.218 qpair failed and we were unable to recover it. 00:37:39.218 [2024-12-16 12:59:05.117654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.218 [2024-12-16 12:59:05.117687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.218 qpair failed and we were unable to recover it. 00:37:39.218 [2024-12-16 12:59:05.117980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.218 [2024-12-16 12:59:05.118013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.218 qpair failed and we were unable to recover it. 00:37:39.218 [2024-12-16 12:59:05.118155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.218 [2024-12-16 12:59:05.118189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.218 qpair failed and we were unable to recover it. 00:37:39.218 [2024-12-16 12:59:05.118394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.218 [2024-12-16 12:59:05.118428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.218 qpair failed and we were unable to recover it. 00:37:39.218 [2024-12-16 12:59:05.118536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.218 [2024-12-16 12:59:05.118570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.218 qpair failed and we were unable to recover it. 00:37:39.218 [2024-12-16 12:59:05.118761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.218 [2024-12-16 12:59:05.118793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.218 qpair failed and we were unable to recover it. 00:37:39.218 [2024-12-16 12:59:05.118993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.218 [2024-12-16 12:59:05.119026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.218 qpair failed and we were unable to recover it. 00:37:39.218 [2024-12-16 12:59:05.119307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.218 [2024-12-16 12:59:05.119342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.218 qpair failed and we were unable to recover it. 
00:37:39.218 [2024-12-16 12:59:05.119572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.218 [2024-12-16 12:59:05.119605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.218 qpair failed and we were unable to recover it. 00:37:39.218 [2024-12-16 12:59:05.119924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.218 [2024-12-16 12:59:05.119956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.218 qpair failed and we were unable to recover it. 00:37:39.218 [2024-12-16 12:59:05.120249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.218 [2024-12-16 12:59:05.120284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.218 qpair failed and we were unable to recover it. 00:37:39.218 [2024-12-16 12:59:05.120434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.218 [2024-12-16 12:59:05.120467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.218 qpair failed and we were unable to recover it. 00:37:39.218 [2024-12-16 12:59:05.120611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.218 [2024-12-16 12:59:05.120644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.218 qpair failed and we were unable to recover it. 00:37:39.218 [2024-12-16 12:59:05.120832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.218 [2024-12-16 12:59:05.120865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.218 qpair failed and we were unable to recover it. 00:37:39.218 [2024-12-16 12:59:05.121017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.218 [2024-12-16 12:59:05.121050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.218 qpair failed and we were unable to recover it. 00:37:39.218 [2024-12-16 12:59:05.121351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.218 [2024-12-16 12:59:05.121385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.218 qpair failed and we were unable to recover it. 00:37:39.218 [2024-12-16 12:59:05.121506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.218 [2024-12-16 12:59:05.121539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.218 qpair failed and we were unable to recover it. 00:37:39.218 [2024-12-16 12:59:05.121655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.218 [2024-12-16 12:59:05.121688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.218 qpair failed and we were unable to recover it. 
00:37:39.218 [2024-12-16 12:59:05.121813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.218 [2024-12-16 12:59:05.121845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.218 qpair failed and we were unable to recover it. 00:37:39.218 [2024-12-16 12:59:05.122047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.218 [2024-12-16 12:59:05.122080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.218 qpair failed and we were unable to recover it. 00:37:39.218 [2024-12-16 12:59:05.122282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.218 [2024-12-16 12:59:05.122316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.218 qpair failed and we were unable to recover it. 00:37:39.218 [2024-12-16 12:59:05.122452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.218 [2024-12-16 12:59:05.122486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.218 qpair failed and we were unable to recover it. 00:37:39.218 [2024-12-16 12:59:05.122671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.218 [2024-12-16 12:59:05.122704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.218 qpair failed and we were unable to recover it. 00:37:39.218 [2024-12-16 12:59:05.122974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.218 [2024-12-16 12:59:05.123007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.218 qpair failed and we were unable to recover it. 00:37:39.218 [2024-12-16 12:59:05.123262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.218 [2024-12-16 12:59:05.123296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.218 qpair failed and we were unable to recover it. 00:37:39.219 [2024-12-16 12:59:05.123452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.219 [2024-12-16 12:59:05.123485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.219 qpair failed and we were unable to recover it. 00:37:39.219 [2024-12-16 12:59:05.123670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.219 [2024-12-16 12:59:05.123703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.219 qpair failed and we were unable to recover it. 00:37:39.219 [2024-12-16 12:59:05.123918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.219 [2024-12-16 12:59:05.123951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.219 qpair failed and we were unable to recover it. 
00:37:39.219 [2024-12-16 12:59:05.124088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.219 [2024-12-16 12:59:05.124127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.219 qpair failed and we were unable to recover it. 00:37:39.219 [2024-12-16 12:59:05.124330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.219 [2024-12-16 12:59:05.124364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.219 qpair failed and we were unable to recover it. 00:37:39.219 [2024-12-16 12:59:05.124599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.219 [2024-12-16 12:59:05.124633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.219 qpair failed and we were unable to recover it. 00:37:39.219 [2024-12-16 12:59:05.124952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.219 [2024-12-16 12:59:05.124984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.219 qpair failed and we were unable to recover it. 00:37:39.219 [2024-12-16 12:59:05.125180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.219 [2024-12-16 12:59:05.125214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.219 qpair failed and we were unable to recover it. 00:37:39.219 [2024-12-16 12:59:05.125422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.219 [2024-12-16 12:59:05.125455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.219 qpair failed and we were unable to recover it. 00:37:39.219 [2024-12-16 12:59:05.125589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.219 [2024-12-16 12:59:05.125622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.219 qpair failed and we were unable to recover it. 00:37:39.219 [2024-12-16 12:59:05.125903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.219 [2024-12-16 12:59:05.125936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.219 qpair failed and we were unable to recover it. 00:37:39.219 [2024-12-16 12:59:05.126070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.219 [2024-12-16 12:59:05.126103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.219 qpair failed and we were unable to recover it. 00:37:39.219 [2024-12-16 12:59:05.126333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.219 [2024-12-16 12:59:05.126366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.219 qpair failed and we were unable to recover it. 
00:37:39.219 [2024-12-16 12:59:05.126571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.219 [2024-12-16 12:59:05.126605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.219 qpair failed and we were unable to recover it. 00:37:39.219 [2024-12-16 12:59:05.126821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.219 [2024-12-16 12:59:05.126853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.219 qpair failed and we were unable to recover it. 00:37:39.219 [2024-12-16 12:59:05.127058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.219 [2024-12-16 12:59:05.127091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.219 qpair failed and we were unable to recover it. 00:37:39.219 [2024-12-16 12:59:05.127341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.219 [2024-12-16 12:59:05.127375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.219 qpair failed and we were unable to recover it. 00:37:39.219 [2024-12-16 12:59:05.127577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.219 [2024-12-16 12:59:05.127610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.219 qpair failed and we were unable to recover it. 00:37:39.219 [2024-12-16 12:59:05.127823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.219 [2024-12-16 12:59:05.127856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.219 qpair failed and we were unable to recover it. 00:37:39.219 [2024-12-16 12:59:05.128111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.219 [2024-12-16 12:59:05.128156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.219 qpair failed and we were unable to recover it. 00:37:39.219 [2024-12-16 12:59:05.128286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.219 [2024-12-16 12:59:05.128318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.219 qpair failed and we were unable to recover it. 00:37:39.219 [2024-12-16 12:59:05.128588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.219 [2024-12-16 12:59:05.128621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.219 qpair failed and we were unable to recover it. 00:37:39.219 [2024-12-16 12:59:05.128928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.219 [2024-12-16 12:59:05.128967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.219 qpair failed and we were unable to recover it. 
00:37:39.219 [2024-12-16 12:59:05.129154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.219 [2024-12-16 12:59:05.129190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.219 qpair failed and we were unable to recover it. 00:37:39.219 [2024-12-16 12:59:05.129317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.219 [2024-12-16 12:59:05.129350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.219 qpair failed and we were unable to recover it. 00:37:39.219 [2024-12-16 12:59:05.129626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.219 [2024-12-16 12:59:05.129659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.219 qpair failed and we were unable to recover it. 00:37:39.219 [2024-12-16 12:59:05.129931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.219 [2024-12-16 12:59:05.129963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.219 qpair failed and we were unable to recover it. 00:37:39.219 [2024-12-16 12:59:05.130176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.219 [2024-12-16 12:59:05.130210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.219 qpair failed and we were unable to recover it. 00:37:39.219 [2024-12-16 12:59:05.130416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.219 [2024-12-16 12:59:05.130449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.219 qpair failed and we were unable to recover it. 00:37:39.219 [2024-12-16 12:59:05.130655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.219 [2024-12-16 12:59:05.130688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.219 qpair failed and we were unable to recover it. 00:37:39.219 [2024-12-16 12:59:05.130942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.219 [2024-12-16 12:59:05.130975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.219 qpair failed and we were unable to recover it. 00:37:39.219 [2024-12-16 12:59:05.131239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.219 [2024-12-16 12:59:05.131274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.219 qpair failed and we were unable to recover it. 00:37:39.219 [2024-12-16 12:59:05.131458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.219 [2024-12-16 12:59:05.131490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.219 qpair failed and we were unable to recover it. 
00:37:39.219 [2024-12-16 12:59:05.131637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.219 [2024-12-16 12:59:05.131670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.219 qpair failed and we were unable to recover it. 00:37:39.219 [2024-12-16 12:59:05.131944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.219 [2024-12-16 12:59:05.131977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.219 qpair failed and we were unable to recover it. 00:37:39.219 [2024-12-16 12:59:05.132220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.219 [2024-12-16 12:59:05.132254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.219 qpair failed and we were unable to recover it. 00:37:39.219 [2024-12-16 12:59:05.132406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.220 [2024-12-16 12:59:05.132439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.220 qpair failed and we were unable to recover it. 00:37:39.220 [2024-12-16 12:59:05.132647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.220 [2024-12-16 12:59:05.132679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.220 qpair failed and we were unable to recover it. 00:37:39.220 [2024-12-16 12:59:05.132997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.220 [2024-12-16 12:59:05.133031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.220 qpair failed and we were unable to recover it. 00:37:39.220 [2024-12-16 12:59:05.133255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.220 [2024-12-16 12:59:05.133290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.220 qpair failed and we were unable to recover it. 00:37:39.220 [2024-12-16 12:59:05.133422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.220 [2024-12-16 12:59:05.133454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.220 qpair failed and we were unable to recover it. 00:37:39.220 [2024-12-16 12:59:05.133656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.220 [2024-12-16 12:59:05.133689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.220 qpair failed and we were unable to recover it. 00:37:39.220 [2024-12-16 12:59:05.133925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.220 [2024-12-16 12:59:05.133958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.220 qpair failed and we were unable to recover it. 
00:37:39.220 [2024-12-16 12:59:05.134074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.220 [2024-12-16 12:59:05.134107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.220 qpair failed and we were unable to recover it. 00:37:39.220 [2024-12-16 12:59:05.134265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.220 [2024-12-16 12:59:05.134298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.220 qpair failed and we were unable to recover it. 00:37:39.220 [2024-12-16 12:59:05.134503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.220 [2024-12-16 12:59:05.134536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.220 qpair failed and we were unable to recover it. 00:37:39.220 [2024-12-16 12:59:05.134684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.220 [2024-12-16 12:59:05.134717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.220 qpair failed and we were unable to recover it. 00:37:39.220 [2024-12-16 12:59:05.134871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.220 [2024-12-16 12:59:05.134904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.220 qpair failed and we were unable to recover it. 00:37:39.220 [2024-12-16 12:59:05.135199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.220 [2024-12-16 12:59:05.135233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.220 qpair failed and we were unable to recover it. 00:37:39.220 [2024-12-16 12:59:05.135462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.220 [2024-12-16 12:59:05.135501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.220 qpair failed and we were unable to recover it. 00:37:39.220 [2024-12-16 12:59:05.135632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.220 [2024-12-16 12:59:05.135665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.220 qpair failed and we were unable to recover it. 00:37:39.220 [2024-12-16 12:59:05.135939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.220 [2024-12-16 12:59:05.135972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.220 qpair failed and we were unable to recover it. 00:37:39.220 [2024-12-16 12:59:05.136135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.220 [2024-12-16 12:59:05.136169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.220 qpair failed and we were unable to recover it. 
00:37:39.220 [2024-12-16 12:59:05.136306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.220 [2024-12-16 12:59:05.136339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:39.220 qpair failed and we were unable to recover it.
00:37:39.224 [... the same three-line error repeats 210 times between 12:59:05.136306 and 12:59:05.189296, always with errno = 111 for tqpair=0x1588110, addr=10.0.0.2, port=4420 ...]
00:37:39.224 [2024-12-16 12:59:05.189423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.224 [2024-12-16 12:59:05.189456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.224 qpair failed and we were unable to recover it. 00:37:39.224 [2024-12-16 12:59:05.189661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.224 [2024-12-16 12:59:05.189693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.224 qpair failed and we were unable to recover it. 00:37:39.224 [2024-12-16 12:59:05.189975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.224 [2024-12-16 12:59:05.190008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.224 qpair failed and we were unable to recover it. 00:37:39.224 [2024-12-16 12:59:05.190298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.224 [2024-12-16 12:59:05.190338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.224 qpair failed and we were unable to recover it. 00:37:39.224 [2024-12-16 12:59:05.190530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.224 [2024-12-16 12:59:05.190563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.224 qpair failed and we were unable to recover it. 00:37:39.224 [2024-12-16 12:59:05.190779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.224 [2024-12-16 12:59:05.190812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.224 qpair failed and we were unable to recover it. 00:37:39.224 [2024-12-16 12:59:05.191091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.224 [2024-12-16 12:59:05.191135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.224 qpair failed and we were unable to recover it. 00:37:39.224 [2024-12-16 12:59:05.191276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.224 [2024-12-16 12:59:05.191310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.224 qpair failed and we were unable to recover it. 00:37:39.224 [2024-12-16 12:59:05.191503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.224 [2024-12-16 12:59:05.191535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.224 qpair failed and we were unable to recover it. 00:37:39.224 [2024-12-16 12:59:05.191844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.224 [2024-12-16 12:59:05.191877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.224 qpair failed and we were unable to recover it. 
00:37:39.224 [2024-12-16 12:59:05.192156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.224 [2024-12-16 12:59:05.192192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.224 qpair failed and we were unable to recover it. 00:37:39.224 [2024-12-16 12:59:05.192421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.224 [2024-12-16 12:59:05.192454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.224 qpair failed and we were unable to recover it. 00:37:39.224 [2024-12-16 12:59:05.192674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.224 [2024-12-16 12:59:05.192707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.224 qpair failed and we were unable to recover it. 00:37:39.224 [2024-12-16 12:59:05.193024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.225 [2024-12-16 12:59:05.193057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.225 qpair failed and we were unable to recover it. 00:37:39.225 [2024-12-16 12:59:05.193274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.225 [2024-12-16 12:59:05.193308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.225 qpair failed and we were unable to recover it. 00:37:39.225 [2024-12-16 12:59:05.193491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.225 [2024-12-16 12:59:05.193524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.225 qpair failed and we were unable to recover it. 00:37:39.225 [2024-12-16 12:59:05.193709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.225 [2024-12-16 12:59:05.193742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.225 qpair failed and we were unable to recover it. 00:37:39.225 [2024-12-16 12:59:05.193946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.225 [2024-12-16 12:59:05.193978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.225 qpair failed and we were unable to recover it. 00:37:39.225 [2024-12-16 12:59:05.194282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.225 [2024-12-16 12:59:05.194317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.225 qpair failed and we were unable to recover it. 00:37:39.225 [2024-12-16 12:59:05.194580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.225 [2024-12-16 12:59:05.194613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.225 qpair failed and we were unable to recover it. 
00:37:39.225 [2024-12-16 12:59:05.194912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.225 [2024-12-16 12:59:05.194944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.225 qpair failed and we were unable to recover it. 00:37:39.225 [2024-12-16 12:59:05.195214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.225 [2024-12-16 12:59:05.195248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.225 qpair failed and we were unable to recover it. 00:37:39.225 [2024-12-16 12:59:05.195520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.225 [2024-12-16 12:59:05.195553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.225 qpair failed and we were unable to recover it. 00:37:39.225 [2024-12-16 12:59:05.195711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.225 [2024-12-16 12:59:05.195744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.225 qpair failed and we were unable to recover it. 00:37:39.225 [2024-12-16 12:59:05.195995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.225 [2024-12-16 12:59:05.196028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.225 qpair failed and we were unable to recover it. 00:37:39.225 [2024-12-16 12:59:05.196297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.225 [2024-12-16 12:59:05.196332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.225 qpair failed and we were unable to recover it. 00:37:39.225 [2024-12-16 12:59:05.196542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.225 [2024-12-16 12:59:05.196575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.225 qpair failed and we were unable to recover it. 00:37:39.225 [2024-12-16 12:59:05.196820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.225 [2024-12-16 12:59:05.196852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.225 qpair failed and we were unable to recover it. 00:37:39.225 [2024-12-16 12:59:05.197035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.225 [2024-12-16 12:59:05.197068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.225 qpair failed and we were unable to recover it. 00:37:39.225 [2024-12-16 12:59:05.197272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.225 [2024-12-16 12:59:05.197306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.225 qpair failed and we were unable to recover it. 
00:37:39.225 [2024-12-16 12:59:05.197499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.225 [2024-12-16 12:59:05.197532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.225 qpair failed and we were unable to recover it. 00:37:39.225 [2024-12-16 12:59:05.197718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.225 [2024-12-16 12:59:05.197751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.225 qpair failed and we were unable to recover it. 00:37:39.225 [2024-12-16 12:59:05.198025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.225 [2024-12-16 12:59:05.198059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.225 qpair failed and we were unable to recover it. 00:37:39.225 [2024-12-16 12:59:05.198370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.225 [2024-12-16 12:59:05.198410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.225 qpair failed and we were unable to recover it. 00:37:39.225 [2024-12-16 12:59:05.198558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.225 [2024-12-16 12:59:05.198591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.225 qpair failed and we were unable to recover it. 00:37:39.225 [2024-12-16 12:59:05.198842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.225 [2024-12-16 12:59:05.198876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.225 qpair failed and we were unable to recover it. 00:37:39.225 [2024-12-16 12:59:05.199070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.225 [2024-12-16 12:59:05.199103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.225 qpair failed and we were unable to recover it. 00:37:39.225 [2024-12-16 12:59:05.199232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.225 [2024-12-16 12:59:05.199265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.225 qpair failed and we were unable to recover it. 00:37:39.225 [2024-12-16 12:59:05.199476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.225 [2024-12-16 12:59:05.199508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.225 qpair failed and we were unable to recover it. 00:37:39.225 [2024-12-16 12:59:05.199641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.225 [2024-12-16 12:59:05.199673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.225 qpair failed and we were unable to recover it. 
00:37:39.225 [2024-12-16 12:59:05.199868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.225 [2024-12-16 12:59:05.199901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.225 qpair failed and we were unable to recover it. 00:37:39.225 [2024-12-16 12:59:05.200018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.225 [2024-12-16 12:59:05.200052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.225 qpair failed and we were unable to recover it. 00:37:39.225 [2024-12-16 12:59:05.200306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.225 [2024-12-16 12:59:05.200341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.225 qpair failed and we were unable to recover it. 00:37:39.225 [2024-12-16 12:59:05.200617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.225 [2024-12-16 12:59:05.200650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.225 qpair failed and we were unable to recover it. 00:37:39.225 [2024-12-16 12:59:05.200811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.225 [2024-12-16 12:59:05.200845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.225 qpair failed and we were unable to recover it. 00:37:39.225 [2024-12-16 12:59:05.201050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.225 [2024-12-16 12:59:05.201083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.225 qpair failed and we were unable to recover it. 00:37:39.225 [2024-12-16 12:59:05.201285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.225 [2024-12-16 12:59:05.201319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.225 qpair failed and we were unable to recover it. 00:37:39.225 [2024-12-16 12:59:05.201524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.225 [2024-12-16 12:59:05.201557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.225 qpair failed and we were unable to recover it. 00:37:39.225 [2024-12-16 12:59:05.201713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.225 [2024-12-16 12:59:05.201746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.225 qpair failed and we were unable to recover it. 00:37:39.225 [2024-12-16 12:59:05.201954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.225 [2024-12-16 12:59:05.201987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.225 qpair failed and we were unable to recover it. 
00:37:39.225 [2024-12-16 12:59:05.202192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.225 [2024-12-16 12:59:05.202226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.225 qpair failed and we were unable to recover it. 00:37:39.225 [2024-12-16 12:59:05.202514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.225 [2024-12-16 12:59:05.202547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.225 qpair failed and we were unable to recover it. 00:37:39.225 [2024-12-16 12:59:05.202847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.225 [2024-12-16 12:59:05.202880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.225 qpair failed and we were unable to recover it. 00:37:39.225 [2024-12-16 12:59:05.203063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.225 [2024-12-16 12:59:05.203096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.225 qpair failed and we were unable to recover it. 00:37:39.225 [2024-12-16 12:59:05.203315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.225 [2024-12-16 12:59:05.203349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.225 qpair failed and we were unable to recover it. 00:37:39.225 [2024-12-16 12:59:05.203502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.225 [2024-12-16 12:59:05.203535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.225 qpair failed and we were unable to recover it. 00:37:39.225 [2024-12-16 12:59:05.203834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.225 [2024-12-16 12:59:05.203867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.225 qpair failed and we were unable to recover it. 00:37:39.225 [2024-12-16 12:59:05.203991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.225 [2024-12-16 12:59:05.204024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.225 qpair failed and we were unable to recover it. 00:37:39.225 [2024-12-16 12:59:05.204173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.225 [2024-12-16 12:59:05.204207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.225 qpair failed and we were unable to recover it. 00:37:39.225 [2024-12-16 12:59:05.204401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.225 [2024-12-16 12:59:05.204434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.225 qpair failed and we were unable to recover it. 
00:37:39.225 [2024-12-16 12:59:05.204588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.225 [2024-12-16 12:59:05.204620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.225 qpair failed and we were unable to recover it. 00:37:39.225 [2024-12-16 12:59:05.204906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.225 [2024-12-16 12:59:05.204939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.225 qpair failed and we were unable to recover it. 00:37:39.225 [2024-12-16 12:59:05.205237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.225 [2024-12-16 12:59:05.205271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.225 qpair failed and we were unable to recover it. 00:37:39.225 [2024-12-16 12:59:05.205526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.225 [2024-12-16 12:59:05.205559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.225 qpair failed and we were unable to recover it. 00:37:39.225 [2024-12-16 12:59:05.205879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.225 [2024-12-16 12:59:05.205911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.225 qpair failed and we were unable to recover it. 00:37:39.225 [2024-12-16 12:59:05.206138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.225 [2024-12-16 12:59:05.206172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.225 qpair failed and we were unable to recover it. 00:37:39.507 [2024-12-16 12:59:05.206425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.507 [2024-12-16 12:59:05.206459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.507 qpair failed and we were unable to recover it. 00:37:39.507 [2024-12-16 12:59:05.206602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.507 [2024-12-16 12:59:05.206635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.507 qpair failed and we were unable to recover it. 00:37:39.507 [2024-12-16 12:59:05.206880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.507 [2024-12-16 12:59:05.206915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.507 qpair failed and we were unable to recover it. 00:37:39.507 [2024-12-16 12:59:05.207043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.507 [2024-12-16 12:59:05.207076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.507 qpair failed and we were unable to recover it. 
00:37:39.507 [2024-12-16 12:59:05.207334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.507 [2024-12-16 12:59:05.207369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.507 qpair failed and we were unable to recover it. 00:37:39.507 [2024-12-16 12:59:05.207569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.507 [2024-12-16 12:59:05.207608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.507 qpair failed and we were unable to recover it. 00:37:39.507 [2024-12-16 12:59:05.207848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.507 [2024-12-16 12:59:05.207880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.507 qpair failed and we were unable to recover it. 00:37:39.507 [2024-12-16 12:59:05.208005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.507 [2024-12-16 12:59:05.208037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.507 qpair failed and we were unable to recover it. 00:37:39.507 [2024-12-16 12:59:05.208220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.507 [2024-12-16 12:59:05.208255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.507 qpair failed and we were unable to recover it. 00:37:39.507 [2024-12-16 12:59:05.208446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.507 [2024-12-16 12:59:05.208479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.507 qpair failed and we were unable to recover it. 00:37:39.507 [2024-12-16 12:59:05.208621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.507 [2024-12-16 12:59:05.208653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.507 qpair failed and we were unable to recover it. 00:37:39.507 [2024-12-16 12:59:05.208930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.507 [2024-12-16 12:59:05.208963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.507 qpair failed and we were unable to recover it. 00:37:39.507 [2024-12-16 12:59:05.209225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.508 [2024-12-16 12:59:05.209260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.508 qpair failed and we were unable to recover it. 00:37:39.508 [2024-12-16 12:59:05.209453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.508 [2024-12-16 12:59:05.209485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.508 qpair failed and we were unable to recover it. 
00:37:39.508 [2024-12-16 12:59:05.209689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.508 [2024-12-16 12:59:05.209722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.508 qpair failed and we were unable to recover it. 00:37:39.508 [2024-12-16 12:59:05.209924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.508 [2024-12-16 12:59:05.209957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.508 qpair failed and we were unable to recover it. 00:37:39.508 [2024-12-16 12:59:05.210170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.508 [2024-12-16 12:59:05.210203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.508 qpair failed and we were unable to recover it. 00:37:39.508 [2024-12-16 12:59:05.210494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.508 [2024-12-16 12:59:05.210526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.508 qpair failed and we were unable to recover it. 00:37:39.508 [2024-12-16 12:59:05.210842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.508 [2024-12-16 12:59:05.210876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.508 qpair failed and we were unable to recover it. 00:37:39.508 [2024-12-16 12:59:05.211081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.508 [2024-12-16 12:59:05.211121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.508 qpair failed and we were unable to recover it. 00:37:39.508 [2024-12-16 12:59:05.211328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.508 [2024-12-16 12:59:05.211361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.508 qpair failed and we were unable to recover it. 00:37:39.508 [2024-12-16 12:59:05.211510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.508 [2024-12-16 12:59:05.211543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.508 qpair failed and we were unable to recover it. 00:37:39.508 [2024-12-16 12:59:05.211745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.508 [2024-12-16 12:59:05.211777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.508 qpair failed and we were unable to recover it. 00:37:39.508 [2024-12-16 12:59:05.211998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.508 [2024-12-16 12:59:05.212031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.508 qpair failed and we were unable to recover it. 
00:37:39.508 [2024-12-16 12:59:05.212218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.508 [2024-12-16 12:59:05.212252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.508 qpair failed and we were unable to recover it. 00:37:39.508 [2024-12-16 12:59:05.212461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.508 [2024-12-16 12:59:05.212494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.508 qpair failed and we were unable to recover it. 00:37:39.508 [2024-12-16 12:59:05.212744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.508 [2024-12-16 12:59:05.212776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.508 qpair failed and we were unable to recover it. 00:37:39.508 [2024-12-16 12:59:05.213050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.508 [2024-12-16 12:59:05.213082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.508 qpair failed and we were unable to recover it. 00:37:39.508 [2024-12-16 12:59:05.213270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.508 [2024-12-16 12:59:05.213304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.508 qpair failed and we were unable to recover it. 00:37:39.508 [2024-12-16 12:59:05.213571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.508 [2024-12-16 12:59:05.213604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.508 qpair failed and we were unable to recover it. 00:37:39.508 [2024-12-16 12:59:05.213794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.508 [2024-12-16 12:59:05.213826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.508 qpair failed and we were unable to recover it. 00:37:39.508 [2024-12-16 12:59:05.214018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.508 [2024-12-16 12:59:05.214051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.508 qpair failed and we were unable to recover it. 00:37:39.508 [2024-12-16 12:59:05.214366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.508 [2024-12-16 12:59:05.214407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.508 qpair failed and we were unable to recover it. 00:37:39.508 [2024-12-16 12:59:05.214604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.508 [2024-12-16 12:59:05.214636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.508 qpair failed and we were unable to recover it. 
00:37:39.508 [2024-12-16 12:59:05.214925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.508 [2024-12-16 12:59:05.214956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.508 qpair failed and we were unable to recover it. 00:37:39.508 [2024-12-16 12:59:05.215146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.508 [2024-12-16 12:59:05.215180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.508 qpair failed and we were unable to recover it. 00:37:39.508 [2024-12-16 12:59:05.215373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.508 [2024-12-16 12:59:05.215407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.508 qpair failed and we were unable to recover it. 00:37:39.508 [2024-12-16 12:59:05.215613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.508 [2024-12-16 12:59:05.215646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.508 qpair failed and we were unable to recover it. 00:37:39.508 [2024-12-16 12:59:05.215928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.508 [2024-12-16 12:59:05.215961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.508 qpair failed and we were unable to recover it. 00:37:39.508 [2024-12-16 12:59:05.216144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.508 [2024-12-16 12:59:05.216178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.508 qpair failed and we were unable to recover it. 00:37:39.508 [2024-12-16 12:59:05.216417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.508 [2024-12-16 12:59:05.216450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.508 qpair failed and we were unable to recover it. 00:37:39.508 [2024-12-16 12:59:05.216648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.508 [2024-12-16 12:59:05.216681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.508 qpair failed and we were unable to recover it. 00:37:39.508 [2024-12-16 12:59:05.216794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.508 [2024-12-16 12:59:05.216826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.508 qpair failed and we were unable to recover it. 00:37:39.508 [2024-12-16 12:59:05.217005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.508 [2024-12-16 12:59:05.217038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.508 qpair failed and we were unable to recover it. 
00:37:39.508 [2024-12-16 12:59:05.217292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.508 [2024-12-16 12:59:05.217326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.508 qpair failed and we were unable to recover it. 00:37:39.508 [2024-12-16 12:59:05.217556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.508 [2024-12-16 12:59:05.217590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.508 qpair failed and we were unable to recover it. 00:37:39.508 [2024-12-16 12:59:05.217868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.508 [2024-12-16 12:59:05.217901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.508 qpair failed and we were unable to recover it. 00:37:39.508 [2024-12-16 12:59:05.218099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.508 [2024-12-16 12:59:05.218144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.508 qpair failed and we were unable to recover it. 00:37:39.508 [2024-12-16 12:59:05.218287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.508 [2024-12-16 12:59:05.218319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.508 qpair failed and we were unable to recover it. 00:37:39.508 [2024-12-16 12:59:05.218471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.508 [2024-12-16 12:59:05.218503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.508 qpair failed and we were unable to recover it. 00:37:39.508 [2024-12-16 12:59:05.218635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.508 [2024-12-16 12:59:05.218667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.508 qpair failed and we were unable to recover it. 00:37:39.508 [2024-12-16 12:59:05.218975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.509 [2024-12-16 12:59:05.219007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.509 qpair failed and we were unable to recover it. 00:37:39.509 [2024-12-16 12:59:05.219305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.509 [2024-12-16 12:59:05.219342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.509 qpair failed and we were unable to recover it. 00:37:39.509 [2024-12-16 12:59:05.219553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.509 [2024-12-16 12:59:05.219586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.509 qpair failed and we were unable to recover it. 
00:37:39.509 [2024-12-16 12:59:05.219867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.509 [2024-12-16 12:59:05.219900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.509 qpair failed and we were unable to recover it. 00:37:39.509 [2024-12-16 12:59:05.220184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.509 [2024-12-16 12:59:05.220219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.509 qpair failed and we were unable to recover it. 00:37:39.509 [2024-12-16 12:59:05.220502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.509 [2024-12-16 12:59:05.220534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.509 qpair failed and we were unable to recover it. 00:37:39.509 [2024-12-16 12:59:05.220813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.509 [2024-12-16 12:59:05.220846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.509 qpair failed and we were unable to recover it. 00:37:39.509 [2024-12-16 12:59:05.221136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.509 [2024-12-16 12:59:05.221170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.509 qpair failed and we were unable to recover it. 00:37:39.509 [2024-12-16 12:59:05.221352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.509 [2024-12-16 12:59:05.221391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.509 qpair failed and we were unable to recover it. 00:37:39.509 [2024-12-16 12:59:05.221671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.509 [2024-12-16 12:59:05.221704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.509 qpair failed and we were unable to recover it. 00:37:39.509 [2024-12-16 12:59:05.221931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.509 [2024-12-16 12:59:05.221963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.509 qpair failed and we were unable to recover it. 00:37:39.509 [2024-12-16 12:59:05.222224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.509 [2024-12-16 12:59:05.222259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.509 qpair failed and we were unable to recover it. 00:37:39.509 [2024-12-16 12:59:05.222510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.509 [2024-12-16 12:59:05.222542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.509 qpair failed and we were unable to recover it. 
00:37:39.509 [2024-12-16 12:59:05.222751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.509 [2024-12-16 12:59:05.222784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:39.509 qpair failed and we were unable to recover it.
[... the same three-message sequence -- posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. -- repeats verbatim for roughly 210 consecutive reconnect attempts, with only the timestamps advancing from 12:59:05.222751 through 12:59:05.277645 ...]
00:37:39.515 [2024-12-16 12:59:05.277612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.515 [2024-12-16 12:59:05.277645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:39.515 qpair failed and we were unable to recover it.
00:37:39.515 [2024-12-16 12:59:05.277952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.515 [2024-12-16 12:59:05.277985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.515 qpair failed and we were unable to recover it. 00:37:39.515 [2024-12-16 12:59:05.278213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.515 [2024-12-16 12:59:05.278247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.515 qpair failed and we were unable to recover it. 00:37:39.515 [2024-12-16 12:59:05.278432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.515 [2024-12-16 12:59:05.278471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.515 qpair failed and we were unable to recover it. 00:37:39.515 [2024-12-16 12:59:05.278652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.515 [2024-12-16 12:59:05.278684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.515 qpair failed and we were unable to recover it. 00:37:39.515 [2024-12-16 12:59:05.278885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.515 [2024-12-16 12:59:05.278917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.515 qpair failed and we were unable to recover it. 00:37:39.515 [2024-12-16 12:59:05.279175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.515 [2024-12-16 12:59:05.279209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.515 qpair failed and we were unable to recover it. 00:37:39.515 [2024-12-16 12:59:05.279413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.515 [2024-12-16 12:59:05.279445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.515 qpair failed and we were unable to recover it. 00:37:39.515 [2024-12-16 12:59:05.279662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.515 [2024-12-16 12:59:05.279694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.515 qpair failed and we were unable to recover it. 00:37:39.515 [2024-12-16 12:59:05.279895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.515 [2024-12-16 12:59:05.279927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.515 qpair failed and we were unable to recover it. 00:37:39.515 [2024-12-16 12:59:05.280122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.515 [2024-12-16 12:59:05.280155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.515 qpair failed and we were unable to recover it. 
00:37:39.515 [2024-12-16 12:59:05.280339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.515 [2024-12-16 12:59:05.280372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.515 qpair failed and we were unable to recover it. 00:37:39.515 [2024-12-16 12:59:05.280509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.515 [2024-12-16 12:59:05.280542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.515 qpair failed and we were unable to recover it. 00:37:39.515 [2024-12-16 12:59:05.280750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.515 [2024-12-16 12:59:05.280783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.515 qpair failed and we were unable to recover it. 00:37:39.515 [2024-12-16 12:59:05.281037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.515 [2024-12-16 12:59:05.281070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.515 qpair failed and we were unable to recover it. 00:37:39.515 [2024-12-16 12:59:05.281256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.515 [2024-12-16 12:59:05.281291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.515 qpair failed and we were unable to recover it. 00:37:39.515 [2024-12-16 12:59:05.281503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.515 [2024-12-16 12:59:05.281535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.515 qpair failed and we were unable to recover it. 00:37:39.515 [2024-12-16 12:59:05.281663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.515 [2024-12-16 12:59:05.281696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.515 qpair failed and we were unable to recover it. 00:37:39.515 [2024-12-16 12:59:05.282006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.515 [2024-12-16 12:59:05.282039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.515 qpair failed and we were unable to recover it. 00:37:39.515 [2024-12-16 12:59:05.282294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.515 [2024-12-16 12:59:05.282329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.515 qpair failed and we were unable to recover it. 00:37:39.515 [2024-12-16 12:59:05.282473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.515 [2024-12-16 12:59:05.282507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.515 qpair failed and we were unable to recover it. 
00:37:39.515 [2024-12-16 12:59:05.282759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.515 [2024-12-16 12:59:05.282792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.515 qpair failed and we were unable to recover it. 00:37:39.515 [2024-12-16 12:59:05.283024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.515 [2024-12-16 12:59:05.283057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.515 qpair failed and we were unable to recover it. 00:37:39.515 [2024-12-16 12:59:05.283254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.515 [2024-12-16 12:59:05.283288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.515 qpair failed and we were unable to recover it. 00:37:39.515 [2024-12-16 12:59:05.283442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.515 [2024-12-16 12:59:05.283475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.515 qpair failed and we were unable to recover it. 00:37:39.515 [2024-12-16 12:59:05.283779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.515 [2024-12-16 12:59:05.283812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.515 qpair failed and we were unable to recover it. 00:37:39.515 [2024-12-16 12:59:05.284091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.515 [2024-12-16 12:59:05.284134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.515 qpair failed and we were unable to recover it. 00:37:39.515 [2024-12-16 12:59:05.284364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.515 [2024-12-16 12:59:05.284397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.515 qpair failed and we were unable to recover it. 00:37:39.515 [2024-12-16 12:59:05.284614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.515 [2024-12-16 12:59:05.284645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.515 qpair failed and we were unable to recover it. 00:37:39.515 [2024-12-16 12:59:05.284778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.515 [2024-12-16 12:59:05.284811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.515 qpair failed and we were unable to recover it. 00:37:39.515 [2024-12-16 12:59:05.284954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.515 [2024-12-16 12:59:05.284986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.515 qpair failed and we were unable to recover it. 
00:37:39.515 [2024-12-16 12:59:05.285185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.516 [2024-12-16 12:59:05.285219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.516 qpair failed and we were unable to recover it. 00:37:39.516 [2024-12-16 12:59:05.285405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.516 [2024-12-16 12:59:05.285438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.516 qpair failed and we were unable to recover it. 00:37:39.516 [2024-12-16 12:59:05.285594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.516 [2024-12-16 12:59:05.285627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.516 qpair failed and we were unable to recover it. 00:37:39.516 [2024-12-16 12:59:05.285921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.516 [2024-12-16 12:59:05.285953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.516 qpair failed and we were unable to recover it. 00:37:39.516 [2024-12-16 12:59:05.286254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.516 [2024-12-16 12:59:05.286287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.516 qpair failed and we were unable to recover it. 00:37:39.516 [2024-12-16 12:59:05.286535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.516 [2024-12-16 12:59:05.286568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.516 qpair failed and we were unable to recover it. 00:37:39.516 [2024-12-16 12:59:05.286761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.516 [2024-12-16 12:59:05.286793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.516 qpair failed and we were unable to recover it. 00:37:39.516 [2024-12-16 12:59:05.287088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.516 [2024-12-16 12:59:05.287141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.516 qpair failed and we were unable to recover it. 00:37:39.516 [2024-12-16 12:59:05.287296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.516 [2024-12-16 12:59:05.287329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.516 qpair failed and we were unable to recover it. 00:37:39.516 [2024-12-16 12:59:05.287541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.516 [2024-12-16 12:59:05.287574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.516 qpair failed and we were unable to recover it. 
00:37:39.516 [2024-12-16 12:59:05.287776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.516 [2024-12-16 12:59:05.287809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.516 qpair failed and we were unable to recover it. 00:37:39.516 [2024-12-16 12:59:05.288085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.516 [2024-12-16 12:59:05.288128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.516 qpair failed and we were unable to recover it. 00:37:39.516 [2024-12-16 12:59:05.288316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.516 [2024-12-16 12:59:05.288348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.516 qpair failed and we were unable to recover it. 00:37:39.516 [2024-12-16 12:59:05.288553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.516 [2024-12-16 12:59:05.288591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.516 qpair failed and we were unable to recover it. 00:37:39.516 [2024-12-16 12:59:05.288844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.516 [2024-12-16 12:59:05.288878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.516 qpair failed and we were unable to recover it. 00:37:39.516 [2024-12-16 12:59:05.289175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.516 [2024-12-16 12:59:05.289210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.516 qpair failed and we were unable to recover it. 00:37:39.516 [2024-12-16 12:59:05.289360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.516 [2024-12-16 12:59:05.289393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.516 qpair failed and we were unable to recover it. 00:37:39.516 [2024-12-16 12:59:05.289668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.516 [2024-12-16 12:59:05.289701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.516 qpair failed and we were unable to recover it. 00:37:39.516 [2024-12-16 12:59:05.289988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.516 [2024-12-16 12:59:05.290021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.516 qpair failed and we were unable to recover it. 00:37:39.516 [2024-12-16 12:59:05.290227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.516 [2024-12-16 12:59:05.290262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.516 qpair failed and we were unable to recover it. 
00:37:39.516 [2024-12-16 12:59:05.290538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.516 [2024-12-16 12:59:05.290571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.516 qpair failed and we were unable to recover it. 00:37:39.516 [2024-12-16 12:59:05.290707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.516 [2024-12-16 12:59:05.290740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.516 qpair failed and we were unable to recover it. 00:37:39.516 [2024-12-16 12:59:05.290935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.516 [2024-12-16 12:59:05.290969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.516 qpair failed and we were unable to recover it. 00:37:39.516 [2024-12-16 12:59:05.291173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.516 [2024-12-16 12:59:05.291207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.516 qpair failed and we were unable to recover it. 00:37:39.516 [2024-12-16 12:59:05.291363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.516 [2024-12-16 12:59:05.291396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.516 qpair failed and we were unable to recover it. 00:37:39.516 [2024-12-16 12:59:05.291521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.516 [2024-12-16 12:59:05.291554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.516 qpair failed and we were unable to recover it. 00:37:39.516 [2024-12-16 12:59:05.291702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.516 [2024-12-16 12:59:05.291735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.516 qpair failed and we were unable to recover it. 00:37:39.516 [2024-12-16 12:59:05.292013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.516 [2024-12-16 12:59:05.292048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.516 qpair failed and we were unable to recover it. 00:37:39.516 [2024-12-16 12:59:05.292348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.516 [2024-12-16 12:59:05.292382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.516 qpair failed and we were unable to recover it. 00:37:39.516 [2024-12-16 12:59:05.292597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.516 [2024-12-16 12:59:05.292630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.516 qpair failed and we were unable to recover it. 
00:37:39.516 [2024-12-16 12:59:05.292952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.516 [2024-12-16 12:59:05.292985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.516 qpair failed and we were unable to recover it. 00:37:39.516 [2024-12-16 12:59:05.293200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.516 [2024-12-16 12:59:05.293234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.516 qpair failed and we were unable to recover it. 00:37:39.516 [2024-12-16 12:59:05.293502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.516 [2024-12-16 12:59:05.293535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.516 qpair failed and we were unable to recover it. 00:37:39.516 [2024-12-16 12:59:05.293748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.516 [2024-12-16 12:59:05.293781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.516 qpair failed and we were unable to recover it. 00:37:39.516 [2024-12-16 12:59:05.294041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.516 [2024-12-16 12:59:05.294074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.516 qpair failed and we were unable to recover it. 00:37:39.516 [2024-12-16 12:59:05.294356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.516 [2024-12-16 12:59:05.294391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.516 qpair failed and we were unable to recover it. 00:37:39.516 [2024-12-16 12:59:05.294698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.516 [2024-12-16 12:59:05.294731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.516 qpair failed and we were unable to recover it. 00:37:39.516 [2024-12-16 12:59:05.294982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.516 [2024-12-16 12:59:05.295015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.516 qpair failed and we were unable to recover it. 00:37:39.516 [2024-12-16 12:59:05.295247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.517 [2024-12-16 12:59:05.295282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.517 qpair failed and we were unable to recover it. 00:37:39.517 [2024-12-16 12:59:05.295557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.517 [2024-12-16 12:59:05.295590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.517 qpair failed and we were unable to recover it. 
00:37:39.517 [2024-12-16 12:59:05.295885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.517 [2024-12-16 12:59:05.295924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.517 qpair failed and we were unable to recover it. 00:37:39.517 [2024-12-16 12:59:05.296191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.517 [2024-12-16 12:59:05.296226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.517 qpair failed and we were unable to recover it. 00:37:39.517 [2024-12-16 12:59:05.296433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.517 [2024-12-16 12:59:05.296466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.517 qpair failed and we were unable to recover it. 00:37:39.517 [2024-12-16 12:59:05.296743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.517 [2024-12-16 12:59:05.296776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.517 qpair failed and we were unable to recover it. 00:37:39.517 [2024-12-16 12:59:05.296967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.517 [2024-12-16 12:59:05.297000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.517 qpair failed and we were unable to recover it. 00:37:39.517 [2024-12-16 12:59:05.297280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.517 [2024-12-16 12:59:05.297314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.517 qpair failed and we were unable to recover it. 00:37:39.517 [2024-12-16 12:59:05.297573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.517 [2024-12-16 12:59:05.297606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.517 qpair failed and we were unable to recover it. 00:37:39.517 [2024-12-16 12:59:05.297806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.517 [2024-12-16 12:59:05.297840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.517 qpair failed and we were unable to recover it. 00:37:39.517 [2024-12-16 12:59:05.298135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.517 [2024-12-16 12:59:05.298169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.517 qpair failed and we were unable to recover it. 00:37:39.517 [2024-12-16 12:59:05.298318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.517 [2024-12-16 12:59:05.298352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.517 qpair failed and we were unable to recover it. 
00:37:39.517 [2024-12-16 12:59:05.298548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.517 [2024-12-16 12:59:05.298585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.517 qpair failed and we were unable to recover it. 00:37:39.517 [2024-12-16 12:59:05.298732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.517 [2024-12-16 12:59:05.298764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.517 qpair failed and we were unable to recover it. 00:37:39.517 [2024-12-16 12:59:05.299072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.517 [2024-12-16 12:59:05.299105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.517 qpair failed and we were unable to recover it. 00:37:39.517 [2024-12-16 12:59:05.299418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.517 [2024-12-16 12:59:05.299451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.517 qpair failed and we were unable to recover it. 00:37:39.517 [2024-12-16 12:59:05.299767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.517 [2024-12-16 12:59:05.299800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.517 qpair failed and we were unable to recover it. 00:37:39.517 [2024-12-16 12:59:05.300047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.517 [2024-12-16 12:59:05.300082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.517 qpair failed and we were unable to recover it. 00:37:39.517 [2024-12-16 12:59:05.300314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.517 [2024-12-16 12:59:05.300347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.517 qpair failed and we were unable to recover it. 00:37:39.517 [2024-12-16 12:59:05.300532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.517 [2024-12-16 12:59:05.300565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.517 qpair failed and we were unable to recover it. 00:37:39.517 [2024-12-16 12:59:05.300883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.517 [2024-12-16 12:59:05.300916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.517 qpair failed and we were unable to recover it. 00:37:39.517 [2024-12-16 12:59:05.301172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.517 [2024-12-16 12:59:05.301207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.517 qpair failed and we were unable to recover it. 
00:37:39.517 [2024-12-16 12:59:05.301364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.517 [2024-12-16 12:59:05.301397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.517 qpair failed and we were unable to recover it. 00:37:39.517 [2024-12-16 12:59:05.301603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.517 [2024-12-16 12:59:05.301635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.517 qpair failed and we were unable to recover it. 00:37:39.517 [2024-12-16 12:59:05.301994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.517 [2024-12-16 12:59:05.302027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.517 qpair failed and we were unable to recover it. 00:37:39.517 [2024-12-16 12:59:05.302242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.517 [2024-12-16 12:59:05.302276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.517 qpair failed and we were unable to recover it. 00:37:39.517 [2024-12-16 12:59:05.302553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.517 [2024-12-16 12:59:05.302586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.517 qpair failed and we were unable to recover it. 00:37:39.517 [2024-12-16 12:59:05.302885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.517 [2024-12-16 12:59:05.302919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.517 qpair failed and we were unable to recover it. 00:37:39.517 [2024-12-16 12:59:05.303127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.517 [2024-12-16 12:59:05.303162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.517 qpair failed and we were unable to recover it. 00:37:39.517 [2024-12-16 12:59:05.303358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.517 [2024-12-16 12:59:05.303396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.517 qpair failed and we were unable to recover it. 00:37:39.517 [2024-12-16 12:59:05.303537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.517 [2024-12-16 12:59:05.303570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.517 qpair failed and we were unable to recover it. 00:37:39.517 [2024-12-16 12:59:05.303788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.517 [2024-12-16 12:59:05.303822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.517 qpair failed and we were unable to recover it. 
00:37:39.517 [2024-12-16 12:59:05.304100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.517 [2024-12-16 12:59:05.304142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.517 qpair failed and we were unable to recover it. 00:37:39.517 [2024-12-16 12:59:05.304348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.517 [2024-12-16 12:59:05.304381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.517 qpair failed and we were unable to recover it. 00:37:39.518 [2024-12-16 12:59:05.304636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.518 [2024-12-16 12:59:05.304669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.518 qpair failed and we were unable to recover it. 00:37:39.518 [2024-12-16 12:59:05.304976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.518 [2024-12-16 12:59:05.305010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.518 qpair failed and we were unable to recover it. 00:37:39.518 [2024-12-16 12:59:05.305276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.518 [2024-12-16 12:59:05.305311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.518 qpair failed and we were unable to recover it. 00:37:39.518 [2024-12-16 12:59:05.305472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.518 [2024-12-16 12:59:05.305505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.518 qpair failed and we were unable to recover it. 00:37:39.518 [2024-12-16 12:59:05.305654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.518 [2024-12-16 12:59:05.305686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.518 qpair failed and we were unable to recover it. 00:37:39.518 [2024-12-16 12:59:05.305893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.518 [2024-12-16 12:59:05.305926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.518 qpair failed and we were unable to recover it. 00:37:39.518 [2024-12-16 12:59:05.306204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.518 [2024-12-16 12:59:05.306240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.518 qpair failed and we were unable to recover it. 00:37:39.518 [2024-12-16 12:59:05.306442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.518 [2024-12-16 12:59:05.306476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.518 qpair failed and we were unable to recover it. 
00:37:39.518 [2024-12-16 12:59:05.306762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.518 [2024-12-16 12:59:05.306794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.518 qpair failed and we were unable to recover it. 00:37:39.518 [2024-12-16 12:59:05.307075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.518 [2024-12-16 12:59:05.307109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.518 qpair failed and we were unable to recover it. 00:37:39.518 [2024-12-16 12:59:05.307355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.518 [2024-12-16 12:59:05.307389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.518 qpair failed and we were unable to recover it. 00:37:39.518 [2024-12-16 12:59:05.307608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.518 [2024-12-16 12:59:05.307641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.518 qpair failed and we were unable to recover it. 00:37:39.518 [2024-12-16 12:59:05.307959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.518 [2024-12-16 12:59:05.307992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.518 qpair failed and we were unable to recover it. 00:37:39.518 [2024-12-16 12:59:05.308211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.518 [2024-12-16 12:59:05.308245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.518 qpair failed and we were unable to recover it. 00:37:39.518 [2024-12-16 12:59:05.308404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.518 [2024-12-16 12:59:05.308438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.518 qpair failed and we were unable to recover it. 00:37:39.518 [2024-12-16 12:59:05.308643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.518 [2024-12-16 12:59:05.308675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.518 qpair failed and we were unable to recover it. 00:37:39.518 [2024-12-16 12:59:05.308935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.518 [2024-12-16 12:59:05.308968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.518 qpair failed and we were unable to recover it. 00:37:39.518 [2024-12-16 12:59:05.309166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.518 [2024-12-16 12:59:05.309201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.518 qpair failed and we were unable to recover it. 
00:37:39.518 [2024-12-16 12:59:05.309406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.518 [2024-12-16 12:59:05.309439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.518 qpair failed and we were unable to recover it. 00:37:39.518 [2024-12-16 12:59:05.309714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.518 [2024-12-16 12:59:05.309747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.518 qpair failed and we were unable to recover it. 00:37:39.518 [2024-12-16 12:59:05.310024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.518 [2024-12-16 12:59:05.310058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.518 qpair failed and we were unable to recover it. 00:37:39.518 [2024-12-16 12:59:05.310296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.518 [2024-12-16 12:59:05.310331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.518 qpair failed and we were unable to recover it. 00:37:39.518 [2024-12-16 12:59:05.310531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.518 [2024-12-16 12:59:05.310564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.518 qpair failed and we were unable to recover it. 00:37:39.518 [2024-12-16 12:59:05.310716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.518 [2024-12-16 12:59:05.310749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.518 qpair failed and we were unable to recover it. 00:37:39.518 [2024-12-16 12:59:05.310951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.518 [2024-12-16 12:59:05.310984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.518 qpair failed and we were unable to recover it. 00:37:39.518 [2024-12-16 12:59:05.311215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.518 [2024-12-16 12:59:05.311250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.518 qpair failed and we were unable to recover it. 00:37:39.518 [2024-12-16 12:59:05.311391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.518 [2024-12-16 12:59:05.311424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.518 qpair failed and we were unable to recover it. 00:37:39.518 [2024-12-16 12:59:05.311704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.518 [2024-12-16 12:59:05.311736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.518 qpair failed and we were unable to recover it. 
00:37:39.518 [2024-12-16 12:59:05.311984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.518 [2024-12-16 12:59:05.312016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:39.518 qpair failed and we were unable to recover it.
[... the same three-line pattern (posix.c:1055:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 12:59:05.312 through 12:59:05.368; duplicate entries elided ...]
00:37:39.524 [2024-12-16 12:59:05.368594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.524 [2024-12-16 12:59:05.368627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:39.524 qpair failed and we were unable to recover it.
00:37:39.524 [2024-12-16 12:59:05.368914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.524 [2024-12-16 12:59:05.368946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.524 qpair failed and we were unable to recover it. 00:37:39.524 [2024-12-16 12:59:05.369227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.524 [2024-12-16 12:59:05.369261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.524 qpair failed and we were unable to recover it. 00:37:39.524 [2024-12-16 12:59:05.369548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.524 [2024-12-16 12:59:05.369582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.524 qpair failed and we were unable to recover it. 00:37:39.524 [2024-12-16 12:59:05.369862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.524 [2024-12-16 12:59:05.369895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.524 qpair failed and we were unable to recover it. 00:37:39.524 [2024-12-16 12:59:05.370109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.524 [2024-12-16 12:59:05.370156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.524 qpair failed and we were unable to recover it. 00:37:39.524 [2024-12-16 12:59:05.370431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.524 [2024-12-16 12:59:05.370464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.524 qpair failed and we were unable to recover it. 00:37:39.524 [2024-12-16 12:59:05.370647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.524 [2024-12-16 12:59:05.370680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.524 qpair failed and we were unable to recover it. 00:37:39.524 [2024-12-16 12:59:05.370866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.524 [2024-12-16 12:59:05.370898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.524 qpair failed and we were unable to recover it. 00:37:39.524 [2024-12-16 12:59:05.371151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.524 [2024-12-16 12:59:05.371186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.524 qpair failed and we were unable to recover it. 00:37:39.524 [2024-12-16 12:59:05.371397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.524 [2024-12-16 12:59:05.371429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.524 qpair failed and we were unable to recover it. 
00:37:39.524 [2024-12-16 12:59:05.371622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.524 [2024-12-16 12:59:05.371655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.524 qpair failed and we were unable to recover it. 00:37:39.524 [2024-12-16 12:59:05.371907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.524 [2024-12-16 12:59:05.371940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.524 qpair failed and we were unable to recover it. 00:37:39.524 [2024-12-16 12:59:05.372134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.524 [2024-12-16 12:59:05.372168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.524 qpair failed and we were unable to recover it. 00:37:39.524 [2024-12-16 12:59:05.372360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.524 [2024-12-16 12:59:05.372392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.524 qpair failed and we were unable to recover it. 00:37:39.524 [2024-12-16 12:59:05.372641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.524 [2024-12-16 12:59:05.372674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.524 qpair failed and we were unable to recover it. 00:37:39.524 [2024-12-16 12:59:05.372934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.524 [2024-12-16 12:59:05.372968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.524 qpair failed and we were unable to recover it. 00:37:39.524 [2024-12-16 12:59:05.373200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.524 [2024-12-16 12:59:05.373234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.524 qpair failed and we were unable to recover it. 00:37:39.524 [2024-12-16 12:59:05.373424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.524 [2024-12-16 12:59:05.373457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.524 qpair failed and we were unable to recover it. 00:37:39.524 [2024-12-16 12:59:05.373758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.524 [2024-12-16 12:59:05.373791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.524 qpair failed and we were unable to recover it. 00:37:39.524 [2024-12-16 12:59:05.374054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.524 [2024-12-16 12:59:05.374086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.524 qpair failed and we were unable to recover it. 
00:37:39.524 [2024-12-16 12:59:05.374279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.524 [2024-12-16 12:59:05.374313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.524 qpair failed and we were unable to recover it. 00:37:39.524 [2024-12-16 12:59:05.374588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.524 [2024-12-16 12:59:05.374621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.524 qpair failed and we were unable to recover it. 00:37:39.524 [2024-12-16 12:59:05.374887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.524 [2024-12-16 12:59:05.374920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.524 qpair failed and we were unable to recover it. 00:37:39.525 [2024-12-16 12:59:05.375168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.525 [2024-12-16 12:59:05.375203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.525 qpair failed and we were unable to recover it. 00:37:39.525 [2024-12-16 12:59:05.375480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.525 [2024-12-16 12:59:05.375513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.525 qpair failed and we were unable to recover it. 00:37:39.525 [2024-12-16 12:59:05.375797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.525 [2024-12-16 12:59:05.375831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.525 qpair failed and we were unable to recover it. 00:37:39.525 [2024-12-16 12:59:05.376111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.525 [2024-12-16 12:59:05.376154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.525 qpair failed and we were unable to recover it. 00:37:39.525 [2024-12-16 12:59:05.376428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.525 [2024-12-16 12:59:05.376461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.525 qpair failed and we were unable to recover it. 00:37:39.525 [2024-12-16 12:59:05.376743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.525 [2024-12-16 12:59:05.376775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.525 qpair failed and we were unable to recover it. 00:37:39.525 [2024-12-16 12:59:05.377055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.525 [2024-12-16 12:59:05.377088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.525 qpair failed and we were unable to recover it. 
00:37:39.525 [2024-12-16 12:59:05.377376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.525 [2024-12-16 12:59:05.377410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.525 qpair failed and we were unable to recover it. 00:37:39.525 [2024-12-16 12:59:05.377690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.525 [2024-12-16 12:59:05.377722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.525 qpair failed and we were unable to recover it. 00:37:39.525 [2024-12-16 12:59:05.377929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.525 [2024-12-16 12:59:05.377962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.525 qpair failed and we were unable to recover it. 00:37:39.525 [2024-12-16 12:59:05.378158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.525 [2024-12-16 12:59:05.378193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.525 qpair failed and we were unable to recover it. 00:37:39.525 [2024-12-16 12:59:05.378445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.525 [2024-12-16 12:59:05.378478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.525 qpair failed and we were unable to recover it. 00:37:39.525 [2024-12-16 12:59:05.378775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.525 [2024-12-16 12:59:05.378807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.525 qpair failed and we were unable to recover it. 00:37:39.525 [2024-12-16 12:59:05.379011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.525 [2024-12-16 12:59:05.379043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.525 qpair failed and we were unable to recover it. 00:37:39.525 [2024-12-16 12:59:05.379238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.525 [2024-12-16 12:59:05.379272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.525 qpair failed and we were unable to recover it. 00:37:39.525 [2024-12-16 12:59:05.379547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.525 [2024-12-16 12:59:05.379579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.525 qpair failed and we were unable to recover it. 00:37:39.525 [2024-12-16 12:59:05.379706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.525 [2024-12-16 12:59:05.379739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.525 qpair failed and we were unable to recover it. 
00:37:39.525 [2024-12-16 12:59:05.380012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.525 [2024-12-16 12:59:05.380046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.525 qpair failed and we were unable to recover it. 00:37:39.525 [2024-12-16 12:59:05.380228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.525 [2024-12-16 12:59:05.380262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.525 qpair failed and we were unable to recover it. 00:37:39.525 [2024-12-16 12:59:05.380526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.525 [2024-12-16 12:59:05.380565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.525 qpair failed and we were unable to recover it. 00:37:39.525 [2024-12-16 12:59:05.380842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.525 [2024-12-16 12:59:05.380875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.525 qpair failed and we were unable to recover it. 00:37:39.525 [2024-12-16 12:59:05.381082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.525 [2024-12-16 12:59:05.381125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.525 qpair failed and we were unable to recover it. 00:37:39.525 [2024-12-16 12:59:05.381432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.525 [2024-12-16 12:59:05.381465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.525 qpair failed and we were unable to recover it. 00:37:39.525 [2024-12-16 12:59:05.381744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.525 [2024-12-16 12:59:05.381777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.525 qpair failed and we were unable to recover it. 00:37:39.525 [2024-12-16 12:59:05.382056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.525 [2024-12-16 12:59:05.382089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.525 qpair failed and we were unable to recover it. 00:37:39.525 [2024-12-16 12:59:05.382323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.525 [2024-12-16 12:59:05.382357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.525 qpair failed and we were unable to recover it. 00:37:39.525 [2024-12-16 12:59:05.382554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.525 [2024-12-16 12:59:05.382587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.525 qpair failed and we were unable to recover it. 
00:37:39.525 [2024-12-16 12:59:05.382863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.525 [2024-12-16 12:59:05.382896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.525 qpair failed and we were unable to recover it. 00:37:39.525 [2024-12-16 12:59:05.383176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.525 [2024-12-16 12:59:05.383211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.525 qpair failed and we were unable to recover it. 00:37:39.525 [2024-12-16 12:59:05.383497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.525 [2024-12-16 12:59:05.383531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.525 qpair failed and we were unable to recover it. 00:37:39.525 [2024-12-16 12:59:05.383682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.525 [2024-12-16 12:59:05.383715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.525 qpair failed and we were unable to recover it. 00:37:39.525 [2024-12-16 12:59:05.383967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.525 [2024-12-16 12:59:05.384000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.525 qpair failed and we were unable to recover it. 00:37:39.525 [2024-12-16 12:59:05.384279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.525 [2024-12-16 12:59:05.384314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.525 qpair failed and we were unable to recover it. 00:37:39.525 [2024-12-16 12:59:05.384578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.525 [2024-12-16 12:59:05.384610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.525 qpair failed and we were unable to recover it. 00:37:39.525 [2024-12-16 12:59:05.384902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.525 [2024-12-16 12:59:05.384936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.525 qpair failed and we were unable to recover it. 00:37:39.525 [2024-12-16 12:59:05.385212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.525 [2024-12-16 12:59:05.385247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.525 qpair failed and we were unable to recover it. 00:37:39.525 [2024-12-16 12:59:05.385476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.525 [2024-12-16 12:59:05.385509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.525 qpair failed and we were unable to recover it. 
00:37:39.525 [2024-12-16 12:59:05.385773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.525 [2024-12-16 12:59:05.385806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.525 qpair failed and we were unable to recover it. 00:37:39.525 [2024-12-16 12:59:05.386077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.525 [2024-12-16 12:59:05.386109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.526 qpair failed and we were unable to recover it. 00:37:39.526 [2024-12-16 12:59:05.386400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.526 [2024-12-16 12:59:05.386435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.526 qpair failed and we were unable to recover it. 00:37:39.526 [2024-12-16 12:59:05.386651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.526 [2024-12-16 12:59:05.386684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.526 qpair failed and we were unable to recover it. 00:37:39.526 [2024-12-16 12:59:05.386910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.526 [2024-12-16 12:59:05.386943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.526 qpair failed and we were unable to recover it. 00:37:39.526 [2024-12-16 12:59:05.387245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.526 [2024-12-16 12:59:05.387280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.526 qpair failed and we were unable to recover it. 00:37:39.526 [2024-12-16 12:59:05.387557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.526 [2024-12-16 12:59:05.387591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.526 qpair failed and we were unable to recover it. 00:37:39.526 [2024-12-16 12:59:05.387872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.526 [2024-12-16 12:59:05.387904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.526 qpair failed and we were unable to recover it. 00:37:39.526 [2024-12-16 12:59:05.388193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.526 [2024-12-16 12:59:05.388227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.526 qpair failed and we were unable to recover it. 00:37:39.526 [2024-12-16 12:59:05.388504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.526 [2024-12-16 12:59:05.388543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.526 qpair failed and we were unable to recover it. 
00:37:39.526 [2024-12-16 12:59:05.388824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.526 [2024-12-16 12:59:05.388857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.526 qpair failed and we were unable to recover it. 00:37:39.526 [2024-12-16 12:59:05.389139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.526 [2024-12-16 12:59:05.389174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.526 qpair failed and we were unable to recover it. 00:37:39.526 [2024-12-16 12:59:05.389316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.526 [2024-12-16 12:59:05.389349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.526 qpair failed and we were unable to recover it. 00:37:39.526 [2024-12-16 12:59:05.389470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.526 [2024-12-16 12:59:05.389503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.526 qpair failed and we were unable to recover it. 00:37:39.526 [2024-12-16 12:59:05.389774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.526 [2024-12-16 12:59:05.389807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.526 qpair failed and we were unable to recover it. 00:37:39.526 [2024-12-16 12:59:05.390086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.526 [2024-12-16 12:59:05.390155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.526 qpair failed and we were unable to recover it. 00:37:39.526 [2024-12-16 12:59:05.390439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.526 [2024-12-16 12:59:05.390472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.526 qpair failed and we were unable to recover it. 00:37:39.526 [2024-12-16 12:59:05.390725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.526 [2024-12-16 12:59:05.390758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.526 qpair failed and we were unable to recover it. 00:37:39.526 [2024-12-16 12:59:05.390951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.526 [2024-12-16 12:59:05.390983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.526 qpair failed and we were unable to recover it. 00:37:39.526 [2024-12-16 12:59:05.391258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.526 [2024-12-16 12:59:05.391292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.526 qpair failed and we were unable to recover it. 
00:37:39.526 [2024-12-16 12:59:05.391576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.526 [2024-12-16 12:59:05.391608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.526 qpair failed and we were unable to recover it. 00:37:39.526 [2024-12-16 12:59:05.391840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.526 [2024-12-16 12:59:05.391873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.526 qpair failed and we were unable to recover it. 00:37:39.526 [2024-12-16 12:59:05.392180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.526 [2024-12-16 12:59:05.392216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.526 qpair failed and we were unable to recover it. 00:37:39.526 [2024-12-16 12:59:05.392446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.526 [2024-12-16 12:59:05.392480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.526 qpair failed and we were unable to recover it. 00:37:39.526 [2024-12-16 12:59:05.392801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.526 [2024-12-16 12:59:05.392834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.526 qpair failed and we were unable to recover it. 00:37:39.526 [2024-12-16 12:59:05.393110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.526 [2024-12-16 12:59:05.393154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.526 qpair failed and we were unable to recover it. 00:37:39.526 [2024-12-16 12:59:05.393362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.526 [2024-12-16 12:59:05.393394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.526 qpair failed and we were unable to recover it. 00:37:39.526 [2024-12-16 12:59:05.393587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.526 [2024-12-16 12:59:05.393619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.526 qpair failed and we were unable to recover it. 00:37:39.526 [2024-12-16 12:59:05.393819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.526 [2024-12-16 12:59:05.393851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.526 qpair failed and we were unable to recover it. 00:37:39.526 [2024-12-16 12:59:05.394135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.526 [2024-12-16 12:59:05.394168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.526 qpair failed and we were unable to recover it. 
00:37:39.526 [2024-12-16 12:59:05.394445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.526 [2024-12-16 12:59:05.394478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.526 qpair failed and we were unable to recover it. 00:37:39.526 [2024-12-16 12:59:05.394698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.526 [2024-12-16 12:59:05.394730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.526 qpair failed and we were unable to recover it. 00:37:39.526 [2024-12-16 12:59:05.394981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.526 [2024-12-16 12:59:05.395013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.526 qpair failed and we were unable to recover it. 00:37:39.526 [2024-12-16 12:59:05.395197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.526 [2024-12-16 12:59:05.395231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.526 qpair failed and we were unable to recover it. 00:37:39.526 [2024-12-16 12:59:05.395486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.526 [2024-12-16 12:59:05.395518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.526 qpair failed and we were unable to recover it. 00:37:39.526 [2024-12-16 12:59:05.395784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.526 [2024-12-16 12:59:05.395816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.527 qpair failed and we were unable to recover it. 00:37:39.527 [2024-12-16 12:59:05.395959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.527 [2024-12-16 12:59:05.395991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.527 qpair failed and we were unable to recover it. 00:37:39.527 [2024-12-16 12:59:05.396648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.527 [2024-12-16 12:59:05.396688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.527 qpair failed and we were unable to recover it. 00:37:39.527 [2024-12-16 12:59:05.396984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.527 [2024-12-16 12:59:05.397017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.527 qpair failed and we were unable to recover it. 00:37:39.527 [2024-12-16 12:59:05.397294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.527 [2024-12-16 12:59:05.397328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.527 qpair failed and we were unable to recover it. 
00:37:39.527 [2024-12-16 12:59:05.397532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.527 [2024-12-16 12:59:05.397565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.527 qpair failed and we were unable to recover it. 00:37:39.527 [2024-12-16 12:59:05.397797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.527 [2024-12-16 12:59:05.397833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.527 qpair failed and we were unable to recover it. 00:37:39.527 [2024-12-16 12:59:05.398111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.527 [2024-12-16 12:59:05.398167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.527 qpair failed and we were unable to recover it. 00:37:39.527 [2024-12-16 12:59:05.398376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.527 [2024-12-16 12:59:05.398409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.527 qpair failed and we were unable to recover it. 00:37:39.527 [2024-12-16 12:59:05.398689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.527 [2024-12-16 12:59:05.398723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.527 qpair failed and we were unable to recover it. 00:37:39.527 [2024-12-16 12:59:05.399007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.527 [2024-12-16 12:59:05.399040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.527 qpair failed and we were unable to recover it. 00:37:39.527 [2024-12-16 12:59:05.399326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.527 [2024-12-16 12:59:05.399361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.527 qpair failed and we were unable to recover it. 00:37:39.527 [2024-12-16 12:59:05.399638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.527 [2024-12-16 12:59:05.399672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.527 qpair failed and we were unable to recover it. 00:37:39.527 [2024-12-16 12:59:05.399955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.527 [2024-12-16 12:59:05.399988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.527 qpair failed and we were unable to recover it. 00:37:39.527 [2024-12-16 12:59:05.400276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.527 [2024-12-16 12:59:05.400310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.527 qpair failed and we were unable to recover it. 
00:37:39.527 [2024-12-16 12:59:05.400593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.527 [2024-12-16 12:59:05.400627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.527 qpair failed and we were unable to recover it. 00:37:39.527 [2024-12-16 12:59:05.400906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.527 [2024-12-16 12:59:05.400939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.527 qpair failed and we were unable to recover it. 00:37:39.527 [2024-12-16 12:59:05.401192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.527 [2024-12-16 12:59:05.401226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.527 qpair failed and we were unable to recover it. 00:37:39.527 [2024-12-16 12:59:05.401478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.527 [2024-12-16 12:59:05.401511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.527 qpair failed and we were unable to recover it. 00:37:39.527 [2024-12-16 12:59:05.401767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.527 [2024-12-16 12:59:05.401800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.527 qpair failed and we were unable to recover it. 00:37:39.527 [2024-12-16 12:59:05.402102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.527 [2024-12-16 12:59:05.402144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.527 qpair failed and we were unable to recover it. 00:37:39.527 [2024-12-16 12:59:05.402427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.527 [2024-12-16 12:59:05.402459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.527 qpair failed and we were unable to recover it. 00:37:39.527 [2024-12-16 12:59:05.402737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.527 [2024-12-16 12:59:05.402770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.527 qpair failed and we were unable to recover it. 00:37:39.527 [2024-12-16 12:59:05.403062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.527 [2024-12-16 12:59:05.403095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.527 qpair failed and we were unable to recover it. 00:37:39.527 [2024-12-16 12:59:05.403311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.527 [2024-12-16 12:59:05.403345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.527 qpair failed and we were unable to recover it. 
00:37:39.527 [2024-12-16 12:59:05.403650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.527 [2024-12-16 12:59:05.403683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.527 qpair failed and we were unable to recover it. 00:37:39.527 [2024-12-16 12:59:05.403974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.527 [2024-12-16 12:59:05.404007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.527 qpair failed and we were unable to recover it. 00:37:39.527 [2024-12-16 12:59:05.404281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.527 [2024-12-16 12:59:05.404316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.527 qpair failed and we were unable to recover it. 00:37:39.527 [2024-12-16 12:59:05.404538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.527 [2024-12-16 12:59:05.404571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.527 qpair failed and we were unable to recover it. 00:37:39.527 [2024-12-16 12:59:05.404832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.527 [2024-12-16 12:59:05.404866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.527 qpair failed and we were unable to recover it. 00:37:39.527 [2024-12-16 12:59:05.405170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.527 [2024-12-16 12:59:05.405203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.527 qpair failed and we were unable to recover it. 00:37:39.527 [2024-12-16 12:59:05.405463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.527 [2024-12-16 12:59:05.405497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.527 qpair failed and we were unable to recover it. 00:37:39.527 [2024-12-16 12:59:05.405777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.527 [2024-12-16 12:59:05.405810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.527 qpair failed and we were unable to recover it. 00:37:39.527 [2024-12-16 12:59:05.406071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.527 [2024-12-16 12:59:05.406103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.527 qpair failed and we were unable to recover it. 00:37:39.527 [2024-12-16 12:59:05.406405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.527 [2024-12-16 12:59:05.406439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.527 qpair failed and we were unable to recover it. 
00:37:39.527 [2024-12-16 12:59:05.406726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.527 [2024-12-16 12:59:05.406760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.527 qpair failed and we were unable to recover it. 00:37:39.527 [2024-12-16 12:59:05.407010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.527 [2024-12-16 12:59:05.407042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.527 qpair failed and we were unable to recover it. 00:37:39.527 [2024-12-16 12:59:05.407320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.528 [2024-12-16 12:59:05.407355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.528 qpair failed and we were unable to recover it. 00:37:39.528 [2024-12-16 12:59:05.407635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.528 [2024-12-16 12:59:05.407668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.528 qpair failed and we were unable to recover it. 00:37:39.528 [2024-12-16 12:59:05.407938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.528 [2024-12-16 12:59:05.407971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.528 qpair failed and we were unable to recover it. 00:37:39.528 [2024-12-16 12:59:05.408244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.528 [2024-12-16 12:59:05.408279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.528 qpair failed and we were unable to recover it. 00:37:39.528 [2024-12-16 12:59:05.408562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.528 [2024-12-16 12:59:05.408595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.528 qpair failed and we were unable to recover it. 00:37:39.528 [2024-12-16 12:59:05.408878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.528 [2024-12-16 12:59:05.408916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.528 qpair failed and we were unable to recover it. 00:37:39.528 [2024-12-16 12:59:05.409198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.528 [2024-12-16 12:59:05.409232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.528 qpair failed and we were unable to recover it. 00:37:39.528 [2024-12-16 12:59:05.409443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.528 [2024-12-16 12:59:05.409476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.528 qpair failed and we were unable to recover it. 
00:37:39.533 [2024-12-16 12:59:05.464952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.533 [2024-12-16 12:59:05.464985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.533 qpair failed and we were unable to recover it. 00:37:39.533 [2024-12-16 12:59:05.465294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.533 [2024-12-16 12:59:05.465327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.533 qpair failed and we were unable to recover it. 00:37:39.533 [2024-12-16 12:59:05.465612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.533 [2024-12-16 12:59:05.465644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.533 qpair failed and we were unable to recover it. 00:37:39.533 [2024-12-16 12:59:05.465849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.533 [2024-12-16 12:59:05.465881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.533 qpair failed and we were unable to recover it. 00:37:39.533 [2024-12-16 12:59:05.466155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.533 [2024-12-16 12:59:05.466190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.533 qpair failed and we were unable to recover it. 00:37:39.533 [2024-12-16 12:59:05.466480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.533 [2024-12-16 12:59:05.466513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.533 qpair failed and we were unable to recover it. 00:37:39.533 [2024-12-16 12:59:05.466772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.533 [2024-12-16 12:59:05.466804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.533 qpair failed and we were unable to recover it. 00:37:39.533 [2024-12-16 12:59:05.467058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.533 [2024-12-16 12:59:05.467090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.533 qpair failed and we were unable to recover it. 00:37:39.533 [2024-12-16 12:59:05.467326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.533 [2024-12-16 12:59:05.467359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.533 qpair failed and we were unable to recover it. 00:37:39.533 [2024-12-16 12:59:05.467621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.533 [2024-12-16 12:59:05.467653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.533 qpair failed and we were unable to recover it. 
00:37:39.533 [2024-12-16 12:59:05.467955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.533 [2024-12-16 12:59:05.467987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.533 qpair failed and we were unable to recover it. 00:37:39.533 [2024-12-16 12:59:05.468237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.533 [2024-12-16 12:59:05.468272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.533 qpair failed and we were unable to recover it. 00:37:39.533 [2024-12-16 12:59:05.468487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.533 [2024-12-16 12:59:05.468519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.533 qpair failed and we were unable to recover it. 00:37:39.534 [2024-12-16 12:59:05.468700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.534 [2024-12-16 12:59:05.468731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.534 qpair failed and we were unable to recover it. 00:37:39.534 [2024-12-16 12:59:05.468990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.534 [2024-12-16 12:59:05.469029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.534 qpair failed and we were unable to recover it. 00:37:39.534 [2024-12-16 12:59:05.469181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.534 [2024-12-16 12:59:05.469215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.534 qpair failed and we were unable to recover it. 00:37:39.534 [2024-12-16 12:59:05.469468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.534 [2024-12-16 12:59:05.469500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.534 qpair failed and we were unable to recover it. 00:37:39.534 [2024-12-16 12:59:05.469705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.534 [2024-12-16 12:59:05.469737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.534 qpair failed and we were unable to recover it. 00:37:39.534 [2024-12-16 12:59:05.470007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.534 [2024-12-16 12:59:05.470038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.534 qpair failed and we were unable to recover it. 00:37:39.534 [2024-12-16 12:59:05.470322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.534 [2024-12-16 12:59:05.470356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.534 qpair failed and we were unable to recover it. 
00:37:39.534 [2024-12-16 12:59:05.470636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.534 [2024-12-16 12:59:05.470668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.534 qpair failed and we were unable to recover it. 00:37:39.534 [2024-12-16 12:59:05.470949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.534 [2024-12-16 12:59:05.470981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.534 qpair failed and we were unable to recover it. 00:37:39.534 [2024-12-16 12:59:05.471199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.534 [2024-12-16 12:59:05.471232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.534 qpair failed and we were unable to recover it. 00:37:39.534 [2024-12-16 12:59:05.471509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.534 [2024-12-16 12:59:05.471541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.534 qpair failed and we were unable to recover it. 00:37:39.534 [2024-12-16 12:59:05.471723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.534 [2024-12-16 12:59:05.471755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.534 qpair failed and we were unable to recover it. 00:37:39.534 [2024-12-16 12:59:05.471979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.534 [2024-12-16 12:59:05.472011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.534 qpair failed and we were unable to recover it. 00:37:39.534 [2024-12-16 12:59:05.472284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.534 [2024-12-16 12:59:05.472318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.534 qpair failed and we were unable to recover it. 00:37:39.534 [2024-12-16 12:59:05.472513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.534 [2024-12-16 12:59:05.472546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.534 qpair failed and we were unable to recover it. 00:37:39.534 [2024-12-16 12:59:05.472824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.534 [2024-12-16 12:59:05.472856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.534 qpair failed and we were unable to recover it. 00:37:39.534 [2024-12-16 12:59:05.473134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.534 [2024-12-16 12:59:05.473167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.534 qpair failed and we were unable to recover it. 
00:37:39.534 [2024-12-16 12:59:05.473309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.534 [2024-12-16 12:59:05.473342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.534 qpair failed and we were unable to recover it. 00:37:39.534 [2024-12-16 12:59:05.473594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.534 [2024-12-16 12:59:05.473625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.534 qpair failed and we were unable to recover it. 00:37:39.534 [2024-12-16 12:59:05.473902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.534 [2024-12-16 12:59:05.473934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.534 qpair failed and we were unable to recover it. 00:37:39.534 [2024-12-16 12:59:05.474210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.534 [2024-12-16 12:59:05.474245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.534 qpair failed and we were unable to recover it. 00:37:39.534 [2024-12-16 12:59:05.474442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.534 [2024-12-16 12:59:05.474474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.534 qpair failed and we were unable to recover it. 00:37:39.534 [2024-12-16 12:59:05.474748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.534 [2024-12-16 12:59:05.474780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.534 qpair failed and we were unable to recover it. 00:37:39.534 [2024-12-16 12:59:05.475052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.534 [2024-12-16 12:59:05.475084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.534 qpair failed and we were unable to recover it. 00:37:39.534 [2024-12-16 12:59:05.475358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.534 [2024-12-16 12:59:05.475390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.534 qpair failed and we were unable to recover it. 00:37:39.534 [2024-12-16 12:59:05.475601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.534 [2024-12-16 12:59:05.475634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.534 qpair failed and we were unable to recover it. 00:37:39.534 [2024-12-16 12:59:05.475910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.534 [2024-12-16 12:59:05.475942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.534 qpair failed and we were unable to recover it. 
00:37:39.534 [2024-12-16 12:59:05.476136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.534 [2024-12-16 12:59:05.476170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.534 qpair failed and we were unable to recover it. 00:37:39.534 [2024-12-16 12:59:05.476429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.534 [2024-12-16 12:59:05.476473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.534 qpair failed and we were unable to recover it. 00:37:39.534 [2024-12-16 12:59:05.476741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.534 [2024-12-16 12:59:05.476773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.534 qpair failed and we were unable to recover it. 00:37:39.534 [2024-12-16 12:59:05.477054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.534 [2024-12-16 12:59:05.477087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.534 qpair failed and we were unable to recover it. 00:37:39.534 [2024-12-16 12:59:05.477372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.534 [2024-12-16 12:59:05.477404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.534 qpair failed and we were unable to recover it. 00:37:39.534 [2024-12-16 12:59:05.477706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.534 [2024-12-16 12:59:05.477738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.534 qpair failed and we were unable to recover it. 00:37:39.534 [2024-12-16 12:59:05.477877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.534 [2024-12-16 12:59:05.477909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.534 qpair failed and we were unable to recover it. 00:37:39.534 [2024-12-16 12:59:05.478158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.534 [2024-12-16 12:59:05.478193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.534 qpair failed and we were unable to recover it. 00:37:39.534 [2024-12-16 12:59:05.478385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.535 [2024-12-16 12:59:05.478418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.535 qpair failed and we were unable to recover it. 00:37:39.535 [2024-12-16 12:59:05.478610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.535 [2024-12-16 12:59:05.478643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.535 qpair failed and we were unable to recover it. 
00:37:39.535 [2024-12-16 12:59:05.478770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.535 [2024-12-16 12:59:05.478802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.535 qpair failed and we were unable to recover it. 00:37:39.535 [2024-12-16 12:59:05.478982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.535 [2024-12-16 12:59:05.479013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.535 qpair failed and we were unable to recover it. 00:37:39.535 [2024-12-16 12:59:05.479237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.535 [2024-12-16 12:59:05.479272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.535 qpair failed and we were unable to recover it. 00:37:39.535 [2024-12-16 12:59:05.479396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.535 [2024-12-16 12:59:05.479428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.535 qpair failed and we were unable to recover it. 00:37:39.535 [2024-12-16 12:59:05.479755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.535 [2024-12-16 12:59:05.479787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.535 qpair failed and we were unable to recover it. 00:37:39.535 [2024-12-16 12:59:05.480057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.535 [2024-12-16 12:59:05.480090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.535 qpair failed and we were unable to recover it. 00:37:39.535 [2024-12-16 12:59:05.480381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.535 [2024-12-16 12:59:05.480414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.535 qpair failed and we were unable to recover it. 00:37:39.535 [2024-12-16 12:59:05.480686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.535 [2024-12-16 12:59:05.480718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.535 qpair failed and we were unable to recover it. 00:37:39.535 [2024-12-16 12:59:05.480845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.535 [2024-12-16 12:59:05.480877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.535 qpair failed and we were unable to recover it. 00:37:39.535 [2024-12-16 12:59:05.481149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.535 [2024-12-16 12:59:05.481182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.535 qpair failed and we were unable to recover it. 
00:37:39.535 [2024-12-16 12:59:05.481375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.535 [2024-12-16 12:59:05.481408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.535 qpair failed and we were unable to recover it. 00:37:39.535 [2024-12-16 12:59:05.481709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.535 [2024-12-16 12:59:05.481741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.535 qpair failed and we were unable to recover it. 00:37:39.535 [2024-12-16 12:59:05.482025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.535 [2024-12-16 12:59:05.482057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.535 qpair failed and we were unable to recover it. 00:37:39.535 [2024-12-16 12:59:05.482360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.535 [2024-12-16 12:59:05.482394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.535 qpair failed and we were unable to recover it. 00:37:39.535 [2024-12-16 12:59:05.482658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.535 [2024-12-16 12:59:05.482690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.535 qpair failed and we were unable to recover it. 00:37:39.535 [2024-12-16 12:59:05.482896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.535 [2024-12-16 12:59:05.482929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.535 qpair failed and we were unable to recover it. 00:37:39.535 [2024-12-16 12:59:05.483194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.535 [2024-12-16 12:59:05.483227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.535 qpair failed and we were unable to recover it. 00:37:39.535 [2024-12-16 12:59:05.483517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.535 [2024-12-16 12:59:05.483549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.535 qpair failed and we were unable to recover it. 00:37:39.535 [2024-12-16 12:59:05.483827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.535 [2024-12-16 12:59:05.483865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.535 qpair failed and we were unable to recover it. 00:37:39.535 [2024-12-16 12:59:05.484135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.535 [2024-12-16 12:59:05.484168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.535 qpair failed and we were unable to recover it. 
00:37:39.535 [2024-12-16 12:59:05.484464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.535 [2024-12-16 12:59:05.484496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.535 qpair failed and we were unable to recover it. 00:37:39.535 [2024-12-16 12:59:05.484738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.535 [2024-12-16 12:59:05.484770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.535 qpair failed and we were unable to recover it. 00:37:39.535 [2024-12-16 12:59:05.485069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.535 [2024-12-16 12:59:05.485101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.535 qpair failed and we were unable to recover it. 00:37:39.535 [2024-12-16 12:59:05.485380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.535 [2024-12-16 12:59:05.485412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.535 qpair failed and we were unable to recover it. 00:37:39.535 [2024-12-16 12:59:05.485692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.535 [2024-12-16 12:59:05.485724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.535 qpair failed and we were unable to recover it. 00:37:39.535 [2024-12-16 12:59:05.485998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.535 [2024-12-16 12:59:05.486031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.535 qpair failed and we were unable to recover it. 00:37:39.535 [2024-12-16 12:59:05.486329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.535 [2024-12-16 12:59:05.486363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.535 qpair failed and we were unable to recover it. 00:37:39.535 [2024-12-16 12:59:05.486631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.535 [2024-12-16 12:59:05.486663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.535 qpair failed and we were unable to recover it. 00:37:39.535 [2024-12-16 12:59:05.486981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.535 [2024-12-16 12:59:05.487013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.535 qpair failed and we were unable to recover it. 00:37:39.535 [2024-12-16 12:59:05.487293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.535 [2024-12-16 12:59:05.487326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.535 qpair failed and we were unable to recover it. 
00:37:39.535 [2024-12-16 12:59:05.487628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.535 [2024-12-16 12:59:05.487661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.535 qpair failed and we were unable to recover it. 00:37:39.535 [2024-12-16 12:59:05.487865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.535 [2024-12-16 12:59:05.487897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.535 qpair failed and we were unable to recover it. 00:37:39.535 [2024-12-16 12:59:05.488180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.535 [2024-12-16 12:59:05.488213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.535 qpair failed and we were unable to recover it. 00:37:39.536 [2024-12-16 12:59:05.488495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.536 [2024-12-16 12:59:05.488527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.536 qpair failed and we were unable to recover it. 00:37:39.536 [2024-12-16 12:59:05.488722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.536 [2024-12-16 12:59:05.488754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.536 qpair failed and we were unable to recover it. 00:37:39.536 [2024-12-16 12:59:05.489025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.536 [2024-12-16 12:59:05.489057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.536 qpair failed and we were unable to recover it. 00:37:39.536 [2024-12-16 12:59:05.489252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.536 [2024-12-16 12:59:05.489286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.536 qpair failed and we were unable to recover it. 00:37:39.536 [2024-12-16 12:59:05.489548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.536 [2024-12-16 12:59:05.489581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.536 qpair failed and we were unable to recover it. 00:37:39.536 [2024-12-16 12:59:05.489778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.536 [2024-12-16 12:59:05.489810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.536 qpair failed and we were unable to recover it. 00:37:39.536 [2024-12-16 12:59:05.490086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.536 [2024-12-16 12:59:05.490146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.536 qpair failed and we were unable to recover it. 
00:37:39.536 [2024-12-16 12:59:05.490329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.536 [2024-12-16 12:59:05.490362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.536 qpair failed and we were unable to recover it. 00:37:39.536 [2024-12-16 12:59:05.490559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.536 [2024-12-16 12:59:05.490592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.536 qpair failed and we were unable to recover it. 00:37:39.536 [2024-12-16 12:59:05.490867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.536 [2024-12-16 12:59:05.490899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.536 qpair failed and we were unable to recover it. 00:37:39.536 [2024-12-16 12:59:05.491178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.536 [2024-12-16 12:59:05.491212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.536 qpair failed and we were unable to recover it. 00:37:39.536 [2024-12-16 12:59:05.491441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.536 [2024-12-16 12:59:05.491473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.536 qpair failed and we were unable to recover it. 00:37:39.536 [2024-12-16 12:59:05.491725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.536 [2024-12-16 12:59:05.491757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.536 qpair failed and we were unable to recover it. 00:37:39.536 [2024-12-16 12:59:05.492067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.536 [2024-12-16 12:59:05.492098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.536 qpair failed and we were unable to recover it. 00:37:39.536 [2024-12-16 12:59:05.492364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.536 [2024-12-16 12:59:05.492396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.536 qpair failed and we were unable to recover it. 00:37:39.536 [2024-12-16 12:59:05.492628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.536 [2024-12-16 12:59:05.492660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.536 qpair failed and we were unable to recover it. 00:37:39.536 [2024-12-16 12:59:05.492932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.536 [2024-12-16 12:59:05.492964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.536 qpair failed and we were unable to recover it. 
00:37:39.536 [2024-12-16 12:59:05.493244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.536 [2024-12-16 12:59:05.493278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.536 qpair failed and we were unable to recover it. 00:37:39.536 [2024-12-16 12:59:05.493568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.536 [2024-12-16 12:59:05.493600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.536 qpair failed and we were unable to recover it. 00:37:39.536 [2024-12-16 12:59:05.493876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.536 [2024-12-16 12:59:05.493908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.536 qpair failed and we were unable to recover it. 00:37:39.536 [2024-12-16 12:59:05.494147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.536 [2024-12-16 12:59:05.494181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.536 qpair failed and we were unable to recover it. 00:37:39.536 [2024-12-16 12:59:05.494436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.536 [2024-12-16 12:59:05.494469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.536 qpair failed and we were unable to recover it. 00:37:39.536 [2024-12-16 12:59:05.494760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.536 [2024-12-16 12:59:05.494791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.536 qpair failed and we were unable to recover it. 00:37:39.536 [2024-12-16 12:59:05.495068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.536 [2024-12-16 12:59:05.495100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.536 qpair failed and we were unable to recover it. 00:37:39.536 [2024-12-16 12:59:05.495366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.536 [2024-12-16 12:59:05.495398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.536 qpair failed and we were unable to recover it. 00:37:39.536 [2024-12-16 12:59:05.495610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.536 [2024-12-16 12:59:05.495642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.536 qpair failed and we were unable to recover it. 00:37:39.536 [2024-12-16 12:59:05.495855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.536 [2024-12-16 12:59:05.495887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.536 qpair failed and we were unable to recover it. 
00:37:39.536 [2024-12-16 12:59:05.496140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.536 [2024-12-16 12:59:05.496175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.536 qpair failed and we were unable to recover it. 00:37:39.536 [2024-12-16 12:59:05.496486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.536 [2024-12-16 12:59:05.496518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.536 qpair failed and we were unable to recover it. 00:37:39.536 [2024-12-16 12:59:05.496728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.536 [2024-12-16 12:59:05.496759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.536 qpair failed and we were unable to recover it. 00:37:39.536 [2024-12-16 12:59:05.497033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.536 [2024-12-16 12:59:05.497066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.536 qpair failed and we were unable to recover it. 00:37:39.536 [2024-12-16 12:59:05.497274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.536 [2024-12-16 12:59:05.497306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.536 qpair failed and we were unable to recover it. 00:37:39.536 [2024-12-16 12:59:05.497510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.536 [2024-12-16 12:59:05.497542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.536 qpair failed and we were unable to recover it. 00:37:39.536 [2024-12-16 12:59:05.497676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.536 [2024-12-16 12:59:05.497709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.536 qpair failed and we were unable to recover it. 00:37:39.536 [2024-12-16 12:59:05.497926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.536 [2024-12-16 12:59:05.497958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.536 qpair failed and we were unable to recover it. 00:37:39.536 [2024-12-16 12:59:05.498165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.536 [2024-12-16 12:59:05.498199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.536 qpair failed and we were unable to recover it. 00:37:39.536 [2024-12-16 12:59:05.498472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.536 [2024-12-16 12:59:05.498505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.536 qpair failed and we were unable to recover it. 
00:37:39.537 [2024-12-16 12:59:05.498780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.537 [2024-12-16 12:59:05.498814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.537 qpair failed and we were unable to recover it. 00:37:39.537 [2024-12-16 12:59:05.498998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.537 [2024-12-16 12:59:05.499032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.537 qpair failed and we were unable to recover it. 00:37:39.537 [2024-12-16 12:59:05.499306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.537 [2024-12-16 12:59:05.499341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.537 qpair failed and we were unable to recover it. 00:37:39.537 [2024-12-16 12:59:05.499564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.537 [2024-12-16 12:59:05.499597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.537 qpair failed and we were unable to recover it. 00:37:39.537 [2024-12-16 12:59:05.499882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.537 [2024-12-16 12:59:05.499915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.537 qpair failed and we were unable to recover it. 00:37:39.537 [2024-12-16 12:59:05.500168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.537 [2024-12-16 12:59:05.500202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.537 qpair failed and we were unable to recover it. 00:37:39.537 [2024-12-16 12:59:05.500505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.537 [2024-12-16 12:59:05.500538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.537 qpair failed and we were unable to recover it. 00:37:39.537 [2024-12-16 12:59:05.500825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.537 [2024-12-16 12:59:05.500858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.537 qpair failed and we were unable to recover it. 00:37:39.537 [2024-12-16 12:59:05.501140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.537 [2024-12-16 12:59:05.501174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.537 qpair failed and we were unable to recover it. 00:37:39.537 [2024-12-16 12:59:05.501399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.537 [2024-12-16 12:59:05.501433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.537 qpair failed and we were unable to recover it. 
00:37:39.537 [2024-12-16 12:59:05.501628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.537 [2024-12-16 12:59:05.501661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.537 qpair failed and we were unable to recover it.
[... the same three-line error repeats roughly 200 more times between 12:59:05.501 and 12:59:05.559 (log prefix advancing from 00:37:39.537 to 00:37:39.823), always with connect() errno = 111 and sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420, each attempt ending in "qpair failed and we were unable to recover it." ...]
00:37:39.824 [2024-12-16 12:59:05.559937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.824 [2024-12-16 12:59:05.559969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.824 qpair failed and we were unable to recover it. 00:37:39.824 [2024-12-16 12:59:05.560263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.824 [2024-12-16 12:59:05.560296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.824 qpair failed and we were unable to recover it. 00:37:39.824 [2024-12-16 12:59:05.560575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.824 [2024-12-16 12:59:05.560607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.824 qpair failed and we were unable to recover it. 00:37:39.824 [2024-12-16 12:59:05.560803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.824 [2024-12-16 12:59:05.560835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.824 qpair failed and we were unable to recover it. 00:37:39.824 [2024-12-16 12:59:05.561097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.824 [2024-12-16 12:59:05.561142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.824 qpair failed and we were unable to recover it. 00:37:39.824 [2024-12-16 12:59:05.561329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.824 [2024-12-16 12:59:05.561362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.824 qpair failed and we were unable to recover it. 00:37:39.824 [2024-12-16 12:59:05.561577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.824 [2024-12-16 12:59:05.561610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.824 qpair failed and we were unable to recover it. 00:37:39.824 [2024-12-16 12:59:05.561792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.824 [2024-12-16 12:59:05.561823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.824 qpair failed and we were unable to recover it. 00:37:39.824 [2024-12-16 12:59:05.562090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.824 [2024-12-16 12:59:05.562149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.824 qpair failed and we were unable to recover it. 00:37:39.824 [2024-12-16 12:59:05.562334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.824 [2024-12-16 12:59:05.562367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.824 qpair failed and we were unable to recover it. 
00:37:39.824 [2024-12-16 12:59:05.562644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.824 [2024-12-16 12:59:05.562676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.824 qpair failed and we were unable to recover it. 00:37:39.824 [2024-12-16 12:59:05.562952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.824 [2024-12-16 12:59:05.562984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.824 qpair failed and we were unable to recover it. 00:37:39.824 [2024-12-16 12:59:05.563279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.824 [2024-12-16 12:59:05.563313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.824 qpair failed and we were unable to recover it. 00:37:39.824 [2024-12-16 12:59:05.563587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.824 [2024-12-16 12:59:05.563625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.824 qpair failed and we were unable to recover it. 00:37:39.824 [2024-12-16 12:59:05.563758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.824 [2024-12-16 12:59:05.563789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.824 qpair failed and we were unable to recover it. 00:37:39.824 [2024-12-16 12:59:05.563980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.824 [2024-12-16 12:59:05.564013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.824 qpair failed and we were unable to recover it. 00:37:39.824 [2024-12-16 12:59:05.564143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.824 [2024-12-16 12:59:05.564176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.824 qpair failed and we were unable to recover it. 00:37:39.824 [2024-12-16 12:59:05.564399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.824 [2024-12-16 12:59:05.564430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.824 qpair failed and we were unable to recover it. 00:37:39.824 [2024-12-16 12:59:05.564750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.824 [2024-12-16 12:59:05.564783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.824 qpair failed and we were unable to recover it. 00:37:39.824 [2024-12-16 12:59:05.564930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.824 [2024-12-16 12:59:05.564961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.824 qpair failed and we were unable to recover it. 
00:37:39.824 [2024-12-16 12:59:05.565216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.824 [2024-12-16 12:59:05.565250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.824 qpair failed and we were unable to recover it. 00:37:39.824 [2024-12-16 12:59:05.565431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.824 [2024-12-16 12:59:05.565462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.824 qpair failed and we were unable to recover it. 00:37:39.825 [2024-12-16 12:59:05.565656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.825 [2024-12-16 12:59:05.565687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.825 qpair failed and we were unable to recover it. 00:37:39.825 [2024-12-16 12:59:05.565830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.825 [2024-12-16 12:59:05.565863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.825 qpair failed and we were unable to recover it. 00:37:39.825 [2024-12-16 12:59:05.566152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.825 [2024-12-16 12:59:05.566187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.825 qpair failed and we were unable to recover it. 00:37:39.825 [2024-12-16 12:59:05.566480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.825 [2024-12-16 12:59:05.566513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.825 qpair failed and we were unable to recover it. 00:37:39.825 [2024-12-16 12:59:05.566803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.825 [2024-12-16 12:59:05.566836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.825 qpair failed and we were unable to recover it. 00:37:39.825 [2024-12-16 12:59:05.567125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.825 [2024-12-16 12:59:05.567159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.825 qpair failed and we were unable to recover it. 00:37:39.825 [2024-12-16 12:59:05.567435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.825 [2024-12-16 12:59:05.567468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.825 qpair failed and we were unable to recover it. 00:37:39.825 [2024-12-16 12:59:05.567745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.825 [2024-12-16 12:59:05.567777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.825 qpair failed and we were unable to recover it. 
00:37:39.825 [2024-12-16 12:59:05.567974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.825 [2024-12-16 12:59:05.568006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.825 qpair failed and we were unable to recover it. 00:37:39.825 [2024-12-16 12:59:05.568271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.825 [2024-12-16 12:59:05.568305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.825 qpair failed and we were unable to recover it. 00:37:39.825 [2024-12-16 12:59:05.568579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.825 [2024-12-16 12:59:05.568612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.825 qpair failed and we were unable to recover it. 00:37:39.825 [2024-12-16 12:59:05.568905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.825 [2024-12-16 12:59:05.568937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.825 qpair failed and we were unable to recover it. 00:37:39.825 [2024-12-16 12:59:05.569213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.825 [2024-12-16 12:59:05.569247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.825 qpair failed and we were unable to recover it. 00:37:39.825 [2024-12-16 12:59:05.569443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.825 [2024-12-16 12:59:05.569476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.825 qpair failed and we were unable to recover it. 00:37:39.825 [2024-12-16 12:59:05.569750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.825 [2024-12-16 12:59:05.569782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.825 qpair failed and we were unable to recover it. 00:37:39.825 [2024-12-16 12:59:05.569935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.825 [2024-12-16 12:59:05.569967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.825 qpair failed and we were unable to recover it. 00:37:39.825 [2024-12-16 12:59:05.570162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.825 [2024-12-16 12:59:05.570196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.825 qpair failed and we were unable to recover it. 00:37:39.825 [2024-12-16 12:59:05.570386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.825 [2024-12-16 12:59:05.570418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.825 qpair failed and we were unable to recover it. 
00:37:39.825 [2024-12-16 12:59:05.570544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.825 [2024-12-16 12:59:05.570582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.825 qpair failed and we were unable to recover it. 00:37:39.825 [2024-12-16 12:59:05.570833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.825 [2024-12-16 12:59:05.570865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.825 qpair failed and we were unable to recover it. 00:37:39.825 [2024-12-16 12:59:05.571055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.825 [2024-12-16 12:59:05.571087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.825 qpair failed and we were unable to recover it. 00:37:39.825 [2024-12-16 12:59:05.571398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.825 [2024-12-16 12:59:05.571432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.825 qpair failed and we were unable to recover it. 00:37:39.825 [2024-12-16 12:59:05.571693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.825 [2024-12-16 12:59:05.571725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.825 qpair failed and we were unable to recover it. 00:37:39.825 [2024-12-16 12:59:05.571947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.825 [2024-12-16 12:59:05.571980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.825 qpair failed and we were unable to recover it. 00:37:39.825 [2024-12-16 12:59:05.572283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.825 [2024-12-16 12:59:05.572317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.825 qpair failed and we were unable to recover it. 00:37:39.825 [2024-12-16 12:59:05.572578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.826 [2024-12-16 12:59:05.572610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.826 qpair failed and we were unable to recover it. 00:37:39.826 [2024-12-16 12:59:05.572816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.826 [2024-12-16 12:59:05.572848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.826 qpair failed and we were unable to recover it. 00:37:39.826 [2024-12-16 12:59:05.573126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.826 [2024-12-16 12:59:05.573160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.826 qpair failed and we were unable to recover it. 
00:37:39.826 [2024-12-16 12:59:05.573379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.826 [2024-12-16 12:59:05.573412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.826 qpair failed and we were unable to recover it. 00:37:39.826 [2024-12-16 12:59:05.573690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.826 [2024-12-16 12:59:05.573722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.826 qpair failed and we were unable to recover it. 00:37:39.826 [2024-12-16 12:59:05.573994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.826 [2024-12-16 12:59:05.574026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.826 qpair failed and we were unable to recover it. 00:37:39.826 [2024-12-16 12:59:05.574170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.826 [2024-12-16 12:59:05.574204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.826 qpair failed and we were unable to recover it. 00:37:39.826 [2024-12-16 12:59:05.574404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.826 [2024-12-16 12:59:05.574437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.826 qpair failed and we were unable to recover it. 00:37:39.826 [2024-12-16 12:59:05.574712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.826 [2024-12-16 12:59:05.574744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.826 qpair failed and we were unable to recover it. 00:37:39.826 [2024-12-16 12:59:05.575031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.826 [2024-12-16 12:59:05.575063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.826 qpair failed and we were unable to recover it. 00:37:39.826 [2024-12-16 12:59:05.575346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.826 [2024-12-16 12:59:05.575380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.826 qpair failed and we were unable to recover it. 00:37:39.826 [2024-12-16 12:59:05.575659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.826 [2024-12-16 12:59:05.575692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.826 qpair failed and we were unable to recover it. 00:37:39.826 [2024-12-16 12:59:05.575897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.826 [2024-12-16 12:59:05.575929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.826 qpair failed and we were unable to recover it. 
00:37:39.826 [2024-12-16 12:59:05.576228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.826 [2024-12-16 12:59:05.576262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.826 qpair failed and we were unable to recover it. 00:37:39.826 [2024-12-16 12:59:05.576523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.826 [2024-12-16 12:59:05.576556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.826 qpair failed and we were unable to recover it. 00:37:39.826 [2024-12-16 12:59:05.576783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.826 [2024-12-16 12:59:05.576815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.826 qpair failed and we were unable to recover it. 00:37:39.826 [2024-12-16 12:59:05.577039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.826 [2024-12-16 12:59:05.577072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.826 qpair failed and we were unable to recover it. 00:37:39.826 [2024-12-16 12:59:05.577261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.826 [2024-12-16 12:59:05.577296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.826 qpair failed and we were unable to recover it. 00:37:39.826 [2024-12-16 12:59:05.577603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.826 [2024-12-16 12:59:05.577635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.826 qpair failed and we were unable to recover it. 00:37:39.826 [2024-12-16 12:59:05.577763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.826 [2024-12-16 12:59:05.577795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.826 qpair failed and we were unable to recover it. 00:37:39.826 [2024-12-16 12:59:05.578065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.826 [2024-12-16 12:59:05.578098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.826 qpair failed and we were unable to recover it. 00:37:39.826 [2024-12-16 12:59:05.578372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.826 [2024-12-16 12:59:05.578405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.826 qpair failed and we were unable to recover it. 00:37:39.826 [2024-12-16 12:59:05.578625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.826 [2024-12-16 12:59:05.578658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.826 qpair failed and we were unable to recover it. 
00:37:39.826 [2024-12-16 12:59:05.578932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.826 [2024-12-16 12:59:05.578964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.826 qpair failed and we were unable to recover it. 00:37:39.827 [2024-12-16 12:59:05.579247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.827 [2024-12-16 12:59:05.579281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.827 qpair failed and we were unable to recover it. 00:37:39.827 [2024-12-16 12:59:05.579423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.827 [2024-12-16 12:59:05.579455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.827 qpair failed and we were unable to recover it. 00:37:39.827 [2024-12-16 12:59:05.579708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.827 [2024-12-16 12:59:05.579740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.827 qpair failed and we were unable to recover it. 00:37:39.827 [2024-12-16 12:59:05.579929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.827 [2024-12-16 12:59:05.579961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.827 qpair failed and we were unable to recover it. 00:37:39.827 [2024-12-16 12:59:05.580214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.827 [2024-12-16 12:59:05.580248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.827 qpair failed and we were unable to recover it. 00:37:39.827 [2024-12-16 12:59:05.580499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.827 [2024-12-16 12:59:05.580531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.827 qpair failed and we were unable to recover it. 00:37:39.827 [2024-12-16 12:59:05.580756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.827 [2024-12-16 12:59:05.580788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.827 qpair failed and we were unable to recover it. 00:37:39.827 [2024-12-16 12:59:05.580973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.827 [2024-12-16 12:59:05.581005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.827 qpair failed and we were unable to recover it. 00:37:39.827 [2024-12-16 12:59:05.581186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.827 [2024-12-16 12:59:05.581219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.827 qpair failed and we were unable to recover it. 
00:37:39.827 [2024-12-16 12:59:05.581422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.827 [2024-12-16 12:59:05.581455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.827 qpair failed and we were unable to recover it. 00:37:39.827 [2024-12-16 12:59:05.581645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.827 [2024-12-16 12:59:05.581679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.827 qpair failed and we were unable to recover it. 00:37:39.827 [2024-12-16 12:59:05.581953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.827 [2024-12-16 12:59:05.581986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.827 qpair failed and we were unable to recover it. 00:37:39.827 [2024-12-16 12:59:05.582187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.827 [2024-12-16 12:59:05.582222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.827 qpair failed and we were unable to recover it. 00:37:39.827 [2024-12-16 12:59:05.582479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.827 [2024-12-16 12:59:05.582511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.827 qpair failed and we were unable to recover it. 00:37:39.827 [2024-12-16 12:59:05.582768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.827 [2024-12-16 12:59:05.582800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.827 qpair failed and we were unable to recover it. 00:37:39.827 [2024-12-16 12:59:05.583100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.827 [2024-12-16 12:59:05.583144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.827 qpair failed and we were unable to recover it. 00:37:39.827 [2024-12-16 12:59:05.583418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.827 [2024-12-16 12:59:05.583451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.827 qpair failed and we were unable to recover it. 00:37:39.827 [2024-12-16 12:59:05.583733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.827 [2024-12-16 12:59:05.583766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.827 qpair failed and we were unable to recover it. 00:37:39.827 [2024-12-16 12:59:05.584052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.827 [2024-12-16 12:59:05.584084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.827 qpair failed and we were unable to recover it. 
00:37:39.827 [2024-12-16 12:59:05.584362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.827 [2024-12-16 12:59:05.584395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.827 qpair failed and we were unable to recover it. 00:37:39.827 [2024-12-16 12:59:05.584685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.827 [2024-12-16 12:59:05.584717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.827 qpair failed and we were unable to recover it. 00:37:39.827 [2024-12-16 12:59:05.584997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.827 [2024-12-16 12:59:05.585029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.827 qpair failed and we were unable to recover it. 00:37:39.827 [2024-12-16 12:59:05.585245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.827 [2024-12-16 12:59:05.585280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.827 qpair failed and we were unable to recover it. 00:37:39.827 [2024-12-16 12:59:05.585556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.827 [2024-12-16 12:59:05.585588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.827 qpair failed and we were unable to recover it. 00:37:39.827 [2024-12-16 12:59:05.585854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.827 [2024-12-16 12:59:05.585886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.827 qpair failed and we were unable to recover it. 00:37:39.828 [2024-12-16 12:59:05.586186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.828 [2024-12-16 12:59:05.586220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.828 qpair failed and we were unable to recover it. 00:37:39.828 [2024-12-16 12:59:05.586502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.828 [2024-12-16 12:59:05.586534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.828 qpair failed and we were unable to recover it. 00:37:39.828 [2024-12-16 12:59:05.586822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.828 [2024-12-16 12:59:05.586854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.828 qpair failed and we were unable to recover it. 00:37:39.828 [2024-12-16 12:59:05.587034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.828 [2024-12-16 12:59:05.587066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.828 qpair failed and we were unable to recover it. 
00:37:39.828 [2024-12-16 12:59:05.587341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.828 [2024-12-16 12:59:05.587375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.828 qpair failed and we were unable to recover it. 00:37:39.828 [2024-12-16 12:59:05.587656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.828 [2024-12-16 12:59:05.587688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.828 qpair failed and we were unable to recover it. 00:37:39.828 [2024-12-16 12:59:05.587973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.828 [2024-12-16 12:59:05.588005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.828 qpair failed and we were unable to recover it. 00:37:39.828 [2024-12-16 12:59:05.588263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.828 [2024-12-16 12:59:05.588297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.828 qpair failed and we were unable to recover it. 00:37:39.828 [2024-12-16 12:59:05.588598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.828 [2024-12-16 12:59:05.588630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.828 qpair failed and we were unable to recover it. 00:37:39.828 [2024-12-16 12:59:05.588855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.828 [2024-12-16 12:59:05.588888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.828 qpair failed and we were unable to recover it. 00:37:39.828 [2024-12-16 12:59:05.589044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.828 [2024-12-16 12:59:05.589076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.828 qpair failed and we were unable to recover it. 00:37:39.828 [2024-12-16 12:59:05.589339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.828 [2024-12-16 12:59:05.589373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.828 qpair failed and we were unable to recover it. 00:37:39.828 [2024-12-16 12:59:05.589672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.828 [2024-12-16 12:59:05.589717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.828 qpair failed and we were unable to recover it. 00:37:39.828 [2024-12-16 12:59:05.589992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.828 [2024-12-16 12:59:05.590025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.828 qpair failed and we were unable to recover it. 
00:37:39.828 [2024-12-16 12:59:05.590311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.828 [2024-12-16 12:59:05.590346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.828 qpair failed and we were unable to recover it. 00:37:39.828 [2024-12-16 12:59:05.590624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.828 [2024-12-16 12:59:05.590657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.828 qpair failed and we were unable to recover it. 00:37:39.828 [2024-12-16 12:59:05.590915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.828 [2024-12-16 12:59:05.590947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.828 qpair failed and we were unable to recover it. 00:37:39.828 [2024-12-16 12:59:05.591155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.828 [2024-12-16 12:59:05.591189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.828 qpair failed and we were unable to recover it. 00:37:39.828 [2024-12-16 12:59:05.591460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.828 [2024-12-16 12:59:05.591493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.828 qpair failed and we were unable to recover it. 00:37:39.828 [2024-12-16 12:59:05.591796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.828 [2024-12-16 12:59:05.591828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.828 qpair failed and we were unable to recover it. 00:37:39.828 [2024-12-16 12:59:05.592139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.828 [2024-12-16 12:59:05.592173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.828 qpair failed and we were unable to recover it. 00:37:39.828 [2024-12-16 12:59:05.592386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.828 [2024-12-16 12:59:05.592418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.828 qpair failed and we were unable to recover it. 00:37:39.828 [2024-12-16 12:59:05.592638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.828 [2024-12-16 12:59:05.592670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.828 qpair failed and we were unable to recover it. 00:37:39.828 [2024-12-16 12:59:05.592857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.828 [2024-12-16 12:59:05.592888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.828 qpair failed and we were unable to recover it. 
00:37:39.828 [2024-12-16 12:59:05.593162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.828 [2024-12-16 12:59:05.593195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.828 qpair failed and we were unable to recover it. 00:37:39.828 [2024-12-16 12:59:05.593470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.828 [2024-12-16 12:59:05.593502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.828 qpair failed and we were unable to recover it. 00:37:39.828 [2024-12-16 12:59:05.593732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.828 [2024-12-16 12:59:05.593764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.829 qpair failed and we were unable to recover it. 00:37:39.829 [2024-12-16 12:59:05.594037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.829 [2024-12-16 12:59:05.594069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.829 qpair failed and we were unable to recover it. 00:37:39.829 [2024-12-16 12:59:05.594288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.829 [2024-12-16 12:59:05.594322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.829 qpair failed and we were unable to recover it. 00:37:39.829 [2024-12-16 12:59:05.594595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.829 [2024-12-16 12:59:05.594627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.829 qpair failed and we were unable to recover it. 00:37:39.829 [2024-12-16 12:59:05.594907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.829 [2024-12-16 12:59:05.594939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.829 qpair failed and we were unable to recover it. 00:37:39.829 [2024-12-16 12:59:05.595204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.829 [2024-12-16 12:59:05.595239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.829 qpair failed and we were unable to recover it. 00:37:39.829 [2024-12-16 12:59:05.595454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.829 [2024-12-16 12:59:05.595486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.829 qpair failed and we were unable to recover it. 00:37:39.829 [2024-12-16 12:59:05.595672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.829 [2024-12-16 12:59:05.595704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.829 qpair failed and we were unable to recover it. 
00:37:39.829 [2024-12-16 12:59:05.595888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.829 [2024-12-16 12:59:05.595921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:39.829 qpair failed and we were unable to recover it.
00:37:39.834 [... identical connect() failed (errno = 111) / qpair failed records for tqpair=0x1588110, addr=10.0.0.2, port=4420 repeat through 2024-12-16 12:59:05.639084 ...]
00:37:39.834 [2024-12-16 12:59:05.639418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.834 [2024-12-16 12:59:05.639491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:39.834 qpair failed and we were unable to recover it.
00:37:39.835 [... identical records for tqpair=0x7f84e4000b90, addr=10.0.0.2, port=4420 repeat through 2024-12-16 12:59:05.656582 ...]
00:37:39.835 [2024-12-16 12:59:05.656820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.835 [2024-12-16 12:59:05.656859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.835 qpair failed and we were unable to recover it. 00:37:39.835 [2024-12-16 12:59:05.657130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.835 [2024-12-16 12:59:05.657173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.835 qpair failed and we were unable to recover it. 00:37:39.835 [2024-12-16 12:59:05.657381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.835 [2024-12-16 12:59:05.657422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.835 qpair failed and we were unable to recover it. 00:37:39.835 [2024-12-16 12:59:05.657734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.835 [2024-12-16 12:59:05.657774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.835 qpair failed and we were unable to recover it. 00:37:39.836 [2024-12-16 12:59:05.658014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.836 [2024-12-16 12:59:05.658055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.836 qpair failed and we were unable to recover it. 00:37:39.836 [2024-12-16 12:59:05.658375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.836 [2024-12-16 12:59:05.658416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.836 qpair failed and we were unable to recover it. 00:37:39.836 [2024-12-16 12:59:05.658673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.836 [2024-12-16 12:59:05.658712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.836 qpair failed and we were unable to recover it. 00:37:39.836 [2024-12-16 12:59:05.659008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.836 [2024-12-16 12:59:05.659048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.836 qpair failed and we were unable to recover it. 00:37:39.836 [2024-12-16 12:59:05.659390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.836 [2024-12-16 12:59:05.659431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.836 qpair failed and we were unable to recover it. 00:37:39.836 [2024-12-16 12:59:05.659727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.836 [2024-12-16 12:59:05.659767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.836 qpair failed and we were unable to recover it. 
00:37:39.836 [2024-12-16 12:59:05.660074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.836 [2024-12-16 12:59:05.660127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.836 qpair failed and we were unable to recover it. 00:37:39.836 [2024-12-16 12:59:05.660435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.836 [2024-12-16 12:59:05.660482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.836 qpair failed and we were unable to recover it. 00:37:39.836 [2024-12-16 12:59:05.660790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.836 [2024-12-16 12:59:05.660830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.836 qpair failed and we were unable to recover it. 00:37:39.836 [2024-12-16 12:59:05.661125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.836 [2024-12-16 12:59:05.661167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.836 qpair failed and we were unable to recover it. 00:37:39.836 [2024-12-16 12:59:05.661476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.836 [2024-12-16 12:59:05.661517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.836 qpair failed and we were unable to recover it. 00:37:39.836 [2024-12-16 12:59:05.661822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.836 [2024-12-16 12:59:05.661862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.836 qpair failed and we were unable to recover it. 00:37:39.836 [2024-12-16 12:59:05.662168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.836 [2024-12-16 12:59:05.662210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.836 qpair failed and we were unable to recover it. 00:37:39.836 [2024-12-16 12:59:05.662427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.836 [2024-12-16 12:59:05.662466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.836 qpair failed and we were unable to recover it. 00:37:39.836 [2024-12-16 12:59:05.662680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.836 [2024-12-16 12:59:05.662719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.836 qpair failed and we were unable to recover it. 00:37:39.836 [2024-12-16 12:59:05.663048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.836 [2024-12-16 12:59:05.663088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.836 qpair failed and we were unable to recover it. 
00:37:39.836 [2024-12-16 12:59:05.663430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.836 [2024-12-16 12:59:05.663469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.836 qpair failed and we were unable to recover it. 00:37:39.836 [2024-12-16 12:59:05.663789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.836 [2024-12-16 12:59:05.663829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.836 qpair failed and we were unable to recover it. 00:37:39.836 [2024-12-16 12:59:05.664111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.836 [2024-12-16 12:59:05.664165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.836 qpair failed and we were unable to recover it. 00:37:39.836 [2024-12-16 12:59:05.664471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.836 [2024-12-16 12:59:05.664511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.836 qpair failed and we were unable to recover it. 00:37:39.836 [2024-12-16 12:59:05.664743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.836 [2024-12-16 12:59:05.664783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.836 qpair failed and we were unable to recover it. 00:37:39.836 [2024-12-16 12:59:05.665045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.836 [2024-12-16 12:59:05.665086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.836 qpair failed and we were unable to recover it. 00:37:39.836 [2024-12-16 12:59:05.665326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.836 [2024-12-16 12:59:05.665366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.836 qpair failed and we were unable to recover it. 00:37:39.836 [2024-12-16 12:59:05.665672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.836 [2024-12-16 12:59:05.665711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.836 qpair failed and we were unable to recover it. 00:37:39.836 [2024-12-16 12:59:05.665994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.836 [2024-12-16 12:59:05.666035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.836 qpair failed and we were unable to recover it. 00:37:39.836 [2024-12-16 12:59:05.666331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.836 [2024-12-16 12:59:05.666373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.836 qpair failed and we were unable to recover it. 
00:37:39.836 [2024-12-16 12:59:05.666674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.836 [2024-12-16 12:59:05.666713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.836 qpair failed and we were unable to recover it. 00:37:39.836 [2024-12-16 12:59:05.667018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.836 [2024-12-16 12:59:05.667058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.836 qpair failed and we were unable to recover it. 00:37:39.836 [2024-12-16 12:59:05.667304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.836 [2024-12-16 12:59:05.667346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.836 qpair failed and we were unable to recover it. 00:37:39.836 [2024-12-16 12:59:05.667651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.836 [2024-12-16 12:59:05.667691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.836 qpair failed and we were unable to recover it. 00:37:39.836 [2024-12-16 12:59:05.667948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.836 [2024-12-16 12:59:05.667988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.836 qpair failed and we were unable to recover it. 00:37:39.836 [2024-12-16 12:59:05.668202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.836 [2024-12-16 12:59:05.668244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.836 qpair failed and we were unable to recover it. 00:37:39.836 [2024-12-16 12:59:05.668526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.836 [2024-12-16 12:59:05.668565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.836 qpair failed and we were unable to recover it. 00:37:39.836 [2024-12-16 12:59:05.668878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.836 [2024-12-16 12:59:05.668918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.836 qpair failed and we were unable to recover it. 00:37:39.836 [2024-12-16 12:59:05.669243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.836 [2024-12-16 12:59:05.669285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.836 qpair failed and we were unable to recover it. 00:37:39.836 [2024-12-16 12:59:05.669590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.836 [2024-12-16 12:59:05.669629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.836 qpair failed and we were unable to recover it. 
00:37:39.836 [2024-12-16 12:59:05.669933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.836 [2024-12-16 12:59:05.669972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.836 qpair failed and we were unable to recover it. 00:37:39.836 [2024-12-16 12:59:05.670278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.836 [2024-12-16 12:59:05.670320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.836 qpair failed and we were unable to recover it. 00:37:39.836 [2024-12-16 12:59:05.670460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.837 [2024-12-16 12:59:05.670506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.837 qpair failed and we were unable to recover it. 00:37:39.837 [2024-12-16 12:59:05.670804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.837 [2024-12-16 12:59:05.670845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.837 qpair failed and we were unable to recover it. 00:37:39.837 [2024-12-16 12:59:05.671140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.837 [2024-12-16 12:59:05.671182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.837 qpair failed and we were unable to recover it. 00:37:39.837 [2024-12-16 12:59:05.671483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.837 [2024-12-16 12:59:05.671522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.837 qpair failed and we were unable to recover it. 00:37:39.837 [2024-12-16 12:59:05.671827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.837 [2024-12-16 12:59:05.671866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.837 qpair failed and we were unable to recover it. 00:37:39.837 [2024-12-16 12:59:05.672204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.837 [2024-12-16 12:59:05.672246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.837 qpair failed and we were unable to recover it. 00:37:39.837 [2024-12-16 12:59:05.672489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.837 [2024-12-16 12:59:05.672529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.837 qpair failed and we were unable to recover it. 00:37:39.837 [2024-12-16 12:59:05.672840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.837 [2024-12-16 12:59:05.672880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.837 qpair failed and we were unable to recover it. 
00:37:39.837 [2024-12-16 12:59:05.673164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.837 [2024-12-16 12:59:05.673205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.837 qpair failed and we were unable to recover it. 00:37:39.837 [2024-12-16 12:59:05.673529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.837 [2024-12-16 12:59:05.673577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.837 qpair failed and we were unable to recover it. 00:37:39.837 [2024-12-16 12:59:05.673908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.837 [2024-12-16 12:59:05.673949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.837 qpair failed and we were unable to recover it. 00:37:39.837 [2024-12-16 12:59:05.674271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.837 [2024-12-16 12:59:05.674311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.837 qpair failed and we were unable to recover it. 00:37:39.837 [2024-12-16 12:59:05.674616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.837 [2024-12-16 12:59:05.674656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.837 qpair failed and we were unable to recover it. 00:37:39.837 [2024-12-16 12:59:05.674970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.837 [2024-12-16 12:59:05.675010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.837 qpair failed and we were unable to recover it. 00:37:39.837 [2024-12-16 12:59:05.675315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.837 [2024-12-16 12:59:05.675357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.837 qpair failed and we were unable to recover it. 00:37:39.837 [2024-12-16 12:59:05.675645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.837 [2024-12-16 12:59:05.675685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.837 qpair failed and we were unable to recover it. 00:37:39.837 [2024-12-16 12:59:05.675916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.837 [2024-12-16 12:59:05.675955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.837 qpair failed and we were unable to recover it. 00:37:39.837 [2024-12-16 12:59:05.676261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.837 [2024-12-16 12:59:05.676303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.837 qpair failed and we were unable to recover it. 
00:37:39.837 [2024-12-16 12:59:05.676587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.837 [2024-12-16 12:59:05.676628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.837 qpair failed and we were unable to recover it. 00:37:39.837 [2024-12-16 12:59:05.676952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.837 [2024-12-16 12:59:05.676992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.837 qpair failed and we were unable to recover it. 00:37:39.837 [2024-12-16 12:59:05.677297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.837 [2024-12-16 12:59:05.677338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.837 qpair failed and we were unable to recover it. 00:37:39.837 [2024-12-16 12:59:05.677644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.837 [2024-12-16 12:59:05.677684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.837 qpair failed and we were unable to recover it. 00:37:39.837 [2024-12-16 12:59:05.677928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.837 [2024-12-16 12:59:05.677967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.837 qpair failed and we were unable to recover it. 00:37:39.837 [2024-12-16 12:59:05.678199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.837 [2024-12-16 12:59:05.678240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.837 qpair failed and we were unable to recover it. 00:37:39.837 [2024-12-16 12:59:05.678532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.837 [2024-12-16 12:59:05.678573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.837 qpair failed and we were unable to recover it. 00:37:39.837 [2024-12-16 12:59:05.678873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.837 [2024-12-16 12:59:05.678912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.837 qpair failed and we were unable to recover it. 00:37:39.837 [2024-12-16 12:59:05.679217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.837 [2024-12-16 12:59:05.679258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.837 qpair failed and we were unable to recover it. 00:37:39.837 [2024-12-16 12:59:05.679557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.837 [2024-12-16 12:59:05.679597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.837 qpair failed and we were unable to recover it. 
00:37:39.837 [2024-12-16 12:59:05.679942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.837 [2024-12-16 12:59:05.679981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.837 qpair failed and we were unable to recover it. 00:37:39.837 [2024-12-16 12:59:05.680304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.837 [2024-12-16 12:59:05.680345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.837 qpair failed and we were unable to recover it. 00:37:39.837 [2024-12-16 12:59:05.680652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.837 [2024-12-16 12:59:05.680693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.837 qpair failed and we were unable to recover it. 00:37:39.837 [2024-12-16 12:59:05.681003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.837 [2024-12-16 12:59:05.681043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.837 qpair failed and we were unable to recover it. 00:37:39.837 [2024-12-16 12:59:05.681277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.837 [2024-12-16 12:59:05.681318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.837 qpair failed and we were unable to recover it. 00:37:39.837 [2024-12-16 12:59:05.681620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.837 [2024-12-16 12:59:05.681660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.837 qpair failed and we were unable to recover it. 00:37:39.837 [2024-12-16 12:59:05.682011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.837 [2024-12-16 12:59:05.682051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.837 qpair failed and we were unable to recover it. 00:37:39.837 [2024-12-16 12:59:05.682349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.837 [2024-12-16 12:59:05.682390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.837 qpair failed and we were unable to recover it. 00:37:39.837 [2024-12-16 12:59:05.682708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.837 [2024-12-16 12:59:05.682748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.837 qpair failed and we were unable to recover it. 00:37:39.837 [2024-12-16 12:59:05.683002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.837 [2024-12-16 12:59:05.683042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.837 qpair failed and we were unable to recover it. 
00:37:39.837 [2024-12-16 12:59:05.683401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.837 [2024-12-16 12:59:05.683442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.837 qpair failed and we were unable to recover it. 00:37:39.837 [2024-12-16 12:59:05.683745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.837 [2024-12-16 12:59:05.683784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.838 qpair failed and we were unable to recover it. 00:37:39.838 [2024-12-16 12:59:05.684011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.838 [2024-12-16 12:59:05.684051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.838 qpair failed and we were unable to recover it. 00:37:39.838 [2024-12-16 12:59:05.684378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.838 [2024-12-16 12:59:05.684418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.838 qpair failed and we were unable to recover it. 00:37:39.838 [2024-12-16 12:59:05.684663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.838 [2024-12-16 12:59:05.684702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.838 qpair failed and we were unable to recover it. 00:37:39.838 [2024-12-16 12:59:05.685011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.838 [2024-12-16 12:59:05.685050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.838 qpair failed and we were unable to recover it. 00:37:39.838 [2024-12-16 12:59:05.685399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.838 [2024-12-16 12:59:05.685439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.838 qpair failed and we were unable to recover it. 00:37:39.838 [2024-12-16 12:59:05.685752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.838 [2024-12-16 12:59:05.685791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.838 qpair failed and we were unable to recover it. 00:37:39.838 [2024-12-16 12:59:05.686094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.838 [2024-12-16 12:59:05.686151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.838 qpair failed and we were unable to recover it. 00:37:39.838 [2024-12-16 12:59:05.686385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.838 [2024-12-16 12:59:05.686425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.838 qpair failed and we were unable to recover it. 
00:37:39.838 [2024-12-16 12:59:05.686677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.838 [2024-12-16 12:59:05.686716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.838 qpair failed and we were unable to recover it. 00:37:39.838 [2024-12-16 12:59:05.687037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.838 [2024-12-16 12:59:05.687085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.838 qpair failed and we were unable to recover it. 00:37:39.838 [2024-12-16 12:59:05.687425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.838 [2024-12-16 12:59:05.687465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.838 qpair failed and we were unable to recover it. 00:37:39.838 [2024-12-16 12:59:05.687779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.838 [2024-12-16 12:59:05.687818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.838 qpair failed and we were unable to recover it. 00:37:39.838 [2024-12-16 12:59:05.688108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.838 [2024-12-16 12:59:05.688161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.838 qpair failed and we were unable to recover it. 00:37:39.838 [2024-12-16 12:59:05.688467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.838 [2024-12-16 12:59:05.688507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.838 qpair failed and we were unable to recover it. 00:37:39.838 [2024-12-16 12:59:05.688850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.838 [2024-12-16 12:59:05.688890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.838 qpair failed and we were unable to recover it. 00:37:39.838 [2024-12-16 12:59:05.689188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.838 [2024-12-16 12:59:05.689229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.838 qpair failed and we were unable to recover it. 00:37:39.838 [2024-12-16 12:59:05.689463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.838 [2024-12-16 12:59:05.689503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.838 qpair failed and we were unable to recover it. 00:37:39.838 [2024-12-16 12:59:05.689721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.838 [2024-12-16 12:59:05.689759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.838 qpair failed and we were unable to recover it. 
00:37:39.838 [2024-12-16 12:59:05.689940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.838 [2024-12-16 12:59:05.689984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.838 qpair failed and we were unable to recover it. 00:37:39.838 [2024-12-16 12:59:05.690246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.838 [2024-12-16 12:59:05.690288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.838 qpair failed and we were unable to recover it. 00:37:39.838 [2024-12-16 12:59:05.690595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.838 [2024-12-16 12:59:05.690634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.838 qpair failed and we were unable to recover it. 00:37:39.838 [2024-12-16 12:59:05.690923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.838 [2024-12-16 12:59:05.690962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.838 qpair failed and we were unable to recover it. 00:37:39.838 [2024-12-16 12:59:05.691286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.838 [2024-12-16 12:59:05.691328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.838 qpair failed and we were unable to recover it. 00:37:39.838 [2024-12-16 12:59:05.691665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.838 [2024-12-16 12:59:05.691705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.838 qpair failed and we were unable to recover it. 00:37:39.838 [2024-12-16 12:59:05.692031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.838 [2024-12-16 12:59:05.692071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.838 qpair failed and we were unable to recover it. 00:37:39.838 [2024-12-16 12:59:05.692407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.838 [2024-12-16 12:59:05.692449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.838 qpair failed and we were unable to recover it. 00:37:39.838 [2024-12-16 12:59:05.692744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.838 [2024-12-16 12:59:05.692784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.838 qpair failed and we were unable to recover it. 00:37:39.838 [2024-12-16 12:59:05.693088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.838 [2024-12-16 12:59:05.693143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.838 qpair failed and we were unable to recover it. 
00:37:39.838 [2024-12-16 12:59:05.693373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.838 [2024-12-16 12:59:05.693414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.838 qpair failed and we were unable to recover it. 00:37:39.838 [2024-12-16 12:59:05.693716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.838 [2024-12-16 12:59:05.693755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.838 qpair failed and we were unable to recover it. 00:37:39.838 [2024-12-16 12:59:05.693967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.838 [2024-12-16 12:59:05.694007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.838 qpair failed and we were unable to recover it. 00:37:39.838 [2024-12-16 12:59:05.694339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.838 [2024-12-16 12:59:05.694380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.838 qpair failed and we were unable to recover it. 00:37:39.838 [2024-12-16 12:59:05.694615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.838 [2024-12-16 12:59:05.694655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.838 qpair failed and we were unable to recover it. 00:37:39.838 [2024-12-16 12:59:05.694961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.838 [2024-12-16 12:59:05.695001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.838 qpair failed and we were unable to recover it. 00:37:39.838 [2024-12-16 12:59:05.695348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.838 [2024-12-16 12:59:05.695390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.838 qpair failed and we were unable to recover it. 00:37:39.838 [2024-12-16 12:59:05.695681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.838 [2024-12-16 12:59:05.695721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.838 qpair failed and we were unable to recover it. 00:37:39.838 [2024-12-16 12:59:05.696018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.838 [2024-12-16 12:59:05.696058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.838 qpair failed and we were unable to recover it. 00:37:39.838 [2024-12-16 12:59:05.696311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.838 [2024-12-16 12:59:05.696352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.838 qpair failed and we were unable to recover it. 
00:37:39.838 [2024-12-16 12:59:05.696668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.838 [2024-12-16 12:59:05.696707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.838 qpair failed and we were unable to recover it. 00:37:39.839 [2024-12-16 12:59:05.697026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.839 [2024-12-16 12:59:05.697065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.839 qpair failed and we were unable to recover it. 00:37:39.839 [2024-12-16 12:59:05.697385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.839 [2024-12-16 12:59:05.697425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.839 qpair failed and we were unable to recover it. 00:37:39.839 [2024-12-16 12:59:05.697727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.839 [2024-12-16 12:59:05.697768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.839 qpair failed and we were unable to recover it. 00:37:39.839 [2024-12-16 12:59:05.698050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.839 [2024-12-16 12:59:05.698090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.839 qpair failed and we were unable to recover it. 00:37:39.839 [2024-12-16 12:59:05.698371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.839 [2024-12-16 12:59:05.698410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.839 qpair failed and we were unable to recover it. 00:37:39.839 [2024-12-16 12:59:05.698712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.839 [2024-12-16 12:59:05.698752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.839 qpair failed and we were unable to recover it. 00:37:39.839 [2024-12-16 12:59:05.699082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.839 [2024-12-16 12:59:05.699147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.839 qpair failed and we were unable to recover it. 00:37:39.839 [2024-12-16 12:59:05.699364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.839 [2024-12-16 12:59:05.699404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.839 qpair failed and we were unable to recover it. 00:37:39.839 [2024-12-16 12:59:05.699710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.839 [2024-12-16 12:59:05.699750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:39.839 qpair failed and we were unable to recover it. 
00:37:39.839 [2024-12-16 12:59:05.699976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:37:39.839 [2024-12-16 12:59:05.700016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 
00:37:39.839 qpair failed and we were unable to recover it. 
00:37:39.839 [last three-line sequence repeated 100+ times for tqpair=0x7f84e4000b90, timestamps 12:59:05.700343 through 12:59:05.735399, every attempt failing with errno = 111 against addr=10.0.0.2, port=4420; duplicate entries elided]
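errno = 111 here is ECONNREFUSED on Linux: each connect() attempt reaches 10.0.0.2, but nothing is accepting on NVMe/TCP port 4420, so the kernel reports a refused connection and the host gives up on the qpair. A minimal standalone sketch of that failure mode follows; it is an ordinary blocking connect(), not SPDK's actual posix_sock_create() path, which sits behind SPDK's own socket abstraction.

    /* Minimal sketch of the failure above: a plain blocking connect()
     * to 10.0.0.2:4420. With no NVMe/TCP target listening there, the
     * call fails and errno is 111 (ECONNREFUSED). Illustrative only;
     * not SPDK's posix_sock_create(). */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in sa = {0};
        sa.sin_family = AF_INET;
        sa.sin_port = htons(4420);           /* NVMe/TCP port from the log */
        if (inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr) != 1) {
            fprintf(stderr, "bad address\n");
            close(fd);
            return 1;
        }

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
            /* With no listener this prints:
             * connect() failed, errno = 111 (Connection refused) */
            fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                    errno, strerror(errno));
            close(fd);
            return 1;
        }

        close(fd);
        return 0;
    }

Compiled with cc and run while no target is listening on 10.0.0.2:4420, it prints the same "connect() failed, errno = 111" seen in the log lines above.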
00:37:39.842 [2024-12-16 12:59:05.735702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:37:39.842 [2024-12-16 12:59:05.735780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 
00:37:39.842 qpair failed and we were unable to recover it. 
00:37:39.842 [last three-line sequence repeated 100+ times for tqpair=0x1588110, timestamps 12:59:05.736019 through 12:59:05.763956, every attempt failing with errno = 111 against addr=10.0.0.2, port=4420; duplicate entries elided]
00:37:39.844 [2024-12-16 12:59:05.764235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.844 [2024-12-16 12:59:05.764270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.845 qpair failed and we were unable to recover it. 00:37:39.845 [2024-12-16 12:59:05.764466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.845 [2024-12-16 12:59:05.764498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.845 qpair failed and we were unable to recover it. 00:37:39.845 [2024-12-16 12:59:05.764693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.845 [2024-12-16 12:59:05.764726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.845 qpair failed and we were unable to recover it. 00:37:39.845 [2024-12-16 12:59:05.765001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.845 [2024-12-16 12:59:05.765034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.845 qpair failed and we were unable to recover it. 00:37:39.845 [2024-12-16 12:59:05.765239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.845 [2024-12-16 12:59:05.765273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.845 qpair failed and we were unable to recover it. 00:37:39.845 [2024-12-16 12:59:05.765469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.845 [2024-12-16 12:59:05.765501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.845 qpair failed and we were unable to recover it. 00:37:39.845 [2024-12-16 12:59:05.765717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.845 [2024-12-16 12:59:05.765750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.845 qpair failed and we were unable to recover it. 00:37:39.845 [2024-12-16 12:59:05.765958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.845 [2024-12-16 12:59:05.765990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.845 qpair failed and we were unable to recover it. 00:37:39.845 [2024-12-16 12:59:05.766191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.845 [2024-12-16 12:59:05.766224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.845 qpair failed and we were unable to recover it. 00:37:39.845 [2024-12-16 12:59:05.766476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.845 [2024-12-16 12:59:05.766509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.845 qpair failed and we were unable to recover it. 
00:37:39.845 [2024-12-16 12:59:05.766813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.845 [2024-12-16 12:59:05.766846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.845 qpair failed and we were unable to recover it. 00:37:39.845 [2024-12-16 12:59:05.767137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.845 [2024-12-16 12:59:05.767170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.845 qpair failed and we were unable to recover it. 00:37:39.845 [2024-12-16 12:59:05.767353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.845 [2024-12-16 12:59:05.767386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.845 qpair failed and we were unable to recover it. 00:37:39.845 [2024-12-16 12:59:05.767567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.845 [2024-12-16 12:59:05.767600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.845 qpair failed and we were unable to recover it. 00:37:39.845 [2024-12-16 12:59:05.767852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.845 [2024-12-16 12:59:05.767884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.845 qpair failed and we were unable to recover it. 00:37:39.845 [2024-12-16 12:59:05.767999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.845 [2024-12-16 12:59:05.768031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.845 qpair failed and we were unable to recover it. 00:37:39.845 [2024-12-16 12:59:05.768251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.845 [2024-12-16 12:59:05.768285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.845 qpair failed and we were unable to recover it. 00:37:39.845 [2024-12-16 12:59:05.768562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.845 [2024-12-16 12:59:05.768595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.845 qpair failed and we were unable to recover it. 00:37:39.845 [2024-12-16 12:59:05.768820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.845 [2024-12-16 12:59:05.768852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.845 qpair failed and we were unable to recover it. 00:37:39.845 [2024-12-16 12:59:05.769035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.845 [2024-12-16 12:59:05.769067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.845 qpair failed and we were unable to recover it. 
00:37:39.845 [2024-12-16 12:59:05.769338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.845 [2024-12-16 12:59:05.769373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.845 qpair failed and we were unable to recover it. 00:37:39.845 [2024-12-16 12:59:05.769582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.845 [2024-12-16 12:59:05.769614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.845 qpair failed and we were unable to recover it. 00:37:39.845 [2024-12-16 12:59:05.769901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.845 [2024-12-16 12:59:05.769933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.845 qpair failed and we were unable to recover it. 00:37:39.845 [2024-12-16 12:59:05.770085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.845 [2024-12-16 12:59:05.770127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.845 qpair failed and we were unable to recover it. 00:37:39.845 [2024-12-16 12:59:05.770244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.845 [2024-12-16 12:59:05.770279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.845 qpair failed and we were unable to recover it. 00:37:39.845 [2024-12-16 12:59:05.770551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.845 [2024-12-16 12:59:05.770584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.845 qpair failed and we were unable to recover it. 00:37:39.845 [2024-12-16 12:59:05.770902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.845 [2024-12-16 12:59:05.770935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.845 qpair failed and we were unable to recover it. 00:37:39.845 [2024-12-16 12:59:05.771210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.845 [2024-12-16 12:59:05.771246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.845 qpair failed and we were unable to recover it. 00:37:39.845 [2024-12-16 12:59:05.771399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.845 [2024-12-16 12:59:05.771432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.845 qpair failed and we were unable to recover it. 00:37:39.846 [2024-12-16 12:59:05.771659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.846 [2024-12-16 12:59:05.771693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.846 qpair failed and we were unable to recover it. 
00:37:39.846 [2024-12-16 12:59:05.771946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.846 [2024-12-16 12:59:05.771979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.846 qpair failed and we were unable to recover it. 00:37:39.846 [2024-12-16 12:59:05.772133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.846 [2024-12-16 12:59:05.772168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.846 qpair failed and we were unable to recover it. 00:37:39.846 [2024-12-16 12:59:05.772422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.846 [2024-12-16 12:59:05.772455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.846 qpair failed and we were unable to recover it. 00:37:39.846 [2024-12-16 12:59:05.772745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.846 [2024-12-16 12:59:05.772777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.846 qpair failed and we were unable to recover it. 00:37:39.846 [2024-12-16 12:59:05.773055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.846 [2024-12-16 12:59:05.773088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.846 qpair failed and we were unable to recover it. 00:37:39.846 [2024-12-16 12:59:05.773306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.846 [2024-12-16 12:59:05.773341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.846 qpair failed and we were unable to recover it. 00:37:39.846 [2024-12-16 12:59:05.773617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.846 [2024-12-16 12:59:05.773649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.846 qpair failed and we were unable to recover it. 00:37:39.846 [2024-12-16 12:59:05.773859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.846 [2024-12-16 12:59:05.773892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.846 qpair failed and we were unable to recover it. 00:37:39.846 [2024-12-16 12:59:05.774071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.846 [2024-12-16 12:59:05.774105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.846 qpair failed and we were unable to recover it. 00:37:39.846 [2024-12-16 12:59:05.774351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.846 [2024-12-16 12:59:05.774384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.846 qpair failed and we were unable to recover it. 
00:37:39.846 [2024-12-16 12:59:05.774687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.846 [2024-12-16 12:59:05.774720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.846 qpair failed and we were unable to recover it. 00:37:39.846 [2024-12-16 12:59:05.774985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.846 [2024-12-16 12:59:05.775017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.846 qpair failed and we were unable to recover it. 00:37:39.846 [2024-12-16 12:59:05.775272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.846 [2024-12-16 12:59:05.775306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.846 qpair failed and we were unable to recover it. 00:37:39.846 [2024-12-16 12:59:05.775582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.846 [2024-12-16 12:59:05.775615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.846 qpair failed and we were unable to recover it. 00:37:39.846 [2024-12-16 12:59:05.775756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.846 [2024-12-16 12:59:05.775789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.846 qpair failed and we were unable to recover it. 00:37:39.846 [2024-12-16 12:59:05.776042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.846 [2024-12-16 12:59:05.776075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.846 qpair failed and we were unable to recover it. 00:37:39.846 [2024-12-16 12:59:05.776374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.846 [2024-12-16 12:59:05.776408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.846 qpair failed and we were unable to recover it. 00:37:39.846 [2024-12-16 12:59:05.776702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.846 [2024-12-16 12:59:05.776740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.846 qpair failed and we were unable to recover it. 00:37:39.846 [2024-12-16 12:59:05.776961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.846 [2024-12-16 12:59:05.776994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.846 qpair failed and we were unable to recover it. 00:37:39.846 [2024-12-16 12:59:05.777278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.846 [2024-12-16 12:59:05.777311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.846 qpair failed and we were unable to recover it. 
00:37:39.846 [2024-12-16 12:59:05.777506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.846 [2024-12-16 12:59:05.777538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.846 qpair failed and we were unable to recover it. 00:37:39.846 [2024-12-16 12:59:05.777747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.846 [2024-12-16 12:59:05.777779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.846 qpair failed and we were unable to recover it. 00:37:39.846 [2024-12-16 12:59:05.777979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.846 [2024-12-16 12:59:05.778011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.846 qpair failed and we were unable to recover it. 00:37:39.846 [2024-12-16 12:59:05.778258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.846 [2024-12-16 12:59:05.778292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.846 qpair failed and we were unable to recover it. 00:37:39.846 [2024-12-16 12:59:05.778594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.846 [2024-12-16 12:59:05.778627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.846 qpair failed and we were unable to recover it. 00:37:39.846 [2024-12-16 12:59:05.778928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.846 [2024-12-16 12:59:05.778962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.846 qpair failed and we were unable to recover it. 00:37:39.846 [2024-12-16 12:59:05.779249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.846 [2024-12-16 12:59:05.779284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.846 qpair failed and we were unable to recover it. 00:37:39.847 [2024-12-16 12:59:05.779468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.847 [2024-12-16 12:59:05.779501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.847 qpair failed and we were unable to recover it. 00:37:39.847 [2024-12-16 12:59:05.779784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.847 [2024-12-16 12:59:05.779817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.847 qpair failed and we were unable to recover it. 00:37:39.847 [2024-12-16 12:59:05.779966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.847 [2024-12-16 12:59:05.779999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.847 qpair failed and we were unable to recover it. 
00:37:39.847 [2024-12-16 12:59:05.780195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.847 [2024-12-16 12:59:05.780229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.847 qpair failed and we were unable to recover it. 00:37:39.847 [2024-12-16 12:59:05.780421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.847 [2024-12-16 12:59:05.780454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.847 qpair failed and we were unable to recover it. 00:37:39.847 [2024-12-16 12:59:05.780652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.847 [2024-12-16 12:59:05.780686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.847 qpair failed and we were unable to recover it. 00:37:39.847 [2024-12-16 12:59:05.780939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.847 [2024-12-16 12:59:05.780971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.847 qpair failed and we were unable to recover it. 00:37:39.847 [2024-12-16 12:59:05.781099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.847 [2024-12-16 12:59:05.781143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.847 qpair failed and we were unable to recover it. 00:37:39.847 [2024-12-16 12:59:05.781418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.847 [2024-12-16 12:59:05.781452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.847 qpair failed and we were unable to recover it. 00:37:39.847 [2024-12-16 12:59:05.781748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.847 [2024-12-16 12:59:05.781780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.847 qpair failed and we were unable to recover it. 00:37:39.847 [2024-12-16 12:59:05.781994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.847 [2024-12-16 12:59:05.782026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.847 qpair failed and we were unable to recover it. 00:37:39.847 [2024-12-16 12:59:05.782346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.847 [2024-12-16 12:59:05.782380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.847 qpair failed and we were unable to recover it. 00:37:39.847 [2024-12-16 12:59:05.782587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.847 [2024-12-16 12:59:05.782620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.847 qpair failed and we were unable to recover it. 
00:37:39.847 [2024-12-16 12:59:05.782899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.847 [2024-12-16 12:59:05.782932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.847 qpair failed and we were unable to recover it. 00:37:39.847 [2024-12-16 12:59:05.783140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.847 [2024-12-16 12:59:05.783175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.847 qpair failed and we were unable to recover it. 00:37:39.847 [2024-12-16 12:59:05.783302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.847 [2024-12-16 12:59:05.783334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.847 qpair failed and we were unable to recover it. 00:37:39.847 [2024-12-16 12:59:05.783460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.847 [2024-12-16 12:59:05.783491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.847 qpair failed and we were unable to recover it. 00:37:39.847 [2024-12-16 12:59:05.783702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.847 [2024-12-16 12:59:05.783739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.847 qpair failed and we were unable to recover it. 00:37:39.847 [2024-12-16 12:59:05.783937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.847 [2024-12-16 12:59:05.783968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.847 qpair failed and we were unable to recover it. 00:37:39.847 [2024-12-16 12:59:05.784234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.847 [2024-12-16 12:59:05.784268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.847 qpair failed and we were unable to recover it. 00:37:39.847 [2024-12-16 12:59:05.784563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.847 [2024-12-16 12:59:05.784595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.847 qpair failed and we were unable to recover it. 00:37:39.847 [2024-12-16 12:59:05.784802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.847 [2024-12-16 12:59:05.784834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.847 qpair failed and we were unable to recover it. 00:37:39.847 [2024-12-16 12:59:05.785138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.847 [2024-12-16 12:59:05.785172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.847 qpair failed and we were unable to recover it. 
00:37:39.847 [2024-12-16 12:59:05.785380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.847 [2024-12-16 12:59:05.785412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.847 qpair failed and we were unable to recover it. 00:37:39.847 [2024-12-16 12:59:05.785590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.847 [2024-12-16 12:59:05.785623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.847 qpair failed and we were unable to recover it. 00:37:39.847 [2024-12-16 12:59:05.785850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.847 [2024-12-16 12:59:05.785884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.847 qpair failed and we were unable to recover it. 00:37:39.847 [2024-12-16 12:59:05.786152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.847 [2024-12-16 12:59:05.786188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.847 qpair failed and we were unable to recover it. 00:37:39.847 [2024-12-16 12:59:05.786456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.847 [2024-12-16 12:59:05.786490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.847 qpair failed and we were unable to recover it. 00:37:39.848 [2024-12-16 12:59:05.786778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.848 [2024-12-16 12:59:05.786812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.848 qpair failed and we were unable to recover it. 00:37:39.848 [2024-12-16 12:59:05.787013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.848 [2024-12-16 12:59:05.787046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.848 qpair failed and we were unable to recover it. 00:37:39.848 [2024-12-16 12:59:05.787329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.848 [2024-12-16 12:59:05.787363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.848 qpair failed and we were unable to recover it. 00:37:39.848 [2024-12-16 12:59:05.787645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.848 [2024-12-16 12:59:05.787679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.848 qpair failed and we were unable to recover it. 00:37:39.848 [2024-12-16 12:59:05.787869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.848 [2024-12-16 12:59:05.787902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.848 qpair failed and we were unable to recover it. 
00:37:39.848 [2024-12-16 12:59:05.788180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.848 [2024-12-16 12:59:05.788214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.848 qpair failed and we were unable to recover it. 00:37:39.848 [2024-12-16 12:59:05.788493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.848 [2024-12-16 12:59:05.788526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.848 qpair failed and we were unable to recover it. 00:37:39.848 [2024-12-16 12:59:05.788811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.848 [2024-12-16 12:59:05.788843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.848 qpair failed and we were unable to recover it. 00:37:39.848 [2024-12-16 12:59:05.789135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.848 [2024-12-16 12:59:05.789168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.848 qpair failed and we were unable to recover it. 00:37:39.848 [2024-12-16 12:59:05.789370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.848 [2024-12-16 12:59:05.789404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.848 qpair failed and we were unable to recover it. 00:37:39.848 [2024-12-16 12:59:05.789699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.848 [2024-12-16 12:59:05.789732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.848 qpair failed and we were unable to recover it. 00:37:39.848 [2024-12-16 12:59:05.789992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.848 [2024-12-16 12:59:05.790024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.848 qpair failed and we were unable to recover it. 00:37:39.848 [2024-12-16 12:59:05.790315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.848 [2024-12-16 12:59:05.790348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.848 qpair failed and we were unable to recover it. 00:37:39.848 [2024-12-16 12:59:05.790556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.848 [2024-12-16 12:59:05.790589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.848 qpair failed and we were unable to recover it. 00:37:39.848 [2024-12-16 12:59:05.790852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.848 [2024-12-16 12:59:05.790884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.848 qpair failed and we were unable to recover it. 
00:37:39.848 [2024-12-16 12:59:05.791110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.848 [2024-12-16 12:59:05.791155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.848 qpair failed and we were unable to recover it. 00:37:39.848 [2024-12-16 12:59:05.791339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.848 [2024-12-16 12:59:05.791373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.848 qpair failed and we were unable to recover it. 00:37:39.848 [2024-12-16 12:59:05.791581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.848 [2024-12-16 12:59:05.791614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.848 qpair failed and we were unable to recover it. 00:37:39.848 [2024-12-16 12:59:05.791909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.848 [2024-12-16 12:59:05.791942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.848 qpair failed and we were unable to recover it. 00:37:39.848 [2024-12-16 12:59:05.792211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.848 [2024-12-16 12:59:05.792246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.848 qpair failed and we were unable to recover it. 00:37:39.848 [2024-12-16 12:59:05.792468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.848 [2024-12-16 12:59:05.792500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.848 qpair failed and we were unable to recover it. 00:37:39.848 [2024-12-16 12:59:05.792778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.848 [2024-12-16 12:59:05.792811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.848 qpair failed and we were unable to recover it. 00:37:39.848 [2024-12-16 12:59:05.793009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.848 [2024-12-16 12:59:05.793042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.848 qpair failed and we were unable to recover it. 00:37:39.848 [2024-12-16 12:59:05.793239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.848 [2024-12-16 12:59:05.793274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.848 qpair failed and we were unable to recover it. 00:37:39.848 [2024-12-16 12:59:05.793525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.848 [2024-12-16 12:59:05.793558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.848 qpair failed and we were unable to recover it. 
00:37:39.848 [2024-12-16 12:59:05.793864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.848 [2024-12-16 12:59:05.793897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.848 qpair failed and we were unable to recover it. 00:37:39.848 [2024-12-16 12:59:05.794164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.848 [2024-12-16 12:59:05.794198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.848 qpair failed and we were unable to recover it. 00:37:39.848 [2024-12-16 12:59:05.794456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.848 [2024-12-16 12:59:05.794489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.848 qpair failed and we were unable to recover it. 00:37:39.848 [2024-12-16 12:59:05.794694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.849 [2024-12-16 12:59:05.794726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.849 qpair failed and we were unable to recover it. 00:37:39.849 [2024-12-16 12:59:05.795001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.849 [2024-12-16 12:59:05.795034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.849 qpair failed and we were unable to recover it. 00:37:39.849 [2024-12-16 12:59:05.795365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.849 [2024-12-16 12:59:05.795401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.849 qpair failed and we were unable to recover it. 00:37:39.849 [2024-12-16 12:59:05.795581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.849 [2024-12-16 12:59:05.795613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.849 qpair failed and we were unable to recover it. 00:37:39.849 [2024-12-16 12:59:05.795913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.849 [2024-12-16 12:59:05.795946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.849 qpair failed and we were unable to recover it. 00:37:39.849 [2024-12-16 12:59:05.796214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.849 [2024-12-16 12:59:05.796250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.849 qpair failed and we were unable to recover it. 00:37:39.849 [2024-12-16 12:59:05.796443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.849 [2024-12-16 12:59:05.796476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.849 qpair failed and we were unable to recover it. 
00:37:39.849 [2024-12-16 12:59:05.796601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.849 [2024-12-16 12:59:05.796634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.849 qpair failed and we were unable to recover it. 00:37:39.849 [2024-12-16 12:59:05.796846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.849 [2024-12-16 12:59:05.796879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.849 qpair failed and we were unable to recover it. 00:37:39.849 [2024-12-16 12:59:05.797140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.849 [2024-12-16 12:59:05.797174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.849 qpair failed and we were unable to recover it. 00:37:39.849 [2024-12-16 12:59:05.797473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.849 [2024-12-16 12:59:05.797506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.849 qpair failed and we were unable to recover it. 00:37:39.849 [2024-12-16 12:59:05.797724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.849 [2024-12-16 12:59:05.797757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.849 qpair failed and we were unable to recover it. 00:37:39.849 [2024-12-16 12:59:05.797951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.849 [2024-12-16 12:59:05.797984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.849 qpair failed and we were unable to recover it. 00:37:39.849 [2024-12-16 12:59:05.798221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.849 [2024-12-16 12:59:05.798255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.849 qpair failed and we were unable to recover it. 00:37:39.849 [2024-12-16 12:59:05.798472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.849 [2024-12-16 12:59:05.798506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.849 qpair failed and we were unable to recover it. 00:37:39.849 [2024-12-16 12:59:05.798782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.849 [2024-12-16 12:59:05.798815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.849 qpair failed and we were unable to recover it. 00:37:39.849 [2024-12-16 12:59:05.799105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.849 [2024-12-16 12:59:05.799165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.849 qpair failed and we were unable to recover it. 
00:37:39.849 [2024-12-16 12:59:05.799350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.849 [2024-12-16 12:59:05.799384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:39.849 qpair failed and we were unable to recover it.
00:37:39.857 [2024-12-16 12:59:05.799585 through 12:59:05.852478] ... the same three-line sequence (posix_sock_create connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously; duplicate entries collapsed ...
00:37:39.857 [2024-12-16 12:59:05.852730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.857 [2024-12-16 12:59:05.852762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.857 qpair failed and we were unable to recover it. 00:37:39.857 [2024-12-16 12:59:05.852898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.857 [2024-12-16 12:59:05.852929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.857 qpair failed and we were unable to recover it. 00:37:39.857 [2024-12-16 12:59:05.853238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.857 [2024-12-16 12:59:05.853271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.857 qpair failed and we were unable to recover it. 00:37:39.857 [2024-12-16 12:59:05.853379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.857 [2024-12-16 12:59:05.853411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.857 qpair failed and we were unable to recover it. 00:37:39.857 [2024-12-16 12:59:05.853678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.857 [2024-12-16 12:59:05.853710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.857 qpair failed and we were unable to recover it. 00:37:39.857 [2024-12-16 12:59:05.853925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.857 [2024-12-16 12:59:05.853957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.857 qpair failed and we were unable to recover it. 00:37:39.857 [2024-12-16 12:59:05.854171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.857 [2024-12-16 12:59:05.854206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.857 qpair failed and we were unable to recover it. 00:37:39.857 [2024-12-16 12:59:05.854400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.857 [2024-12-16 12:59:05.854433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.857 qpair failed and we were unable to recover it. 00:37:39.857 [2024-12-16 12:59:05.854642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.857 [2024-12-16 12:59:05.854675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.857 qpair failed and we were unable to recover it. 00:37:39.857 [2024-12-16 12:59:05.854852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.857 [2024-12-16 12:59:05.854884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.857 qpair failed and we were unable to recover it. 
00:37:39.857 [2024-12-16 12:59:05.855127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.857 [2024-12-16 12:59:05.855160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.857 qpair failed and we were unable to recover it. 00:37:39.857 [2024-12-16 12:59:05.855359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.857 [2024-12-16 12:59:05.855391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.857 qpair failed and we were unable to recover it. 00:37:39.857 [2024-12-16 12:59:05.855501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.857 [2024-12-16 12:59:05.855534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.857 qpair failed and we were unable to recover it. 00:37:39.857 [2024-12-16 12:59:05.855782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.858 [2024-12-16 12:59:05.855816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.858 qpair failed and we were unable to recover it. 00:37:39.858 [2024-12-16 12:59:05.856008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.858 [2024-12-16 12:59:05.856040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.858 qpair failed and we were unable to recover it. 00:37:39.858 [2024-12-16 12:59:05.856344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.858 [2024-12-16 12:59:05.856378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.858 qpair failed and we were unable to recover it. 00:37:39.858 [2024-12-16 12:59:05.856525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.858 [2024-12-16 12:59:05.856558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.858 qpair failed and we were unable to recover it. 00:37:39.858 [2024-12-16 12:59:05.856773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.858 [2024-12-16 12:59:05.856805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.858 qpair failed and we were unable to recover it. 00:37:39.858 [2024-12-16 12:59:05.856955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.858 [2024-12-16 12:59:05.856987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.858 qpair failed and we were unable to recover it. 00:37:39.858 [2024-12-16 12:59:05.857106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.858 [2024-12-16 12:59:05.857156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.858 qpair failed and we were unable to recover it. 
00:37:39.858 [2024-12-16 12:59:05.857352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.858 [2024-12-16 12:59:05.857385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.858 qpair failed and we were unable to recover it. 00:37:39.858 [2024-12-16 12:59:05.857581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.858 [2024-12-16 12:59:05.857614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.858 qpair failed and we were unable to recover it. 00:37:39.858 [2024-12-16 12:59:05.857804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.858 [2024-12-16 12:59:05.857836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.858 qpair failed and we were unable to recover it. 00:37:39.858 [2024-12-16 12:59:05.857954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.858 [2024-12-16 12:59:05.857984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.858 qpair failed and we were unable to recover it. 00:37:39.858 [2024-12-16 12:59:05.858248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.858 [2024-12-16 12:59:05.858282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.858 qpair failed and we were unable to recover it. 00:37:39.858 [2024-12-16 12:59:05.858481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.858 [2024-12-16 12:59:05.858512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.858 qpair failed and we were unable to recover it. 00:37:39.858 [2024-12-16 12:59:05.858740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.858 [2024-12-16 12:59:05.858772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.858 qpair failed and we were unable to recover it. 00:37:39.858 [2024-12-16 12:59:05.858992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.858 [2024-12-16 12:59:05.859024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.858 qpair failed and we were unable to recover it. 00:37:39.858 [2024-12-16 12:59:05.859142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.858 [2024-12-16 12:59:05.859174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.858 qpair failed and we were unable to recover it. 00:37:39.858 [2024-12-16 12:59:05.859312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.858 [2024-12-16 12:59:05.859344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.858 qpair failed and we were unable to recover it. 
00:37:39.858 [2024-12-16 12:59:05.859622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.858 [2024-12-16 12:59:05.859654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.858 qpair failed and we were unable to recover it. 00:37:39.858 [2024-12-16 12:59:05.859865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.858 [2024-12-16 12:59:05.859897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.858 qpair failed and we were unable to recover it. 00:37:39.858 [2024-12-16 12:59:05.860166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.858 [2024-12-16 12:59:05.860200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.858 qpair failed and we were unable to recover it. 00:37:39.858 [2024-12-16 12:59:05.860334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.858 [2024-12-16 12:59:05.860367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.858 qpair failed and we were unable to recover it. 00:37:39.858 [2024-12-16 12:59:05.860497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.858 [2024-12-16 12:59:05.860528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.858 qpair failed and we were unable to recover it. 00:37:39.858 [2024-12-16 12:59:05.860714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.858 [2024-12-16 12:59:05.860746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.858 qpair failed and we were unable to recover it. 00:37:39.858 [2024-12-16 12:59:05.860853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.858 [2024-12-16 12:59:05.860886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.858 qpair failed and we were unable to recover it. 00:37:39.858 [2024-12-16 12:59:05.861072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.858 [2024-12-16 12:59:05.861104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.858 qpair failed and we were unable to recover it. 00:37:39.858 [2024-12-16 12:59:05.861244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.858 [2024-12-16 12:59:05.861276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.858 qpair failed and we were unable to recover it. 00:37:39.858 [2024-12-16 12:59:05.861476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.858 [2024-12-16 12:59:05.861510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.858 qpair failed and we were unable to recover it. 
00:37:39.858 [2024-12-16 12:59:05.861636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.858 [2024-12-16 12:59:05.861669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.858 qpair failed and we were unable to recover it. 00:37:39.859 [2024-12-16 12:59:05.861870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.859 [2024-12-16 12:59:05.861902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.859 qpair failed and we were unable to recover it. 00:37:39.859 [2024-12-16 12:59:05.862182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.859 [2024-12-16 12:59:05.862216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.859 qpair failed and we were unable to recover it. 00:37:39.859 [2024-12-16 12:59:05.862342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.859 [2024-12-16 12:59:05.862372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.859 qpair failed and we were unable to recover it. 00:37:39.859 [2024-12-16 12:59:05.862512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.859 [2024-12-16 12:59:05.862544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.859 qpair failed and we were unable to recover it. 00:37:39.859 [2024-12-16 12:59:05.862765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.859 [2024-12-16 12:59:05.862798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.859 qpair failed and we were unable to recover it. 00:37:39.859 [2024-12-16 12:59:05.862947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.859 [2024-12-16 12:59:05.862985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.859 qpair failed and we were unable to recover it. 00:37:39.859 [2024-12-16 12:59:05.863188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.859 [2024-12-16 12:59:05.863221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.859 qpair failed and we were unable to recover it. 00:37:39.859 [2024-12-16 12:59:05.863473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.859 [2024-12-16 12:59:05.863506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.859 qpair failed and we were unable to recover it. 00:37:39.859 [2024-12-16 12:59:05.863687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.859 [2024-12-16 12:59:05.863719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.859 qpair failed and we were unable to recover it. 
00:37:39.859 [2024-12-16 12:59:05.863920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.859 [2024-12-16 12:59:05.863953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.859 qpair failed and we were unable to recover it. 00:37:39.859 [2024-12-16 12:59:05.864267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.859 [2024-12-16 12:59:05.864300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.859 qpair failed and we were unable to recover it. 00:37:39.859 [2024-12-16 12:59:05.864576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.859 [2024-12-16 12:59:05.864608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.859 qpair failed and we were unable to recover it. 00:37:39.859 [2024-12-16 12:59:05.864792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.859 [2024-12-16 12:59:05.864824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.859 qpair failed and we were unable to recover it. 00:37:39.859 [2024-12-16 12:59:05.864951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.859 [2024-12-16 12:59:05.864982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.859 qpair failed and we were unable to recover it. 00:37:39.859 [2024-12-16 12:59:05.865096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.859 [2024-12-16 12:59:05.865135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.859 qpair failed and we were unable to recover it. 00:37:39.859 [2024-12-16 12:59:05.865291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.859 [2024-12-16 12:59:05.865321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.859 qpair failed and we were unable to recover it. 00:37:39.859 [2024-12-16 12:59:05.865517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.859 [2024-12-16 12:59:05.865548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.859 qpair failed and we were unable to recover it. 00:37:39.859 [2024-12-16 12:59:05.865660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.859 [2024-12-16 12:59:05.865689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.859 qpair failed and we were unable to recover it. 00:37:39.859 [2024-12-16 12:59:05.865963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.859 [2024-12-16 12:59:05.865995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.859 qpair failed and we were unable to recover it. 
00:37:39.859 [2024-12-16 12:59:05.866218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.859 [2024-12-16 12:59:05.866251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.859 qpair failed and we were unable to recover it. 00:37:39.859 [2024-12-16 12:59:05.866431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.859 [2024-12-16 12:59:05.866461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.859 qpair failed and we were unable to recover it. 00:37:39.859 [2024-12-16 12:59:05.866642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.859 [2024-12-16 12:59:05.866674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.859 qpair failed and we were unable to recover it. 00:37:39.859 [2024-12-16 12:59:05.866803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.859 [2024-12-16 12:59:05.866836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.859 qpair failed and we were unable to recover it. 00:37:39.859 [2024-12-16 12:59:05.866980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.859 [2024-12-16 12:59:05.867012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:39.859 qpair failed and we were unable to recover it. 00:37:40.178 [2024-12-16 12:59:05.867230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.178 [2024-12-16 12:59:05.867263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.178 qpair failed and we were unable to recover it. 00:37:40.178 [2024-12-16 12:59:05.867375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.178 [2024-12-16 12:59:05.867407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.178 qpair failed and we were unable to recover it. 00:37:40.178 [2024-12-16 12:59:05.867590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.179 [2024-12-16 12:59:05.867623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.179 qpair failed and we were unable to recover it. 00:37:40.179 [2024-12-16 12:59:05.867738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.179 [2024-12-16 12:59:05.867780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.179 qpair failed and we were unable to recover it. 00:37:40.179 [2024-12-16 12:59:05.867910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.179 [2024-12-16 12:59:05.867941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.179 qpair failed and we were unable to recover it. 
00:37:40.179 [2024-12-16 12:59:05.868129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.179 [2024-12-16 12:59:05.868163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.179 qpair failed and we were unable to recover it. 00:37:40.179 [2024-12-16 12:59:05.868303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.179 [2024-12-16 12:59:05.868336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.179 qpair failed and we were unable to recover it. 00:37:40.179 [2024-12-16 12:59:05.868605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.179 [2024-12-16 12:59:05.868637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.179 qpair failed and we were unable to recover it. 00:37:40.179 [2024-12-16 12:59:05.868835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.179 [2024-12-16 12:59:05.868874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.179 qpair failed and we were unable to recover it. 00:37:40.179 [2024-12-16 12:59:05.869023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.179 [2024-12-16 12:59:05.869055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.179 qpair failed and we were unable to recover it. 00:37:40.179 [2024-12-16 12:59:05.869335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.179 [2024-12-16 12:59:05.869371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.179 qpair failed and we were unable to recover it. 00:37:40.179 [2024-12-16 12:59:05.869585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.179 [2024-12-16 12:59:05.869617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.179 qpair failed and we were unable to recover it. 00:37:40.179 [2024-12-16 12:59:05.869812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.179 [2024-12-16 12:59:05.869844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.179 qpair failed and we were unable to recover it. 00:37:40.179 [2024-12-16 12:59:05.869985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.179 [2024-12-16 12:59:05.870016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.179 qpair failed and we were unable to recover it. 00:37:40.179 [2024-12-16 12:59:05.870231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.179 [2024-12-16 12:59:05.870265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.179 qpair failed and we were unable to recover it. 
00:37:40.179 [2024-12-16 12:59:05.870450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.179 [2024-12-16 12:59:05.870482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.179 qpair failed and we were unable to recover it. 00:37:40.179 [2024-12-16 12:59:05.870608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.179 [2024-12-16 12:59:05.870652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.179 qpair failed and we were unable to recover it. 00:37:40.179 [2024-12-16 12:59:05.870935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.179 [2024-12-16 12:59:05.870971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.179 qpair failed and we were unable to recover it. 00:37:40.179 [2024-12-16 12:59:05.871192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.179 [2024-12-16 12:59:05.871230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.179 qpair failed and we were unable to recover it. 00:37:40.179 [2024-12-16 12:59:05.871435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.179 [2024-12-16 12:59:05.871467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.179 qpair failed and we were unable to recover it. 00:37:40.179 [2024-12-16 12:59:05.871720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.179 [2024-12-16 12:59:05.871753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.179 qpair failed and we were unable to recover it. 00:37:40.179 [2024-12-16 12:59:05.871952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.179 [2024-12-16 12:59:05.871984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.179 qpair failed and we were unable to recover it. 00:37:40.179 [2024-12-16 12:59:05.872251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.179 [2024-12-16 12:59:05.872285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.179 qpair failed and we were unable to recover it. 00:37:40.179 [2024-12-16 12:59:05.872464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.179 [2024-12-16 12:59:05.872496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.179 qpair failed and we were unable to recover it. 00:37:40.179 [2024-12-16 12:59:05.872683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.179 [2024-12-16 12:59:05.872716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.179 qpair failed and we were unable to recover it. 
00:37:40.179 [2024-12-16 12:59:05.872910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.179 [2024-12-16 12:59:05.872942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.179 qpair failed and we were unable to recover it. 00:37:40.179 [2024-12-16 12:59:05.873137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.179 [2024-12-16 12:59:05.873171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.179 qpair failed and we were unable to recover it. 00:37:40.179 [2024-12-16 12:59:05.873301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.179 [2024-12-16 12:59:05.873332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.179 qpair failed and we were unable to recover it. 00:37:40.179 [2024-12-16 12:59:05.873515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.179 [2024-12-16 12:59:05.873546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.179 qpair failed and we were unable to recover it. 00:37:40.179 [2024-12-16 12:59:05.873724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.179 [2024-12-16 12:59:05.873756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.179 qpair failed and we were unable to recover it. 00:37:40.179 [2024-12-16 12:59:05.873948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.179 [2024-12-16 12:59:05.873979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.179 qpair failed and we were unable to recover it. 00:37:40.179 [2024-12-16 12:59:05.874167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.179 [2024-12-16 12:59:05.874200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.179 qpair failed and we were unable to recover it. 00:37:40.179 [2024-12-16 12:59:05.874345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.179 [2024-12-16 12:59:05.874376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.179 qpair failed and we were unable to recover it. 00:37:40.179 [2024-12-16 12:59:05.874591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.179 [2024-12-16 12:59:05.874624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.179 qpair failed and we were unable to recover it. 00:37:40.179 [2024-12-16 12:59:05.874810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.179 [2024-12-16 12:59:05.874843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.179 qpair failed and we were unable to recover it. 
00:37:40.179 [2024-12-16 12:59:05.875033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.179 [2024-12-16 12:59:05.875066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.179 qpair failed and we were unable to recover it. 00:37:40.179 [2024-12-16 12:59:05.875286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.179 [2024-12-16 12:59:05.875321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.179 qpair failed and we were unable to recover it. 00:37:40.179 [2024-12-16 12:59:05.875505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.179 [2024-12-16 12:59:05.875536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.179 qpair failed and we were unable to recover it. 00:37:40.179 [2024-12-16 12:59:05.875659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.179 [2024-12-16 12:59:05.875690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.179 qpair failed and we were unable to recover it. 00:37:40.180 [2024-12-16 12:59:05.875834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.180 [2024-12-16 12:59:05.875866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.180 qpair failed and we were unable to recover it. 00:37:40.180 [2024-12-16 12:59:05.876068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.180 [2024-12-16 12:59:05.876100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.180 qpair failed and we were unable to recover it. 00:37:40.180 [2024-12-16 12:59:05.876373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.180 [2024-12-16 12:59:05.876406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.180 qpair failed and we were unable to recover it. 00:37:40.180 [2024-12-16 12:59:05.876529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.180 [2024-12-16 12:59:05.876559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.180 qpair failed and we were unable to recover it. 00:37:40.180 [2024-12-16 12:59:05.876756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.180 [2024-12-16 12:59:05.876788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.180 qpair failed and we were unable to recover it. 00:37:40.180 [2024-12-16 12:59:05.876968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.180 [2024-12-16 12:59:05.876999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.180 qpair failed and we were unable to recover it. 
00:37:40.180 [2024-12-16 12:59:05.877155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.180 [2024-12-16 12:59:05.877189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.180 qpair failed and we were unable to recover it. 00:37:40.180 [2024-12-16 12:59:05.877321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.180 [2024-12-16 12:59:05.877352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.180 qpair failed and we were unable to recover it. 00:37:40.180 [2024-12-16 12:59:05.877602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.180 [2024-12-16 12:59:05.877635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.180 qpair failed and we were unable to recover it. 00:37:40.180 [2024-12-16 12:59:05.877870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.180 [2024-12-16 12:59:05.877903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.180 qpair failed and we were unable to recover it. 00:37:40.180 [2024-12-16 12:59:05.878036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.180 [2024-12-16 12:59:05.878067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.180 qpair failed and we were unable to recover it. 00:37:40.180 [2024-12-16 12:59:05.878207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.180 [2024-12-16 12:59:05.878238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.180 qpair failed and we were unable to recover it. 00:37:40.180 [2024-12-16 12:59:05.878517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.180 [2024-12-16 12:59:05.878549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.180 qpair failed and we were unable to recover it. 00:37:40.180 [2024-12-16 12:59:05.878765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.180 [2024-12-16 12:59:05.878796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.180 qpair failed and we were unable to recover it. 00:37:40.180 [2024-12-16 12:59:05.878997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.180 [2024-12-16 12:59:05.879029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.180 qpair failed and we were unable to recover it. 00:37:40.180 [2024-12-16 12:59:05.879164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.180 [2024-12-16 12:59:05.879198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.180 qpair failed and we were unable to recover it. 
00:37:40.180 [2024-12-16 12:59:05.879414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.180 [2024-12-16 12:59:05.879446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.180 qpair failed and we were unable to recover it. 00:37:40.180 [2024-12-16 12:59:05.879652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.180 [2024-12-16 12:59:05.879685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.180 qpair failed and we were unable to recover it. 00:37:40.180 [2024-12-16 12:59:05.879869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.180 [2024-12-16 12:59:05.879900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.180 qpair failed and we were unable to recover it. 00:37:40.180 [2024-12-16 12:59:05.880008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.180 [2024-12-16 12:59:05.880039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.180 qpair failed and we were unable to recover it. 00:37:40.180 [2024-12-16 12:59:05.880217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.180 [2024-12-16 12:59:05.880250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.180 qpair failed and we were unable to recover it. 00:37:40.180 [2024-12-16 12:59:05.880438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.180 [2024-12-16 12:59:05.880469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.180 qpair failed and we were unable to recover it. 00:37:40.180 [2024-12-16 12:59:05.880654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.180 [2024-12-16 12:59:05.880685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.180 qpair failed and we were unable to recover it. 00:37:40.180 [2024-12-16 12:59:05.880822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.180 [2024-12-16 12:59:05.880852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.180 qpair failed and we were unable to recover it. 00:37:40.180 [2024-12-16 12:59:05.880965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.180 [2024-12-16 12:59:05.880998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.180 qpair failed and we were unable to recover it. 00:37:40.180 [2024-12-16 12:59:05.881249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.180 [2024-12-16 12:59:05.881282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.180 qpair failed and we were unable to recover it. 
00:37:40.180 [2024-12-16 12:59:05.881405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.180 [2024-12-16 12:59:05.881436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.180 qpair failed and we were unable to recover it. 00:37:40.180 [2024-12-16 12:59:05.881620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.180 [2024-12-16 12:59:05.881651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.180 qpair failed and we were unable to recover it. 00:37:40.180 [2024-12-16 12:59:05.881759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.180 [2024-12-16 12:59:05.881791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.180 qpair failed and we were unable to recover it. 00:37:40.180 [2024-12-16 12:59:05.881964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.180 [2024-12-16 12:59:05.881995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.180 qpair failed and we were unable to recover it. 00:37:40.180 [2024-12-16 12:59:05.882192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.180 [2024-12-16 12:59:05.882225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.180 qpair failed and we were unable to recover it. 00:37:40.180 [2024-12-16 12:59:05.882432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.180 [2024-12-16 12:59:05.882463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.180 qpair failed and we were unable to recover it. 00:37:40.180 [2024-12-16 12:59:05.882756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.180 [2024-12-16 12:59:05.882790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.180 qpair failed and we were unable to recover it. 00:37:40.180 [2024-12-16 12:59:05.882940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.180 [2024-12-16 12:59:05.882970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.180 qpair failed and we were unable to recover it. 00:37:40.180 [2024-12-16 12:59:05.883160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.180 [2024-12-16 12:59:05.883195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.180 qpair failed and we were unable to recover it. 00:37:40.180 [2024-12-16 12:59:05.883373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.180 [2024-12-16 12:59:05.883406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.180 qpair failed and we were unable to recover it. 
00:37:40.180 [2024-12-16 12:59:05.883513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.180 [2024-12-16 12:59:05.883545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.180 qpair failed and we were unable to recover it. 00:37:40.180 [2024-12-16 12:59:05.883790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.180 [2024-12-16 12:59:05.883829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.181 qpair failed and we were unable to recover it. 00:37:40.181 [2024-12-16 12:59:05.884019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.181 [2024-12-16 12:59:05.884051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.181 qpair failed and we were unable to recover it. 00:37:40.181 [2024-12-16 12:59:05.884262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.181 [2024-12-16 12:59:05.884294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.181 qpair failed and we were unable to recover it. 00:37:40.181 [2024-12-16 12:59:05.884431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.181 [2024-12-16 12:59:05.884462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.181 qpair failed and we were unable to recover it. 00:37:40.181 [2024-12-16 12:59:05.884648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.181 [2024-12-16 12:59:05.884680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.181 qpair failed and we were unable to recover it. 00:37:40.181 [2024-12-16 12:59:05.884814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.181 [2024-12-16 12:59:05.884847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.181 qpair failed and we were unable to recover it. 00:37:40.181 [2024-12-16 12:59:05.885025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.181 [2024-12-16 12:59:05.885056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.181 qpair failed and we were unable to recover it. 00:37:40.181 [2024-12-16 12:59:05.885273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.181 [2024-12-16 12:59:05.885305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.181 qpair failed and we were unable to recover it. 00:37:40.181 [2024-12-16 12:59:05.885443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.181 [2024-12-16 12:59:05.885473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.181 qpair failed and we were unable to recover it. 
00:37:40.181 [2024-12-16 12:59:05.885679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.181 [2024-12-16 12:59:05.885711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.181 qpair failed and we were unable to recover it. 00:37:40.181 [2024-12-16 12:59:05.885896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.181 [2024-12-16 12:59:05.885928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.181 qpair failed and we were unable to recover it. 00:37:40.181 [2024-12-16 12:59:05.886105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.181 [2024-12-16 12:59:05.886148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.181 qpair failed and we were unable to recover it. 00:37:40.181 [2024-12-16 12:59:05.886270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.181 [2024-12-16 12:59:05.886300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.181 qpair failed and we were unable to recover it. 00:37:40.181 [2024-12-16 12:59:05.886482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.181 [2024-12-16 12:59:05.886514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.181 qpair failed and we were unable to recover it. 00:37:40.181 [2024-12-16 12:59:05.886717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.181 [2024-12-16 12:59:05.886749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.181 qpair failed and we were unable to recover it. 00:37:40.181 [2024-12-16 12:59:05.886998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.181 [2024-12-16 12:59:05.887029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.181 qpair failed and we were unable to recover it. 00:37:40.181 [2024-12-16 12:59:05.887285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.181 [2024-12-16 12:59:05.887319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.181 qpair failed and we were unable to recover it. 00:37:40.181 [2024-12-16 12:59:05.887429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.181 [2024-12-16 12:59:05.887461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.181 qpair failed and we were unable to recover it. 00:37:40.181 [2024-12-16 12:59:05.887659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.181 [2024-12-16 12:59:05.887691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.181 qpair failed and we were unable to recover it. 
00:37:40.181 [2024-12-16 12:59:05.887949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.181 [2024-12-16 12:59:05.887981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.181 qpair failed and we were unable to recover it. 00:37:40.181 [2024-12-16 12:59:05.888191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.181 [2024-12-16 12:59:05.888225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.181 qpair failed and we were unable to recover it. 00:37:40.181 [2024-12-16 12:59:05.888412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.181 [2024-12-16 12:59:05.888443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.181 qpair failed and we were unable to recover it. 00:37:40.181 [2024-12-16 12:59:05.888667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.181 [2024-12-16 12:59:05.888698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.181 qpair failed and we were unable to recover it. 00:37:40.181 [2024-12-16 12:59:05.888843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.181 [2024-12-16 12:59:05.888874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.181 qpair failed and we were unable to recover it. 00:37:40.181 [2024-12-16 12:59:05.889070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.181 [2024-12-16 12:59:05.889102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.181 qpair failed and we were unable to recover it. 00:37:40.181 [2024-12-16 12:59:05.889305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.181 [2024-12-16 12:59:05.889337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.181 qpair failed and we were unable to recover it. 00:37:40.181 [2024-12-16 12:59:05.889471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.181 [2024-12-16 12:59:05.889503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.181 qpair failed and we were unable to recover it. 00:37:40.181 [2024-12-16 12:59:05.889611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.181 [2024-12-16 12:59:05.889648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.181 qpair failed and we were unable to recover it. 00:37:40.181 [2024-12-16 12:59:05.889766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.181 [2024-12-16 12:59:05.889798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.181 qpair failed and we were unable to recover it. 
00:37:40.181 [2024-12-16 12:59:05.890067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.181 [2024-12-16 12:59:05.890099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.181 qpair failed and we were unable to recover it. 00:37:40.181 [2024-12-16 12:59:05.890344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.181 [2024-12-16 12:59:05.890378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.181 qpair failed and we were unable to recover it. 00:37:40.181 [2024-12-16 12:59:05.890585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.181 [2024-12-16 12:59:05.890616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.181 qpair failed and we were unable to recover it. 00:37:40.181 [2024-12-16 12:59:05.890749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.181 [2024-12-16 12:59:05.890781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.181 qpair failed and we were unable to recover it. 00:37:40.181 [2024-12-16 12:59:05.890959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.181 [2024-12-16 12:59:05.890990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.181 qpair failed and we were unable to recover it. 00:37:40.181 [2024-12-16 12:59:05.891103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.181 [2024-12-16 12:59:05.891149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.181 qpair failed and we were unable to recover it. 00:37:40.181 [2024-12-16 12:59:05.891330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.181 [2024-12-16 12:59:05.891362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.181 qpair failed and we were unable to recover it. 00:37:40.181 [2024-12-16 12:59:05.891555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.181 [2024-12-16 12:59:05.891587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.181 qpair failed and we were unable to recover it. 00:37:40.181 [2024-12-16 12:59:05.891764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.181 [2024-12-16 12:59:05.891794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.181 qpair failed and we were unable to recover it. 00:37:40.181 [2024-12-16 12:59:05.891991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.181 [2024-12-16 12:59:05.892022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.181 qpair failed and we were unable to recover it. 
00:37:40.181 [2024-12-16 12:59:05.892284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.181 [2024-12-16 12:59:05.892317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.182 qpair failed and we were unable to recover it. 00:37:40.182 [2024-12-16 12:59:05.892512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.182 [2024-12-16 12:59:05.892543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.182 qpair failed and we were unable to recover it. 00:37:40.182 [2024-12-16 12:59:05.892670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.182 [2024-12-16 12:59:05.892700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.182 qpair failed and we were unable to recover it. 00:37:40.182 [2024-12-16 12:59:05.892826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.182 [2024-12-16 12:59:05.892858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.182 qpair failed and we were unable to recover it. 00:37:40.182 [2024-12-16 12:59:05.892967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.182 [2024-12-16 12:59:05.892998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.182 qpair failed and we were unable to recover it. 00:37:40.182 [2024-12-16 12:59:05.893141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.182 [2024-12-16 12:59:05.893174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.182 qpair failed and we were unable to recover it. 00:37:40.182 [2024-12-16 12:59:05.893283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.182 [2024-12-16 12:59:05.893313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.182 qpair failed and we were unable to recover it. 00:37:40.182 [2024-12-16 12:59:05.893445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.182 [2024-12-16 12:59:05.893476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.182 qpair failed and we were unable to recover it. 00:37:40.182 [2024-12-16 12:59:05.893655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.182 [2024-12-16 12:59:05.893688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.182 qpair failed and we were unable to recover it. 00:37:40.182 [2024-12-16 12:59:05.893867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.182 [2024-12-16 12:59:05.893899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.182 qpair failed and we were unable to recover it. 
00:37:40.182 [2024-12-16 12:59:05.894028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.182 [2024-12-16 12:59:05.894058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.182 qpair failed and we were unable to recover it. 00:37:40.182 [2024-12-16 12:59:05.894392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.182 [2024-12-16 12:59:05.894425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.182 qpair failed and we were unable to recover it. 00:37:40.182 [2024-12-16 12:59:05.894544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.182 [2024-12-16 12:59:05.894575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.182 qpair failed and we were unable to recover it. 00:37:40.182 [2024-12-16 12:59:05.894769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.182 [2024-12-16 12:59:05.894801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.182 qpair failed and we were unable to recover it. 00:37:40.182 [2024-12-16 12:59:05.895051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.182 [2024-12-16 12:59:05.895082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.182 qpair failed and we were unable to recover it. 00:37:40.182 [2024-12-16 12:59:05.895301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.182 [2024-12-16 12:59:05.895334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.182 qpair failed and we were unable to recover it. 00:37:40.182 [2024-12-16 12:59:05.895532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.182 [2024-12-16 12:59:05.895564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.182 qpair failed and we were unable to recover it. 00:37:40.182 [2024-12-16 12:59:05.895816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.182 [2024-12-16 12:59:05.895848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.182 qpair failed and we were unable to recover it. 00:37:40.182 [2024-12-16 12:59:05.895960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.182 [2024-12-16 12:59:05.895991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.182 qpair failed and we were unable to recover it. 00:37:40.182 [2024-12-16 12:59:05.896286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.182 [2024-12-16 12:59:05.896320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.182 qpair failed and we were unable to recover it. 
00:37:40.182 [2024-12-16 12:59:05.896512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.182 [2024-12-16 12:59:05.896544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.182 qpair failed and we were unable to recover it. 00:37:40.182 [2024-12-16 12:59:05.896656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.182 [2024-12-16 12:59:05.896688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.182 qpair failed and we were unable to recover it. 00:37:40.182 [2024-12-16 12:59:05.896815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.182 [2024-12-16 12:59:05.896845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.182 qpair failed and we were unable to recover it. 00:37:40.182 [2024-12-16 12:59:05.897039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.182 [2024-12-16 12:59:05.897070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.182 qpair failed and we were unable to recover it. 00:37:40.182 [2024-12-16 12:59:05.897289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.182 [2024-12-16 12:59:05.897322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.182 qpair failed and we were unable to recover it. 00:37:40.182 [2024-12-16 12:59:05.897508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.182 [2024-12-16 12:59:05.897540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.182 qpair failed and we were unable to recover it. 00:37:40.182 [2024-12-16 12:59:05.897788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.182 [2024-12-16 12:59:05.897819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.182 qpair failed and we were unable to recover it. 00:37:40.182 [2024-12-16 12:59:05.898019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.182 [2024-12-16 12:59:05.898051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.182 qpair failed and we were unable to recover it. 00:37:40.182 [2024-12-16 12:59:05.898294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.182 [2024-12-16 12:59:05.898328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.182 qpair failed and we were unable to recover it. 00:37:40.182 [2024-12-16 12:59:05.898645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.182 [2024-12-16 12:59:05.898677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.182 qpair failed and we were unable to recover it. 
00:37:40.182 [2024-12-16 12:59:05.898960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.182 [2024-12-16 12:59:05.899001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.182 qpair failed and we were unable to recover it. 00:37:40.182 [2024-12-16 12:59:05.899195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.182 [2024-12-16 12:59:05.899228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.182 qpair failed and we were unable to recover it. 00:37:40.182 [2024-12-16 12:59:05.899429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.182 [2024-12-16 12:59:05.899461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.182 qpair failed and we were unable to recover it. 00:37:40.182 [2024-12-16 12:59:05.899708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.182 [2024-12-16 12:59:05.899741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.182 qpair failed and we were unable to recover it. 00:37:40.182 [2024-12-16 12:59:05.899864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.182 [2024-12-16 12:59:05.899894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.182 qpair failed and we were unable to recover it. 00:37:40.182 [2024-12-16 12:59:05.900066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.182 [2024-12-16 12:59:05.900097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.182 qpair failed and we were unable to recover it. 00:37:40.182 [2024-12-16 12:59:05.900399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.182 [2024-12-16 12:59:05.900432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.182 qpair failed and we were unable to recover it. 00:37:40.182 [2024-12-16 12:59:05.900705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.182 [2024-12-16 12:59:05.900737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.182 qpair failed and we were unable to recover it. 00:37:40.182 [2024-12-16 12:59:05.900946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.182 [2024-12-16 12:59:05.900977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.183 qpair failed and we were unable to recover it. 00:37:40.183 [2024-12-16 12:59:05.901172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.183 [2024-12-16 12:59:05.901206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.183 qpair failed and we were unable to recover it. 
00:37:40.183 [2024-12-16 12:59:05.901383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.183 [2024-12-16 12:59:05.901415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.183 qpair failed and we were unable to recover it. 00:37:40.183 [2024-12-16 12:59:05.901605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.183 [2024-12-16 12:59:05.901636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.183 qpair failed and we were unable to recover it. 00:37:40.183 [2024-12-16 12:59:05.901768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.183 [2024-12-16 12:59:05.901799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.183 qpair failed and we were unable to recover it. 00:37:40.183 [2024-12-16 12:59:05.901981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.183 [2024-12-16 12:59:05.902013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.183 qpair failed and we were unable to recover it. 00:37:40.183 [2024-12-16 12:59:05.902140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.183 [2024-12-16 12:59:05.902174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.183 qpair failed and we were unable to recover it. 00:37:40.183 [2024-12-16 12:59:05.902440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.183 [2024-12-16 12:59:05.902472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.183 qpair failed and we were unable to recover it. 00:37:40.183 [2024-12-16 12:59:05.902600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.183 [2024-12-16 12:59:05.902630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.183 qpair failed and we were unable to recover it. 00:37:40.183 [2024-12-16 12:59:05.902877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.183 [2024-12-16 12:59:05.902908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.183 qpair failed and we were unable to recover it. 00:37:40.183 [2024-12-16 12:59:05.903153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.183 [2024-12-16 12:59:05.903185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.183 qpair failed and we were unable to recover it. 00:37:40.183 [2024-12-16 12:59:05.903297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.183 [2024-12-16 12:59:05.903328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.183 qpair failed and we were unable to recover it. 
00:37:40.183 [2024-12-16 12:59:05.903445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.183 [2024-12-16 12:59:05.903477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.183 qpair failed and we were unable to recover it. 00:37:40.183 [2024-12-16 12:59:05.903747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.183 [2024-12-16 12:59:05.903778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.183 qpair failed and we were unable to recover it. 00:37:40.183 [2024-12-16 12:59:05.904038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.183 [2024-12-16 12:59:05.904070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.183 qpair failed and we were unable to recover it. 00:37:40.183 [2024-12-16 12:59:05.904268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.183 [2024-12-16 12:59:05.904301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.183 qpair failed and we were unable to recover it. 00:37:40.183 [2024-12-16 12:59:05.904413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.183 [2024-12-16 12:59:05.904444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.183 qpair failed and we were unable to recover it. 00:37:40.183 [2024-12-16 12:59:05.904567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.183 [2024-12-16 12:59:05.904598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.183 qpair failed and we were unable to recover it. 00:37:40.183 [2024-12-16 12:59:05.904870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.183 [2024-12-16 12:59:05.904906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.183 qpair failed and we were unable to recover it. 00:37:40.183 [2024-12-16 12:59:05.905081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.183 [2024-12-16 12:59:05.905125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.183 qpair failed and we were unable to recover it. 00:37:40.183 [2024-12-16 12:59:05.905250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.183 [2024-12-16 12:59:05.905281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.183 qpair failed and we were unable to recover it. 00:37:40.183 [2024-12-16 12:59:05.905400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.183 [2024-12-16 12:59:05.905431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.183 qpair failed and we were unable to recover it. 
00:37:40.183 [2024-12-16 12:59:05.905622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.183 [2024-12-16 12:59:05.905652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.183 qpair failed and we were unable to recover it. 00:37:40.183 [2024-12-16 12:59:05.905826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.183 [2024-12-16 12:59:05.905856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.183 qpair failed and we were unable to recover it. 00:37:40.183 [2024-12-16 12:59:05.906036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.183 [2024-12-16 12:59:05.906068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.183 qpair failed and we were unable to recover it. 00:37:40.183 [2024-12-16 12:59:05.906363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.183 [2024-12-16 12:59:05.906397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.183 qpair failed and we were unable to recover it. 00:37:40.183 [2024-12-16 12:59:05.906684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.183 [2024-12-16 12:59:05.906715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.183 qpair failed and we were unable to recover it. 00:37:40.183 [2024-12-16 12:59:05.906848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.183 [2024-12-16 12:59:05.906880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.183 qpair failed and we were unable to recover it. 00:37:40.183 [2024-12-16 12:59:05.907137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.183 [2024-12-16 12:59:05.907171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.183 qpair failed and we were unable to recover it. 00:37:40.183 [2024-12-16 12:59:05.907372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.183 [2024-12-16 12:59:05.907403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.183 qpair failed and we were unable to recover it. 00:37:40.183 [2024-12-16 12:59:05.907704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.183 [2024-12-16 12:59:05.907735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.183 qpair failed and we were unable to recover it. 00:37:40.183 [2024-12-16 12:59:05.907919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.183 [2024-12-16 12:59:05.907952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.183 qpair failed and we were unable to recover it. 
00:37:40.183 [2024-12-16 12:59:05.908150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.184 [2024-12-16 12:59:05.908184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.184 qpair failed and we were unable to recover it. 00:37:40.184 [2024-12-16 12:59:05.908458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.184 [2024-12-16 12:59:05.908490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.184 qpair failed and we were unable to recover it. 00:37:40.184 [2024-12-16 12:59:05.908671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.184 [2024-12-16 12:59:05.908703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.184 qpair failed and we were unable to recover it. 00:37:40.184 [2024-12-16 12:59:05.908901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.184 [2024-12-16 12:59:05.908932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.184 qpair failed and we were unable to recover it. 00:37:40.184 [2024-12-16 12:59:05.909061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.184 [2024-12-16 12:59:05.909091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.184 qpair failed and we were unable to recover it. 00:37:40.184 [2024-12-16 12:59:05.909307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.184 [2024-12-16 12:59:05.909338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.184 qpair failed and we were unable to recover it. 00:37:40.184 [2024-12-16 12:59:05.909518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.184 [2024-12-16 12:59:05.909548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.184 qpair failed and we were unable to recover it. 00:37:40.184 [2024-12-16 12:59:05.909682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.184 [2024-12-16 12:59:05.909714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.184 qpair failed and we were unable to recover it. 00:37:40.184 [2024-12-16 12:59:05.909822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.184 [2024-12-16 12:59:05.909853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.184 qpair failed and we were unable to recover it. 00:37:40.184 [2024-12-16 12:59:05.909978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.184 [2024-12-16 12:59:05.910009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.184 qpair failed and we were unable to recover it. 
00:37:40.184 [2024-12-16 12:59:05.910255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.184 [2024-12-16 12:59:05.910289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.184 qpair failed and we were unable to recover it. 00:37:40.184 [2024-12-16 12:59:05.910464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.184 [2024-12-16 12:59:05.910495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.184 qpair failed and we were unable to recover it. 00:37:40.184 [2024-12-16 12:59:05.910610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.184 [2024-12-16 12:59:05.910641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.184 qpair failed and we were unable to recover it. 00:37:40.184 [2024-12-16 12:59:05.910759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.184 [2024-12-16 12:59:05.910796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.184 qpair failed and we were unable to recover it. 00:37:40.184 [2024-12-16 12:59:05.910971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.184 [2024-12-16 12:59:05.911003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.184 qpair failed and we were unable to recover it. 00:37:40.184 [2024-12-16 12:59:05.911276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.184 [2024-12-16 12:59:05.911308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.184 qpair failed and we were unable to recover it. 00:37:40.184 [2024-12-16 12:59:05.911431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.184 [2024-12-16 12:59:05.911461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.184 qpair failed and we were unable to recover it. 00:37:40.184 [2024-12-16 12:59:05.911644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.184 [2024-12-16 12:59:05.911675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.184 qpair failed and we were unable to recover it. 00:37:40.184 [2024-12-16 12:59:05.911933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.184 [2024-12-16 12:59:05.911963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.184 qpair failed and we were unable to recover it. 00:37:40.184 [2024-12-16 12:59:05.912187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.184 [2024-12-16 12:59:05.912220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.184 qpair failed and we were unable to recover it. 
00:37:40.184 [2024-12-16 12:59:05.912398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.184 [2024-12-16 12:59:05.912429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.184 qpair failed and we were unable to recover it. 00:37:40.184 [2024-12-16 12:59:05.912604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.184 [2024-12-16 12:59:05.912637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.184 qpair failed and we were unable to recover it. 00:37:40.184 [2024-12-16 12:59:05.912889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.184 [2024-12-16 12:59:05.912921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.184 qpair failed and we were unable to recover it. 00:37:40.184 [2024-12-16 12:59:05.913122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.184 [2024-12-16 12:59:05.913155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.184 qpair failed and we were unable to recover it. 00:37:40.184 [2024-12-16 12:59:05.913330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.184 [2024-12-16 12:59:05.913362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.184 qpair failed and we were unable to recover it. 00:37:40.184 [2024-12-16 12:59:05.913494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.184 [2024-12-16 12:59:05.913526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.184 qpair failed and we were unable to recover it. 00:37:40.184 [2024-12-16 12:59:05.913645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.184 [2024-12-16 12:59:05.913675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.184 qpair failed and we were unable to recover it. 00:37:40.184 [2024-12-16 12:59:05.913871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.184 [2024-12-16 12:59:05.913904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.184 qpair failed and we were unable to recover it. 00:37:40.184 [2024-12-16 12:59:05.914160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.184 [2024-12-16 12:59:05.914194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.184 qpair failed and we were unable to recover it. 00:37:40.184 [2024-12-16 12:59:05.914378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.184 [2024-12-16 12:59:05.914409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.184 qpair failed and we were unable to recover it. 
00:37:40.184 [2024-12-16 12:59:05.914611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.184 [2024-12-16 12:59:05.914643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.184 qpair failed and we were unable to recover it. 00:37:40.184 [2024-12-16 12:59:05.914911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.184 [2024-12-16 12:59:05.914942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.184 qpair failed and we were unable to recover it. 00:37:40.184 [2024-12-16 12:59:05.915186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.184 [2024-12-16 12:59:05.915220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.184 qpair failed and we were unable to recover it. 00:37:40.184 [2024-12-16 12:59:05.915393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.184 [2024-12-16 12:59:05.915425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.184 qpair failed and we were unable to recover it. 00:37:40.184 [2024-12-16 12:59:05.915646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.184 [2024-12-16 12:59:05.915677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.184 qpair failed and we were unable to recover it. 00:37:40.184 [2024-12-16 12:59:05.915960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.184 [2024-12-16 12:59:05.915992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.184 qpair failed and we were unable to recover it. 00:37:40.184 [2024-12-16 12:59:05.916180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.184 [2024-12-16 12:59:05.916214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.184 qpair failed and we were unable to recover it. 00:37:40.184 [2024-12-16 12:59:05.916458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.184 [2024-12-16 12:59:05.916490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.184 qpair failed and we were unable to recover it. 00:37:40.184 [2024-12-16 12:59:05.916664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.184 [2024-12-16 12:59:05.916696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.184 qpair failed and we were unable to recover it. 00:37:40.184 [2024-12-16 12:59:05.916872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.185 [2024-12-16 12:59:05.916905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.185 qpair failed and we were unable to recover it. 
00:37:40.185 [2024-12-16 12:59:05.917096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.185 [2024-12-16 12:59:05.917145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.185 qpair failed and we were unable to recover it. 00:37:40.185 [2024-12-16 12:59:05.917344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.185 [2024-12-16 12:59:05.917375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.185 qpair failed and we were unable to recover it. 00:37:40.185 [2024-12-16 12:59:05.917642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.185 [2024-12-16 12:59:05.917674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.185 qpair failed and we were unable to recover it. 00:37:40.185 [2024-12-16 12:59:05.917890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.185 [2024-12-16 12:59:05.917920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.185 qpair failed and we were unable to recover it. 00:37:40.185 [2024-12-16 12:59:05.918058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.185 [2024-12-16 12:59:05.918089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.185 qpair failed and we were unable to recover it. 00:37:40.185 [2024-12-16 12:59:05.918319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.185 [2024-12-16 12:59:05.918368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.185 qpair failed and we were unable to recover it. 00:37:40.185 [2024-12-16 12:59:05.918489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.185 [2024-12-16 12:59:05.918520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.185 qpair failed and we were unable to recover it. 00:37:40.185 [2024-12-16 12:59:05.918711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.185 [2024-12-16 12:59:05.918742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.185 qpair failed and we were unable to recover it. 00:37:40.185 [2024-12-16 12:59:05.919009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.185 [2024-12-16 12:59:05.919041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.185 qpair failed and we were unable to recover it. 00:37:40.185 [2024-12-16 12:59:05.919254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.185 [2024-12-16 12:59:05.919287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.185 qpair failed and we were unable to recover it. 
00:37:40.185 [2024-12-16 12:59:05.919423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.185 [2024-12-16 12:59:05.919454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.185 qpair failed and we were unable to recover it.
00:37:40.185 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeats for tqpair=0x1588110 from 12:59:05.919635 through 12:59:05.925417 ...]
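Note: errno 111 on Linux is ECONNREFUSED, i.e. the target at 10.0.0.2 refused the TCP connection because nothing was accepting on port 4420 (the standard NVMe/TCP port) at that moment. A minimal stand-alone sketch of the same failure, using plain BSD sockets rather than SPDK's posix.c path, run while no listener is up on the target:

/*
 * Illustrative sketch only (not SPDK source): a bare connect() against
 * 10.0.0.2:4420 that fails the same way the log shows when no NVMe/TCP
 * listener is accepting there -- connect() returns -1 with errno = 111
 * (ECONNREFUSED) on Linux.
 */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr;
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    if (fd < 0) {
        perror("socket");
        return 1;
    }
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With no listener on the target this prints errno = 111 */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}

Built with any C compiler and run against a host with no listener on port 4420, it prints the same "connect() failed, errno = 111" that posix_sock_create logs for every qpair attempt above.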
00:37:40.186 [2024-12-16 12:59:05.925603] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15960b0 is same with the state(6) to be set
00:37:40.186 [2024-12-16 12:59:05.926036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.186 [2024-12-16 12:59:05.926108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420
00:37:40.186 qpair failed and we were unable to recover it.
00:37:40.186 [... the same three-line sequence repeats for tqpair=0x7f84e0000b90 from 12:59:05.926274 through 12:59:05.945495 ...]
00:37:40.188 [2024-12-16 12:59:05.945686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.188 [2024-12-16 12:59:05.945768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420
00:37:40.188 qpair failed and we were unable to recover it.
00:37:40.188 [... the same three-line sequence repeats for tqpair=0x7f84ec000b90 from 12:59:05.946054 through 12:59:05.946637 ...]
00:37:40.188 [... remaining entries are the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence for tqpair=0x7f84ec000b90, repeating from 12:59:05.946760 through 12:59:05.963478 ...]
00:37:40.190 [2024-12-16 12:59:05.963613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.190 [2024-12-16 12:59:05.963645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.190 qpair failed and we were unable to recover it. 00:37:40.190 [2024-12-16 12:59:05.963910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.190 [2024-12-16 12:59:05.963941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.190 qpair failed and we were unable to recover it. 00:37:40.190 [2024-12-16 12:59:05.964077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.190 [2024-12-16 12:59:05.964109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.190 qpair failed and we were unable to recover it. 00:37:40.190 [2024-12-16 12:59:05.964293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.190 [2024-12-16 12:59:05.964326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.190 qpair failed and we were unable to recover it. 00:37:40.190 [2024-12-16 12:59:05.964501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.190 [2024-12-16 12:59:05.964532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.191 qpair failed and we were unable to recover it. 00:37:40.191 [2024-12-16 12:59:05.964648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.191 [2024-12-16 12:59:05.964680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.191 qpair failed and we were unable to recover it. 00:37:40.191 [2024-12-16 12:59:05.964879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.191 [2024-12-16 12:59:05.964911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.191 qpair failed and we were unable to recover it. 00:37:40.191 [2024-12-16 12:59:05.965083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.191 [2024-12-16 12:59:05.965130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.191 qpair failed and we were unable to recover it. 00:37:40.191 [2024-12-16 12:59:05.965429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.191 [2024-12-16 12:59:05.965464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.191 qpair failed and we were unable to recover it. 00:37:40.191 [2024-12-16 12:59:05.965675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.191 [2024-12-16 12:59:05.965713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.191 qpair failed and we were unable to recover it. 
00:37:40.191 [2024-12-16 12:59:05.965921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.191 [2024-12-16 12:59:05.965953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.191 qpair failed and we were unable to recover it. 00:37:40.191 [2024-12-16 12:59:05.966077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.191 [2024-12-16 12:59:05.966109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.191 qpair failed and we were unable to recover it. 00:37:40.191 [2024-12-16 12:59:05.966413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.191 [2024-12-16 12:59:05.966445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.191 qpair failed and we were unable to recover it. 00:37:40.191 [2024-12-16 12:59:05.966685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.191 [2024-12-16 12:59:05.966716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.191 qpair failed and we were unable to recover it. 00:37:40.191 [2024-12-16 12:59:05.966900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.191 [2024-12-16 12:59:05.966932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.191 qpair failed and we were unable to recover it. 00:37:40.191 [2024-12-16 12:59:05.967155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.191 [2024-12-16 12:59:05.967189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.191 qpair failed and we were unable to recover it. 00:37:40.191 [2024-12-16 12:59:05.967397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.191 [2024-12-16 12:59:05.967429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.191 qpair failed and we were unable to recover it. 00:37:40.191 [2024-12-16 12:59:05.967713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.191 [2024-12-16 12:59:05.967745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.191 qpair failed and we were unable to recover it. 00:37:40.191 [2024-12-16 12:59:05.967866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.191 [2024-12-16 12:59:05.967897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.191 qpair failed and we were unable to recover it. 00:37:40.191 [2024-12-16 12:59:05.968091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.191 [2024-12-16 12:59:05.968130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.191 qpair failed and we were unable to recover it. 
00:37:40.191 [2024-12-16 12:59:05.968308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.191 [2024-12-16 12:59:05.968341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.191 qpair failed and we were unable to recover it. 00:37:40.191 [2024-12-16 12:59:05.968521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.191 [2024-12-16 12:59:05.968553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.191 qpair failed and we were unable to recover it. 00:37:40.191 [2024-12-16 12:59:05.968677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.191 [2024-12-16 12:59:05.968710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.191 qpair failed and we were unable to recover it. 00:37:40.191 [2024-12-16 12:59:05.968957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.191 [2024-12-16 12:59:05.968989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.191 qpair failed and we were unable to recover it. 00:37:40.191 [2024-12-16 12:59:05.969091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.191 [2024-12-16 12:59:05.969136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.191 qpair failed and we were unable to recover it. 00:37:40.191 [2024-12-16 12:59:05.969263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.191 [2024-12-16 12:59:05.969297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.191 qpair failed and we were unable to recover it. 00:37:40.191 [2024-12-16 12:59:05.969608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.191 [2024-12-16 12:59:05.969640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.191 qpair failed and we were unable to recover it. 00:37:40.191 [2024-12-16 12:59:05.969879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.191 [2024-12-16 12:59:05.969910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.191 qpair failed and we were unable to recover it. 00:37:40.191 [2024-12-16 12:59:05.970024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.191 [2024-12-16 12:59:05.970056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.191 qpair failed and we were unable to recover it. 00:37:40.191 [2024-12-16 12:59:05.970207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.191 [2024-12-16 12:59:05.970239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.191 qpair failed and we were unable to recover it. 
00:37:40.191 [2024-12-16 12:59:05.970504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.191 [2024-12-16 12:59:05.970537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.191 qpair failed and we were unable to recover it. 00:37:40.191 [2024-12-16 12:59:05.970705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.191 [2024-12-16 12:59:05.970754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.191 qpair failed and we were unable to recover it. 00:37:40.191 [2024-12-16 12:59:05.970996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.191 [2024-12-16 12:59:05.971027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.191 qpair failed and we were unable to recover it. 00:37:40.191 [2024-12-16 12:59:05.971197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.191 [2024-12-16 12:59:05.971230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.191 qpair failed and we were unable to recover it. 00:37:40.191 [2024-12-16 12:59:05.971491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.191 [2024-12-16 12:59:05.971523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.191 qpair failed and we were unable to recover it. 00:37:40.191 [2024-12-16 12:59:05.971708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.191 [2024-12-16 12:59:05.971740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.191 qpair failed and we were unable to recover it. 00:37:40.191 [2024-12-16 12:59:05.971926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.191 [2024-12-16 12:59:05.971995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.191 qpair failed and we were unable to recover it. 00:37:40.191 [2024-12-16 12:59:05.972261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.191 [2024-12-16 12:59:05.972298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.191 qpair failed and we were unable to recover it. 00:37:40.191 [2024-12-16 12:59:05.972428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.191 [2024-12-16 12:59:05.972460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.191 qpair failed and we were unable to recover it. 00:37:40.191 [2024-12-16 12:59:05.972643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.191 [2024-12-16 12:59:05.972676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.191 qpair failed and we were unable to recover it. 
00:37:40.191 [2024-12-16 12:59:05.972916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.191 [2024-12-16 12:59:05.972948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.191 qpair failed and we were unable to recover it. 00:37:40.191 [2024-12-16 12:59:05.973194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.191 [2024-12-16 12:59:05.973227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.191 qpair failed and we were unable to recover it. 00:37:40.191 [2024-12-16 12:59:05.973440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.191 [2024-12-16 12:59:05.973471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.191 qpair failed and we were unable to recover it. 00:37:40.192 [2024-12-16 12:59:05.973733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.192 [2024-12-16 12:59:05.973765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.192 qpair failed and we were unable to recover it. 00:37:40.192 [2024-12-16 12:59:05.973954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.192 [2024-12-16 12:59:05.973986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.192 qpair failed and we were unable to recover it. 00:37:40.192 [2024-12-16 12:59:05.974107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.192 [2024-12-16 12:59:05.974150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.192 qpair failed and we were unable to recover it. 00:37:40.192 [2024-12-16 12:59:05.974345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.192 [2024-12-16 12:59:05.974378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.192 qpair failed and we were unable to recover it. 00:37:40.192 [2024-12-16 12:59:05.974503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.192 [2024-12-16 12:59:05.974535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.192 qpair failed and we were unable to recover it. 00:37:40.192 [2024-12-16 12:59:05.974644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.192 [2024-12-16 12:59:05.974676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.192 qpair failed and we were unable to recover it. 00:37:40.192 [2024-12-16 12:59:05.974860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.192 [2024-12-16 12:59:05.974901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.192 qpair failed and we were unable to recover it. 
00:37:40.192 [2024-12-16 12:59:05.975188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.192 [2024-12-16 12:59:05.975221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.192 qpair failed and we were unable to recover it. 00:37:40.192 [2024-12-16 12:59:05.975400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.192 [2024-12-16 12:59:05.975432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.192 qpair failed and we were unable to recover it. 00:37:40.192 [2024-12-16 12:59:05.975615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.192 [2024-12-16 12:59:05.975646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.192 qpair failed and we were unable to recover it. 00:37:40.192 [2024-12-16 12:59:05.975844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.192 [2024-12-16 12:59:05.975875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.192 qpair failed and we were unable to recover it. 00:37:40.192 [2024-12-16 12:59:05.976075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.192 [2024-12-16 12:59:05.976107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.192 qpair failed and we were unable to recover it. 00:37:40.192 [2024-12-16 12:59:05.976291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.192 [2024-12-16 12:59:05.976322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.192 qpair failed and we were unable to recover it. 00:37:40.192 [2024-12-16 12:59:05.976453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.192 [2024-12-16 12:59:05.976484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.192 qpair failed and we were unable to recover it. 00:37:40.192 [2024-12-16 12:59:05.976683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.192 [2024-12-16 12:59:05.976714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.192 qpair failed and we were unable to recover it. 00:37:40.192 [2024-12-16 12:59:05.976821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.192 [2024-12-16 12:59:05.976852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.192 qpair failed and we were unable to recover it. 00:37:40.192 [2024-12-16 12:59:05.977123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.192 [2024-12-16 12:59:05.977156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.192 qpair failed and we were unable to recover it. 
00:37:40.192 [2024-12-16 12:59:05.977273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.192 [2024-12-16 12:59:05.977305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.192 qpair failed and we were unable to recover it. 00:37:40.192 [2024-12-16 12:59:05.977422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.192 [2024-12-16 12:59:05.977454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.192 qpair failed and we were unable to recover it. 00:37:40.192 [2024-12-16 12:59:05.977690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.192 [2024-12-16 12:59:05.977722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.192 qpair failed and we were unable to recover it. 00:37:40.192 [2024-12-16 12:59:05.977870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.192 [2024-12-16 12:59:05.977903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.192 qpair failed and we were unable to recover it. 00:37:40.192 [2024-12-16 12:59:05.978074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.192 [2024-12-16 12:59:05.978105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.192 qpair failed and we were unable to recover it. 00:37:40.192 [2024-12-16 12:59:05.978287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.192 [2024-12-16 12:59:05.978320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.192 qpair failed and we were unable to recover it. 00:37:40.192 [2024-12-16 12:59:05.978513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.192 [2024-12-16 12:59:05.978544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.192 qpair failed and we were unable to recover it. 00:37:40.192 [2024-12-16 12:59:05.978739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.192 [2024-12-16 12:59:05.978770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.192 qpair failed and we were unable to recover it. 00:37:40.192 [2024-12-16 12:59:05.978979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.192 [2024-12-16 12:59:05.979011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.192 qpair failed and we were unable to recover it. 00:37:40.192 [2024-12-16 12:59:05.979144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.192 [2024-12-16 12:59:05.979176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.192 qpair failed and we were unable to recover it. 
00:37:40.192 [2024-12-16 12:59:05.979294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.192 [2024-12-16 12:59:05.979326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.192 qpair failed and we were unable to recover it. 00:37:40.192 [2024-12-16 12:59:05.979466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.192 [2024-12-16 12:59:05.979498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.192 qpair failed and we were unable to recover it. 00:37:40.192 [2024-12-16 12:59:05.979607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.192 [2024-12-16 12:59:05.979638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.192 qpair failed and we were unable to recover it. 00:37:40.192 [2024-12-16 12:59:05.979876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.192 [2024-12-16 12:59:05.979908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.192 qpair failed and we were unable to recover it. 00:37:40.192 [2024-12-16 12:59:05.980029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.192 [2024-12-16 12:59:05.980061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.192 qpair failed and we were unable to recover it. 00:37:40.192 [2024-12-16 12:59:05.980255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.193 [2024-12-16 12:59:05.980287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.193 qpair failed and we were unable to recover it. 00:37:40.193 [2024-12-16 12:59:05.980468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.193 [2024-12-16 12:59:05.980540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.193 qpair failed and we were unable to recover it. 00:37:40.193 [2024-12-16 12:59:05.980738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.193 [2024-12-16 12:59:05.980773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.193 qpair failed and we were unable to recover it. 00:37:40.193 [2024-12-16 12:59:05.980894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.193 [2024-12-16 12:59:05.980926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.193 qpair failed and we were unable to recover it. 00:37:40.193 [2024-12-16 12:59:05.981105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.193 [2024-12-16 12:59:05.981159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.193 qpair failed and we were unable to recover it. 
00:37:40.193 [2024-12-16 12:59:05.981397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.193 [2024-12-16 12:59:05.981429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.193 qpair failed and we were unable to recover it. 00:37:40.193 [2024-12-16 12:59:05.981537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.193 [2024-12-16 12:59:05.981568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.193 qpair failed and we were unable to recover it. 00:37:40.193 [2024-12-16 12:59:05.981758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.193 [2024-12-16 12:59:05.981792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.193 qpair failed and we were unable to recover it. 00:37:40.193 [2024-12-16 12:59:05.981923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.193 [2024-12-16 12:59:05.981954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.193 qpair failed and we were unable to recover it. 00:37:40.193 [2024-12-16 12:59:05.982151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.193 [2024-12-16 12:59:05.982183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.193 qpair failed and we were unable to recover it. 00:37:40.193 [2024-12-16 12:59:05.982289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.193 [2024-12-16 12:59:05.982320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.193 qpair failed and we were unable to recover it. 00:37:40.193 [2024-12-16 12:59:05.982444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.193 [2024-12-16 12:59:05.982475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.193 qpair failed and we were unable to recover it. 00:37:40.193 [2024-12-16 12:59:05.982645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.193 [2024-12-16 12:59:05.982677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.193 qpair failed and we were unable to recover it. 00:37:40.193 [2024-12-16 12:59:05.982804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.193 [2024-12-16 12:59:05.982836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.193 qpair failed and we were unable to recover it. 00:37:40.193 [2024-12-16 12:59:05.983086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.193 [2024-12-16 12:59:05.983128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.193 qpair failed and we were unable to recover it. 
00:37:40.193 [2024-12-16 12:59:05.983242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.193 [2024-12-16 12:59:05.983274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.193 qpair failed and we were unable to recover it. 00:37:40.193 [2024-12-16 12:59:05.983451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.193 [2024-12-16 12:59:05.983482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.193 qpair failed and we were unable to recover it. 00:37:40.193 [2024-12-16 12:59:05.983662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.193 [2024-12-16 12:59:05.983695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.193 qpair failed and we were unable to recover it. 00:37:40.193 [2024-12-16 12:59:05.983888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.193 [2024-12-16 12:59:05.983921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.193 qpair failed and we were unable to recover it. 00:37:40.193 [2024-12-16 12:59:05.984038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.193 [2024-12-16 12:59:05.984068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.193 qpair failed and we were unable to recover it. 00:37:40.193 [2024-12-16 12:59:05.984265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.193 [2024-12-16 12:59:05.984299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.193 qpair failed and we were unable to recover it. 00:37:40.193 [2024-12-16 12:59:05.984543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.193 [2024-12-16 12:59:05.984575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.193 qpair failed and we were unable to recover it. 00:37:40.193 [2024-12-16 12:59:05.984767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.193 [2024-12-16 12:59:05.984800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.193 qpair failed and we were unable to recover it. 00:37:40.193 [2024-12-16 12:59:05.984971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.193 [2024-12-16 12:59:05.985003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.193 qpair failed and we were unable to recover it. 00:37:40.193 [2024-12-16 12:59:05.985241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.193 [2024-12-16 12:59:05.985274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.193 qpair failed and we were unable to recover it. 
00:37:40.193 [2024-12-16 12:59:05.985384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.193 [2024-12-16 12:59:05.985414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.193 qpair failed and we were unable to recover it. 00:37:40.193 [2024-12-16 12:59:05.985590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.193 [2024-12-16 12:59:05.985620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.193 qpair failed and we were unable to recover it. 00:37:40.193 [2024-12-16 12:59:05.985854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.193 [2024-12-16 12:59:05.985886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.193 qpair failed and we were unable to recover it. 00:37:40.193 [2024-12-16 12:59:05.986012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.193 [2024-12-16 12:59:05.986050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.193 qpair failed and we were unable to recover it. 00:37:40.193 [2024-12-16 12:59:05.986239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.193 [2024-12-16 12:59:05.986272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.193 qpair failed and we were unable to recover it. 00:37:40.193 [2024-12-16 12:59:05.986457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.193 [2024-12-16 12:59:05.986490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.193 qpair failed and we were unable to recover it. 00:37:40.193 [2024-12-16 12:59:05.986666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.193 [2024-12-16 12:59:05.986698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.193 qpair failed and we were unable to recover it. 00:37:40.193 [2024-12-16 12:59:05.986819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.193 [2024-12-16 12:59:05.986849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.193 qpair failed and we were unable to recover it. 00:37:40.193 [2024-12-16 12:59:05.987014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.193 [2024-12-16 12:59:05.987046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.193 qpair failed and we were unable to recover it. 00:37:40.193 [2024-12-16 12:59:05.987300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.193 [2024-12-16 12:59:05.987333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.193 qpair failed and we were unable to recover it. 
00:37:40.193 [2024-12-16 12:59:05.987529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.193 [2024-12-16 12:59:05.987561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.193 qpair failed and we were unable to recover it. 00:37:40.193 [2024-12-16 12:59:05.987748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.193 [2024-12-16 12:59:05.987779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.193 qpair failed and we were unable to recover it. 00:37:40.193 [2024-12-16 12:59:05.987902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.193 [2024-12-16 12:59:05.987932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.193 qpair failed and we were unable to recover it. 00:37:40.193 [2024-12-16 12:59:05.988038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.193 [2024-12-16 12:59:05.988068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.193 qpair failed and we were unable to recover it. 00:37:40.193 [2024-12-16 12:59:05.988265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.194 [2024-12-16 12:59:05.988298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.194 qpair failed and we were unable to recover it. 00:37:40.194 [2024-12-16 12:59:05.988512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.194 [2024-12-16 12:59:05.988543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.194 qpair failed and we were unable to recover it. 00:37:40.194 [2024-12-16 12:59:05.988716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.194 [2024-12-16 12:59:05.988746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.194 qpair failed and we were unable to recover it. 00:37:40.194 [2024-12-16 12:59:05.988927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.194 [2024-12-16 12:59:05.988958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.194 qpair failed and we were unable to recover it. 00:37:40.194 [2024-12-16 12:59:05.989138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.194 [2024-12-16 12:59:05.989171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.194 qpair failed and we were unable to recover it. 00:37:40.194 [2024-12-16 12:59:05.989357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.194 [2024-12-16 12:59:05.989389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.194 qpair failed and we were unable to recover it. 
00:37:40.194 [2024-12-16 12:59:05.989504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.194 [2024-12-16 12:59:05.989534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.194 qpair failed and we were unable to recover it. 00:37:40.194 [2024-12-16 12:59:05.989658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.194 [2024-12-16 12:59:05.989689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.194 qpair failed and we were unable to recover it. 00:37:40.194 [2024-12-16 12:59:05.989803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.194 [2024-12-16 12:59:05.989834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.194 qpair failed and we were unable to recover it. 00:37:40.194 [2024-12-16 12:59:05.990094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.194 [2024-12-16 12:59:05.990138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.194 qpair failed and we were unable to recover it. 00:37:40.194 [2024-12-16 12:59:05.990248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.194 [2024-12-16 12:59:05.990279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.194 qpair failed and we were unable to recover it. 00:37:40.194 [2024-12-16 12:59:05.990399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.194 [2024-12-16 12:59:05.990432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.194 qpair failed and we were unable to recover it. 00:37:40.194 [2024-12-16 12:59:05.990536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.194 [2024-12-16 12:59:05.990568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.194 qpair failed and we were unable to recover it. 00:37:40.194 [2024-12-16 12:59:05.990741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.194 [2024-12-16 12:59:05.990773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.194 qpair failed and we were unable to recover it. 00:37:40.194 [2024-12-16 12:59:05.990946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.194 [2024-12-16 12:59:05.990978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.194 qpair failed and we were unable to recover it. 00:37:40.194 [2024-12-16 12:59:05.991095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.194 [2024-12-16 12:59:05.991147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.194 qpair failed and we were unable to recover it. 
00:37:40.194 [2024-12-16 12:59:05.991257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.194 [2024-12-16 12:59:05.991296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.194 qpair failed and we were unable to recover it.
00:37:40.194 [last 3 messages repeated for every subsequent reconnect attempt, about 200 repetitions between 12:59:05.991 and 12:59:06.032, all against tqpair=0x1588110 (addr=10.0.0.2, port=4420) and all failing with errno = 111, ending with:]
00:37:40.200 [2024-12-16 12:59:06.032080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.200 [2024-12-16 12:59:06.032111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.200 qpair failed and we were unable to recover it.
00:37:40.200 [2024-12-16 12:59:06.032223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.200 [2024-12-16 12:59:06.032254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.200 qpair failed and we were unable to recover it. 00:37:40.200 [2024-12-16 12:59:06.032462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.200 [2024-12-16 12:59:06.032494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.200 qpair failed and we were unable to recover it. 00:37:40.200 [2024-12-16 12:59:06.032597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.200 [2024-12-16 12:59:06.032629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.200 qpair failed and we were unable to recover it. 00:37:40.200 [2024-12-16 12:59:06.032814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.200 [2024-12-16 12:59:06.032846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.200 qpair failed and we were unable to recover it. 00:37:40.200 [2024-12-16 12:59:06.033104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.200 [2024-12-16 12:59:06.033142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.200 qpair failed and we were unable to recover it. 00:37:40.200 [2024-12-16 12:59:06.033355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.200 [2024-12-16 12:59:06.033387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.200 qpair failed and we were unable to recover it. 00:37:40.200 [2024-12-16 12:59:06.033645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.200 [2024-12-16 12:59:06.033676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.200 qpair failed and we were unable to recover it. 00:37:40.200 [2024-12-16 12:59:06.033797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.200 [2024-12-16 12:59:06.033834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.200 qpair failed and we were unable to recover it. 00:37:40.200 [2024-12-16 12:59:06.033959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.200 [2024-12-16 12:59:06.033990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.200 qpair failed and we were unable to recover it. 00:37:40.200 [2024-12-16 12:59:06.034241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.200 [2024-12-16 12:59:06.034273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.200 qpair failed and we were unable to recover it. 
00:37:40.200 [2024-12-16 12:59:06.034390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.200 [2024-12-16 12:59:06.034423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.200 qpair failed and we were unable to recover it. 00:37:40.200 [2024-12-16 12:59:06.034619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.200 [2024-12-16 12:59:06.034651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.200 qpair failed and we were unable to recover it. 00:37:40.200 [2024-12-16 12:59:06.034765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.200 [2024-12-16 12:59:06.034796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.200 qpair failed and we were unable to recover it. 00:37:40.200 [2024-12-16 12:59:06.034931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.200 [2024-12-16 12:59:06.034963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.200 qpair failed and we were unable to recover it. 00:37:40.200 [2024-12-16 12:59:06.035233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.200 [2024-12-16 12:59:06.035266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.200 qpair failed and we were unable to recover it. 00:37:40.200 [2024-12-16 12:59:06.035390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.200 [2024-12-16 12:59:06.035422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.200 qpair failed and we were unable to recover it. 00:37:40.200 [2024-12-16 12:59:06.035736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.200 [2024-12-16 12:59:06.035768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.200 qpair failed and we were unable to recover it. 00:37:40.200 [2024-12-16 12:59:06.035976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.200 [2024-12-16 12:59:06.036007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.200 qpair failed and we were unable to recover it. 00:37:40.200 [2024-12-16 12:59:06.036272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.200 [2024-12-16 12:59:06.036306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.200 qpair failed and we were unable to recover it. 00:37:40.200 [2024-12-16 12:59:06.036580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.200 [2024-12-16 12:59:06.036612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.200 qpair failed and we were unable to recover it. 
00:37:40.200 [2024-12-16 12:59:06.036731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.200 [2024-12-16 12:59:06.036762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.200 qpair failed and we were unable to recover it. 00:37:40.200 [2024-12-16 12:59:06.036953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.200 [2024-12-16 12:59:06.036986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.200 qpair failed and we were unable to recover it. 00:37:40.200 [2024-12-16 12:59:06.037198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.200 [2024-12-16 12:59:06.037231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.200 qpair failed and we were unable to recover it. 00:37:40.200 [2024-12-16 12:59:06.037400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.200 [2024-12-16 12:59:06.037432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.200 qpair failed and we were unable to recover it. 00:37:40.200 [2024-12-16 12:59:06.037573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.200 [2024-12-16 12:59:06.037604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.200 qpair failed and we were unable to recover it. 00:37:40.200 [2024-12-16 12:59:06.037801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.200 [2024-12-16 12:59:06.037833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.200 qpair failed and we were unable to recover it. 00:37:40.200 [2024-12-16 12:59:06.038024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.200 [2024-12-16 12:59:06.038055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.200 qpair failed and we were unable to recover it. 00:37:40.200 [2024-12-16 12:59:06.038229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.200 [2024-12-16 12:59:06.038262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.200 qpair failed and we were unable to recover it. 00:37:40.200 [2024-12-16 12:59:06.038515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.200 [2024-12-16 12:59:06.038546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.200 qpair failed and we were unable to recover it. 00:37:40.200 [2024-12-16 12:59:06.038717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.200 [2024-12-16 12:59:06.038748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.200 qpair failed and we were unable to recover it. 
00:37:40.200 [2024-12-16 12:59:06.038987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.200 [2024-12-16 12:59:06.039019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.200 qpair failed and we were unable to recover it. 00:37:40.200 [2024-12-16 12:59:06.039283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.200 [2024-12-16 12:59:06.039315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.200 qpair failed and we were unable to recover it. 00:37:40.200 [2024-12-16 12:59:06.039484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.200 [2024-12-16 12:59:06.039515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.201 qpair failed and we were unable to recover it. 00:37:40.201 [2024-12-16 12:59:06.039745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.201 [2024-12-16 12:59:06.039777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.201 qpair failed and we were unable to recover it. 00:37:40.201 [2024-12-16 12:59:06.039895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.201 [2024-12-16 12:59:06.039926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.201 qpair failed and we were unable to recover it. 00:37:40.201 [2024-12-16 12:59:06.040189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.201 [2024-12-16 12:59:06.040222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.201 qpair failed and we were unable to recover it. 00:37:40.201 [2024-12-16 12:59:06.040392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.201 [2024-12-16 12:59:06.040424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.201 qpair failed and we were unable to recover it. 00:37:40.201 [2024-12-16 12:59:06.040604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.201 [2024-12-16 12:59:06.040635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.201 qpair failed and we were unable to recover it. 00:37:40.201 [2024-12-16 12:59:06.040770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.201 [2024-12-16 12:59:06.040802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.201 qpair failed and we were unable to recover it. 00:37:40.201 [2024-12-16 12:59:06.041004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.201 [2024-12-16 12:59:06.041036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.201 qpair failed and we were unable to recover it. 
00:37:40.201 [2024-12-16 12:59:06.041169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.201 [2024-12-16 12:59:06.041202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.201 qpair failed and we were unable to recover it. 00:37:40.201 [2024-12-16 12:59:06.041394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.201 [2024-12-16 12:59:06.041425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.201 qpair failed and we were unable to recover it. 00:37:40.201 [2024-12-16 12:59:06.041692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.201 [2024-12-16 12:59:06.041724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.201 qpair failed and we were unable to recover it. 00:37:40.201 [2024-12-16 12:59:06.041983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.201 [2024-12-16 12:59:06.042014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.201 qpair failed and we were unable to recover it. 00:37:40.201 [2024-12-16 12:59:06.042125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.201 [2024-12-16 12:59:06.042157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.201 qpair failed and we were unable to recover it. 00:37:40.201 [2024-12-16 12:59:06.042326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.201 [2024-12-16 12:59:06.042358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.201 qpair failed and we were unable to recover it. 00:37:40.201 [2024-12-16 12:59:06.042539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.201 [2024-12-16 12:59:06.042570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.201 qpair failed and we were unable to recover it. 00:37:40.201 [2024-12-16 12:59:06.042683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.201 [2024-12-16 12:59:06.042715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.201 qpair failed and we were unable to recover it. 00:37:40.201 [2024-12-16 12:59:06.042839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.201 [2024-12-16 12:59:06.042872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.201 qpair failed and we were unable to recover it. 00:37:40.201 [2024-12-16 12:59:06.043110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.201 [2024-12-16 12:59:06.043168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.201 qpair failed and we were unable to recover it. 
00:37:40.201 [2024-12-16 12:59:06.043406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.201 [2024-12-16 12:59:06.043439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.201 qpair failed and we were unable to recover it. 00:37:40.201 [2024-12-16 12:59:06.043720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.201 [2024-12-16 12:59:06.043751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.201 qpair failed and we were unable to recover it. 00:37:40.201 [2024-12-16 12:59:06.043927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.201 [2024-12-16 12:59:06.043958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.201 qpair failed and we were unable to recover it. 00:37:40.201 [2024-12-16 12:59:06.044167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.201 [2024-12-16 12:59:06.044200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.201 qpair failed and we were unable to recover it. 00:37:40.201 [2024-12-16 12:59:06.044374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.201 [2024-12-16 12:59:06.044404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.201 qpair failed and we were unable to recover it. 00:37:40.201 [2024-12-16 12:59:06.044615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.201 [2024-12-16 12:59:06.044646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.201 qpair failed and we were unable to recover it. 00:37:40.201 [2024-12-16 12:59:06.044765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.201 [2024-12-16 12:59:06.044796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.201 qpair failed and we were unable to recover it. 00:37:40.201 [2024-12-16 12:59:06.044966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.201 [2024-12-16 12:59:06.044997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.201 qpair failed and we were unable to recover it. 00:37:40.201 [2024-12-16 12:59:06.045103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.201 [2024-12-16 12:59:06.045144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.201 qpair failed and we were unable to recover it. 00:37:40.201 [2024-12-16 12:59:06.045420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.201 [2024-12-16 12:59:06.045453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.201 qpair failed and we were unable to recover it. 
00:37:40.201 [2024-12-16 12:59:06.045636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.201 [2024-12-16 12:59:06.045668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.201 qpair failed and we were unable to recover it. 00:37:40.201 [2024-12-16 12:59:06.045857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.201 [2024-12-16 12:59:06.045889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.201 qpair failed and we were unable to recover it. 00:37:40.201 [2024-12-16 12:59:06.045998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.201 [2024-12-16 12:59:06.046029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.201 qpair failed and we were unable to recover it. 00:37:40.201 [2024-12-16 12:59:06.046270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.201 [2024-12-16 12:59:06.046303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.201 qpair failed and we were unable to recover it. 00:37:40.201 [2024-12-16 12:59:06.046440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.201 [2024-12-16 12:59:06.046472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.201 qpair failed and we were unable to recover it. 00:37:40.201 [2024-12-16 12:59:06.046717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.201 [2024-12-16 12:59:06.046748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.201 qpair failed and we were unable to recover it. 00:37:40.201 [2024-12-16 12:59:06.046923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.201 [2024-12-16 12:59:06.046954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.201 qpair failed and we were unable to recover it. 00:37:40.201 [2024-12-16 12:59:06.047150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.201 [2024-12-16 12:59:06.047183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.201 qpair failed and we were unable to recover it. 00:37:40.202 [2024-12-16 12:59:06.047310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.202 [2024-12-16 12:59:06.047341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.202 qpair failed and we were unable to recover it. 00:37:40.202 [2024-12-16 12:59:06.047527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.202 [2024-12-16 12:59:06.047559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.202 qpair failed and we were unable to recover it. 
00:37:40.202 [2024-12-16 12:59:06.047798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.202 [2024-12-16 12:59:06.047830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.202 qpair failed and we were unable to recover it. 00:37:40.202 [2024-12-16 12:59:06.048026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.202 [2024-12-16 12:59:06.048057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.202 qpair failed and we were unable to recover it. 00:37:40.202 [2024-12-16 12:59:06.048308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.202 [2024-12-16 12:59:06.048342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.202 qpair failed and we were unable to recover it. 00:37:40.202 [2024-12-16 12:59:06.048512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.202 [2024-12-16 12:59:06.048544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.202 qpair failed and we were unable to recover it. 00:37:40.202 [2024-12-16 12:59:06.048664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.202 [2024-12-16 12:59:06.048694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.202 qpair failed and we were unable to recover it. 00:37:40.202 [2024-12-16 12:59:06.048884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.202 [2024-12-16 12:59:06.048921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.202 qpair failed and we were unable to recover it. 00:37:40.202 [2024-12-16 12:59:06.049137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.202 [2024-12-16 12:59:06.049171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.202 qpair failed and we were unable to recover it. 00:37:40.202 [2024-12-16 12:59:06.049436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.202 [2024-12-16 12:59:06.049467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.202 qpair failed and we were unable to recover it. 00:37:40.202 [2024-12-16 12:59:06.049582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.202 [2024-12-16 12:59:06.049613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.202 qpair failed and we were unable to recover it. 00:37:40.202 [2024-12-16 12:59:06.049751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.202 [2024-12-16 12:59:06.049783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.202 qpair failed and we were unable to recover it. 
00:37:40.202 [2024-12-16 12:59:06.049898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.202 [2024-12-16 12:59:06.049929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.202 qpair failed and we were unable to recover it. 00:37:40.202 [2024-12-16 12:59:06.050053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.202 [2024-12-16 12:59:06.050085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.202 qpair failed and we were unable to recover it. 00:37:40.202 [2024-12-16 12:59:06.050280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.202 [2024-12-16 12:59:06.050313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.202 qpair failed and we were unable to recover it. 00:37:40.202 [2024-12-16 12:59:06.050566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.202 [2024-12-16 12:59:06.050597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.202 qpair failed and we were unable to recover it. 00:37:40.202 [2024-12-16 12:59:06.050834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.202 [2024-12-16 12:59:06.050865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.202 qpair failed and we were unable to recover it. 00:37:40.202 [2024-12-16 12:59:06.051049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.202 [2024-12-16 12:59:06.051081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.202 qpair failed and we were unable to recover it. 00:37:40.202 [2024-12-16 12:59:06.051255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.202 [2024-12-16 12:59:06.051302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.202 qpair failed and we were unable to recover it. 00:37:40.202 [2024-12-16 12:59:06.051485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.202 [2024-12-16 12:59:06.051518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.202 qpair failed and we were unable to recover it. 00:37:40.202 [2024-12-16 12:59:06.051705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.202 [2024-12-16 12:59:06.051737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.202 qpair failed and we were unable to recover it. 00:37:40.202 [2024-12-16 12:59:06.051876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.202 [2024-12-16 12:59:06.051908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.202 qpair failed and we were unable to recover it. 
00:37:40.202 [2024-12-16 12:59:06.052124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.202 [2024-12-16 12:59:06.052158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.202 qpair failed and we were unable to recover it. 00:37:40.202 [2024-12-16 12:59:06.052267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.202 [2024-12-16 12:59:06.052299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.202 qpair failed and we were unable to recover it. 00:37:40.202 [2024-12-16 12:59:06.052515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.202 [2024-12-16 12:59:06.052547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.202 qpair failed and we were unable to recover it. 00:37:40.202 [2024-12-16 12:59:06.052673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.202 [2024-12-16 12:59:06.052705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.202 qpair failed and we were unable to recover it. 00:37:40.202 [2024-12-16 12:59:06.052890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.202 [2024-12-16 12:59:06.052922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.202 qpair failed and we were unable to recover it. 00:37:40.202 [2024-12-16 12:59:06.053125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.202 [2024-12-16 12:59:06.053157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.202 qpair failed and we were unable to recover it. 00:37:40.202 [2024-12-16 12:59:06.053346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.202 [2024-12-16 12:59:06.053378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.202 qpair failed and we were unable to recover it. 00:37:40.202 [2024-12-16 12:59:06.053551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.202 [2024-12-16 12:59:06.053583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.202 qpair failed and we were unable to recover it. 00:37:40.202 [2024-12-16 12:59:06.053820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.202 [2024-12-16 12:59:06.053851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.202 qpair failed and we were unable to recover it. 00:37:40.202 [2024-12-16 12:59:06.053968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.202 [2024-12-16 12:59:06.054000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.202 qpair failed and we were unable to recover it. 
00:37:40.202 [2024-12-16 12:59:06.054248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.202 [2024-12-16 12:59:06.054282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.202 qpair failed and we were unable to recover it. 00:37:40.202 [2024-12-16 12:59:06.054406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.202 [2024-12-16 12:59:06.054437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.202 qpair failed and we were unable to recover it. 00:37:40.203 [2024-12-16 12:59:06.054702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.203 [2024-12-16 12:59:06.054740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.203 qpair failed and we were unable to recover it. 00:37:40.203 [2024-12-16 12:59:06.054845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.203 [2024-12-16 12:59:06.054880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.203 qpair failed and we were unable to recover it. 00:37:40.203 [2024-12-16 12:59:06.055093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.203 [2024-12-16 12:59:06.055132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.203 qpair failed and we were unable to recover it. 00:37:40.203 [2024-12-16 12:59:06.055240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.203 [2024-12-16 12:59:06.055272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.203 qpair failed and we were unable to recover it. 00:37:40.203 [2024-12-16 12:59:06.055478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.203 [2024-12-16 12:59:06.055510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.203 qpair failed and we were unable to recover it. 00:37:40.203 [2024-12-16 12:59:06.055716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.203 [2024-12-16 12:59:06.055748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.203 qpair failed and we were unable to recover it. 00:37:40.203 [2024-12-16 12:59:06.055873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.203 [2024-12-16 12:59:06.055905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.203 qpair failed and we were unable to recover it. 00:37:40.203 [2024-12-16 12:59:06.056021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.203 [2024-12-16 12:59:06.056053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.203 qpair failed and we were unable to recover it. 
00:37:40.203 [2024-12-16 12:59:06.056168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.203 [2024-12-16 12:59:06.056201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.203 qpair failed and we were unable to recover it. 00:37:40.203 [2024-12-16 12:59:06.056466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.203 [2024-12-16 12:59:06.056498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.203 qpair failed and we were unable to recover it. 00:37:40.203 [2024-12-16 12:59:06.056689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.203 [2024-12-16 12:59:06.056722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.203 qpair failed and we were unable to recover it. 00:37:40.203 [2024-12-16 12:59:06.056858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.203 [2024-12-16 12:59:06.056889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.203 qpair failed and we were unable to recover it. 00:37:40.203 [2024-12-16 12:59:06.057130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.203 [2024-12-16 12:59:06.057163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.203 qpair failed and we were unable to recover it. 00:37:40.203 [2024-12-16 12:59:06.057281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.203 [2024-12-16 12:59:06.057313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.203 qpair failed and we were unable to recover it. 00:37:40.203 [2024-12-16 12:59:06.057532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.203 [2024-12-16 12:59:06.057564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.203 qpair failed and we were unable to recover it. 00:37:40.203 [2024-12-16 12:59:06.057806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.203 [2024-12-16 12:59:06.057838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.203 qpair failed and we were unable to recover it. 00:37:40.203 [2024-12-16 12:59:06.058009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.203 [2024-12-16 12:59:06.058041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.203 qpair failed and we were unable to recover it. 00:37:40.203 [2024-12-16 12:59:06.058302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.203 [2024-12-16 12:59:06.058336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.203 qpair failed and we were unable to recover it. 
00:37:40.203 [2024-12-16 12:59:06.058507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.203 [2024-12-16 12:59:06.058540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.203 qpair failed and we were unable to recover it. 00:37:40.203 [2024-12-16 12:59:06.058772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.203 [2024-12-16 12:59:06.058805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.203 qpair failed and we were unable to recover it. 00:37:40.203 [2024-12-16 12:59:06.058944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.203 [2024-12-16 12:59:06.058976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.203 qpair failed and we were unable to recover it. 00:37:40.203 [2024-12-16 12:59:06.059104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.203 [2024-12-16 12:59:06.059165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.203 qpair failed and we were unable to recover it. 00:37:40.203 [2024-12-16 12:59:06.059360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.203 [2024-12-16 12:59:06.059392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.203 qpair failed and we were unable to recover it. 00:37:40.203 [2024-12-16 12:59:06.059581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.203 [2024-12-16 12:59:06.059613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.203 qpair failed and we were unable to recover it. 00:37:40.203 [2024-12-16 12:59:06.059856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.203 [2024-12-16 12:59:06.059887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.203 qpair failed and we were unable to recover it. 00:37:40.203 [2024-12-16 12:59:06.060067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.203 [2024-12-16 12:59:06.060099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.203 qpair failed and we were unable to recover it. 00:37:40.203 [2024-12-16 12:59:06.060229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.203 [2024-12-16 12:59:06.060261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.203 qpair failed and we were unable to recover it. 00:37:40.203 [2024-12-16 12:59:06.060500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.203 [2024-12-16 12:59:06.060537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.203 qpair failed and we were unable to recover it. 
00:37:40.203 [2024-12-16 12:59:06.060712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.203 [2024-12-16 12:59:06.060743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.203 qpair failed and we were unable to recover it.
[... the same three-line connect()/qpair error repeats back-to-back for tqpair=0x1588110, identical except for timestamps (12:59:06.060712 through 12:59:06.103086); duplicates elided ...]
00:37:40.209 [2024-12-16 12:59:06.103282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.209 [2024-12-16 12:59:06.103347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.209 qpair failed and we were unable to recover it.
00:37:40.209 [2024-12-16 12:59:06.103694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.209 [2024-12-16 12:59:06.103762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420
00:37:40.209 qpair failed and we were unable to recover it.
00:37:40.209 [2024-12-16 12:59:06.103990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.209 [2024-12-16 12:59:06.104026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.209 qpair failed and we were unable to recover it. 00:37:40.209 [2024-12-16 12:59:06.104216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.209 [2024-12-16 12:59:06.104250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.209 qpair failed and we were unable to recover it. 00:37:40.209 [2024-12-16 12:59:06.104441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.209 [2024-12-16 12:59:06.104474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.209 qpair failed and we were unable to recover it. 00:37:40.209 [2024-12-16 12:59:06.104657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.209 [2024-12-16 12:59:06.104689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.209 qpair failed and we were unable to recover it. 00:37:40.209 [2024-12-16 12:59:06.104823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.209 [2024-12-16 12:59:06.104855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.209 qpair failed and we were unable to recover it. 00:37:40.209 [2024-12-16 12:59:06.105127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.209 [2024-12-16 12:59:06.105161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.209 qpair failed and we were unable to recover it. 00:37:40.209 [2024-12-16 12:59:06.105282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.209 [2024-12-16 12:59:06.105314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.209 qpair failed and we were unable to recover it. 00:37:40.209 [2024-12-16 12:59:06.105554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.209 [2024-12-16 12:59:06.105586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.209 qpair failed and we were unable to recover it. 00:37:40.209 [2024-12-16 12:59:06.105762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.209 [2024-12-16 12:59:06.105794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.209 qpair failed and we were unable to recover it. 00:37:40.209 [2024-12-16 12:59:06.105894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.209 [2024-12-16 12:59:06.105925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.209 qpair failed and we were unable to recover it. 
00:37:40.209 [2024-12-16 12:59:06.106168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.209 [2024-12-16 12:59:06.106205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.209 qpair failed and we were unable to recover it. 00:37:40.209 [2024-12-16 12:59:06.106384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.209 [2024-12-16 12:59:06.106417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.209 qpair failed and we were unable to recover it. 00:37:40.209 [2024-12-16 12:59:06.106604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.209 [2024-12-16 12:59:06.106637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.209 qpair failed and we were unable to recover it. 00:37:40.209 [2024-12-16 12:59:06.106847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.209 [2024-12-16 12:59:06.106880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.209 qpair failed and we were unable to recover it. 00:37:40.209 [2024-12-16 12:59:06.107064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.209 [2024-12-16 12:59:06.107097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.209 qpair failed and we were unable to recover it. 00:37:40.209 [2024-12-16 12:59:06.107402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.209 [2024-12-16 12:59:06.107434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.209 qpair failed and we were unable to recover it. 00:37:40.209 [2024-12-16 12:59:06.107581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.209 [2024-12-16 12:59:06.107617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.209 qpair failed and we were unable to recover it. 00:37:40.209 [2024-12-16 12:59:06.107858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.209 [2024-12-16 12:59:06.107889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.209 qpair failed and we were unable to recover it. 00:37:40.209 [2024-12-16 12:59:06.108027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.209 [2024-12-16 12:59:06.108060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.209 qpair failed and we were unable to recover it. 00:37:40.209 [2024-12-16 12:59:06.108249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.209 [2024-12-16 12:59:06.108282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.209 qpair failed and we were unable to recover it. 
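The errno = 111 reported by posix_sock_create() is ECONNREFUSED on Linux: every TCP connect() to 10.0.0.2 on port 4420 (the standard NVMe/TCP port) is being refused, which for TCP normally means nothing is listening on that port at the target, so each SYN is answered with a RST. The failure mode is easy to reproduce outside SPDK; the following is a minimal standalone C sketch, illustrative only (this is not SPDK's posix.c, and 127.0.0.1 is a placeholder for any host with no listener on the port):

/* Minimal sketch of the failure in this log: a TCP connect() to a port
 * with no listener fails with errno = 111 (ECONNREFUSED). */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);              /* standard NVMe/TCP port */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With nothing listening on the port this prints:
         *   connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}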
00:37:40.213 [... the same "connect() failed, errno = 111" / "qpair failed and we were unable to recover it." triplet repeats for tqpair=0x1588110, addr=10.0.0.2, port=4420, from 12:59:06.107858 through 12:59:06.143958 ...]
00:37:40.213 [2024-12-16 12:59:06.144166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.213 [2024-12-16 12:59:06.144200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.213 qpair failed and we were unable to recover it. 00:37:40.213 [2024-12-16 12:59:06.144374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.213 [2024-12-16 12:59:06.144406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.213 qpair failed and we were unable to recover it. 00:37:40.213 [2024-12-16 12:59:06.144644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.213 [2024-12-16 12:59:06.144675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.213 qpair failed and we were unable to recover it. 00:37:40.213 [2024-12-16 12:59:06.144915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.213 [2024-12-16 12:59:06.144948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.213 qpair failed and we were unable to recover it. 00:37:40.213 [2024-12-16 12:59:06.145211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.213 [2024-12-16 12:59:06.145244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.213 qpair failed and we were unable to recover it. 00:37:40.214 [2024-12-16 12:59:06.145435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.214 [2024-12-16 12:59:06.145467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.214 qpair failed and we were unable to recover it. 00:37:40.214 [2024-12-16 12:59:06.145640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.214 [2024-12-16 12:59:06.145672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.214 qpair failed and we were unable to recover it. 00:37:40.214 [2024-12-16 12:59:06.145860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.214 [2024-12-16 12:59:06.145893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.214 qpair failed and we were unable to recover it. 00:37:40.214 [2024-12-16 12:59:06.146097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.214 [2024-12-16 12:59:06.146137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.214 qpair failed and we were unable to recover it. 00:37:40.214 [2024-12-16 12:59:06.146390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.214 [2024-12-16 12:59:06.146422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.214 qpair failed and we were unable to recover it. 
00:37:40.214 [2024-12-16 12:59:06.146623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.214 [2024-12-16 12:59:06.146655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.214 qpair failed and we were unable to recover it. 00:37:40.214 [2024-12-16 12:59:06.146776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.214 [2024-12-16 12:59:06.146808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.214 qpair failed and we were unable to recover it. 00:37:40.214 [2024-12-16 12:59:06.147000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.214 [2024-12-16 12:59:06.147038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.214 qpair failed and we were unable to recover it. 00:37:40.214 [2024-12-16 12:59:06.147224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.214 [2024-12-16 12:59:06.147259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.214 qpair failed and we were unable to recover it. 00:37:40.214 [2024-12-16 12:59:06.147374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.214 [2024-12-16 12:59:06.147406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.214 qpair failed and we were unable to recover it. 00:37:40.214 [2024-12-16 12:59:06.147575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.214 [2024-12-16 12:59:06.147607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.214 qpair failed and we were unable to recover it. 00:37:40.214 [2024-12-16 12:59:06.147846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.214 [2024-12-16 12:59:06.147878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.214 qpair failed and we were unable to recover it. 00:37:40.214 [2024-12-16 12:59:06.147980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.214 [2024-12-16 12:59:06.148010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.214 qpair failed and we were unable to recover it. 00:37:40.214 [2024-12-16 12:59:06.148184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.214 [2024-12-16 12:59:06.148218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.214 qpair failed and we were unable to recover it. 00:37:40.214 [2024-12-16 12:59:06.148485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.214 [2024-12-16 12:59:06.148518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.214 qpair failed and we were unable to recover it. 
00:37:40.214 [2024-12-16 12:59:06.148781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.214 [2024-12-16 12:59:06.148814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.214 qpair failed and we were unable to recover it. 00:37:40.214 [2024-12-16 12:59:06.148982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.214 [2024-12-16 12:59:06.149014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.214 qpair failed and we were unable to recover it. 00:37:40.214 [2024-12-16 12:59:06.149198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.214 [2024-12-16 12:59:06.149232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.214 qpair failed and we were unable to recover it. 00:37:40.214 [2024-12-16 12:59:06.149420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.214 [2024-12-16 12:59:06.149451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.214 qpair failed and we were unable to recover it. 00:37:40.214 [2024-12-16 12:59:06.149654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.214 [2024-12-16 12:59:06.149685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.214 qpair failed and we were unable to recover it. 00:37:40.214 [2024-12-16 12:59:06.149811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.214 [2024-12-16 12:59:06.149844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.214 qpair failed and we were unable to recover it. 00:37:40.214 [2024-12-16 12:59:06.150023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.214 [2024-12-16 12:59:06.150056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.214 qpair failed and we were unable to recover it. 00:37:40.214 [2024-12-16 12:59:06.150325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.214 [2024-12-16 12:59:06.150358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.214 qpair failed and we were unable to recover it. 00:37:40.214 [2024-12-16 12:59:06.150531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.214 [2024-12-16 12:59:06.150563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.214 qpair failed and we were unable to recover it. 00:37:40.214 [2024-12-16 12:59:06.150686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.214 [2024-12-16 12:59:06.150717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.214 qpair failed and we were unable to recover it. 
00:37:40.214 [2024-12-16 12:59:06.150908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.214 [2024-12-16 12:59:06.150940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.214 qpair failed and we were unable to recover it. 00:37:40.214 [2024-12-16 12:59:06.151140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.214 [2024-12-16 12:59:06.151174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.214 qpair failed and we were unable to recover it. 00:37:40.214 [2024-12-16 12:59:06.151415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.214 [2024-12-16 12:59:06.151446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.214 qpair failed and we were unable to recover it. 00:37:40.214 [2024-12-16 12:59:06.151567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.214 [2024-12-16 12:59:06.151599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.214 qpair failed and we were unable to recover it. 00:37:40.214 [2024-12-16 12:59:06.151796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.214 [2024-12-16 12:59:06.151828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.214 qpair failed and we were unable to recover it. 00:37:40.214 [2024-12-16 12:59:06.151999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.214 [2024-12-16 12:59:06.152031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.214 qpair failed and we were unable to recover it. 00:37:40.214 [2024-12-16 12:59:06.152268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.214 [2024-12-16 12:59:06.152302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.214 qpair failed and we were unable to recover it. 00:37:40.214 [2024-12-16 12:59:06.152497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.214 [2024-12-16 12:59:06.152529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.214 qpair failed and we were unable to recover it. 00:37:40.214 [2024-12-16 12:59:06.152664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.214 [2024-12-16 12:59:06.152696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.214 qpair failed and we were unable to recover it. 00:37:40.214 [2024-12-16 12:59:06.152983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.214 [2024-12-16 12:59:06.153015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.214 qpair failed and we were unable to recover it. 
00:37:40.214 [2024-12-16 12:59:06.153226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.214 [2024-12-16 12:59:06.153260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.214 qpair failed and we were unable to recover it. 00:37:40.214 [2024-12-16 12:59:06.153447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.214 [2024-12-16 12:59:06.153479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.214 qpair failed and we were unable to recover it. 00:37:40.214 [2024-12-16 12:59:06.153676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.214 [2024-12-16 12:59:06.153708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.214 qpair failed and we were unable to recover it. 00:37:40.214 [2024-12-16 12:59:06.153949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.214 [2024-12-16 12:59:06.153981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.214 qpair failed and we were unable to recover it. 00:37:40.214 [2024-12-16 12:59:06.154110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.215 [2024-12-16 12:59:06.154151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.215 qpair failed and we were unable to recover it. 00:37:40.215 [2024-12-16 12:59:06.154333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.215 [2024-12-16 12:59:06.154365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.215 qpair failed and we were unable to recover it. 00:37:40.215 [2024-12-16 12:59:06.154602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.215 [2024-12-16 12:59:06.154634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.215 qpair failed and we were unable to recover it. 00:37:40.215 [2024-12-16 12:59:06.154823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.215 [2024-12-16 12:59:06.154855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.215 qpair failed and we were unable to recover it. 00:37:40.215 [2024-12-16 12:59:06.154978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.215 [2024-12-16 12:59:06.155010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.215 qpair failed and we were unable to recover it. 00:37:40.215 [2024-12-16 12:59:06.155344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.215 [2024-12-16 12:59:06.155380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.215 qpair failed and we were unable to recover it. 
00:37:40.215 [2024-12-16 12:59:06.155517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.215 [2024-12-16 12:59:06.155550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.215 qpair failed and we were unable to recover it. 00:37:40.215 [2024-12-16 12:59:06.155791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.215 [2024-12-16 12:59:06.155823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.215 qpair failed and we were unable to recover it. 00:37:40.215 [2024-12-16 12:59:06.156030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.215 [2024-12-16 12:59:06.156062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.215 qpair failed and we were unable to recover it. 00:37:40.215 [2024-12-16 12:59:06.156192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.215 [2024-12-16 12:59:06.156226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.215 qpair failed and we were unable to recover it. 00:37:40.215 [2024-12-16 12:59:06.156365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.215 [2024-12-16 12:59:06.156396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.215 qpair failed and we were unable to recover it. 00:37:40.215 [2024-12-16 12:59:06.156655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.215 [2024-12-16 12:59:06.156687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.215 qpair failed and we were unable to recover it. 00:37:40.215 [2024-12-16 12:59:06.156791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.215 [2024-12-16 12:59:06.156822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.215 qpair failed and we were unable to recover it. 00:37:40.215 [2024-12-16 12:59:06.157064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.215 [2024-12-16 12:59:06.157096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.215 qpair failed and we were unable to recover it. 00:37:40.215 [2024-12-16 12:59:06.157287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.215 [2024-12-16 12:59:06.157320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.215 qpair failed and we were unable to recover it. 00:37:40.215 [2024-12-16 12:59:06.157599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.215 [2024-12-16 12:59:06.157631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.215 qpair failed and we were unable to recover it. 
00:37:40.215 [2024-12-16 12:59:06.157871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.215 [2024-12-16 12:59:06.157904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.215 qpair failed and we were unable to recover it. 00:37:40.215 [2024-12-16 12:59:06.158086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.215 [2024-12-16 12:59:06.158126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.215 qpair failed and we were unable to recover it. 00:37:40.215 [2024-12-16 12:59:06.158325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.215 [2024-12-16 12:59:06.158358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.215 qpair failed and we were unable to recover it. 00:37:40.215 [2024-12-16 12:59:06.158506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.215 [2024-12-16 12:59:06.158539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.215 qpair failed and we were unable to recover it. 00:37:40.215 [2024-12-16 12:59:06.158859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.215 [2024-12-16 12:59:06.158891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.215 qpair failed and we were unable to recover it. 00:37:40.215 [2024-12-16 12:59:06.159070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.215 [2024-12-16 12:59:06.159102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.215 qpair failed and we were unable to recover it. 00:37:40.215 [2024-12-16 12:59:06.159221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.215 [2024-12-16 12:59:06.159254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.215 qpair failed and we were unable to recover it. 00:37:40.215 [2024-12-16 12:59:06.159451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.215 [2024-12-16 12:59:06.159483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.215 qpair failed and we were unable to recover it. 00:37:40.215 [2024-12-16 12:59:06.159624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.215 [2024-12-16 12:59:06.159656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.215 qpair failed and we were unable to recover it. 00:37:40.215 [2024-12-16 12:59:06.159830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.215 [2024-12-16 12:59:06.159863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.215 qpair failed and we were unable to recover it. 
00:37:40.215 [2024-12-16 12:59:06.159962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.215 [2024-12-16 12:59:06.159994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.215 qpair failed and we were unable to recover it. 00:37:40.215 [2024-12-16 12:59:06.160257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.215 [2024-12-16 12:59:06.160291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.215 qpair failed and we were unable to recover it. 00:37:40.215 [2024-12-16 12:59:06.160460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.215 [2024-12-16 12:59:06.160493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.215 qpair failed and we were unable to recover it. 00:37:40.215 [2024-12-16 12:59:06.160688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.215 [2024-12-16 12:59:06.160720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.215 qpair failed and we were unable to recover it. 00:37:40.215 [2024-12-16 12:59:06.160933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.215 [2024-12-16 12:59:06.160965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.215 qpair failed and we were unable to recover it. 00:37:40.215 [2024-12-16 12:59:06.161187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.215 [2024-12-16 12:59:06.161220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.215 qpair failed and we were unable to recover it. 00:37:40.215 [2024-12-16 12:59:06.161498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.215 [2024-12-16 12:59:06.161530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.215 qpair failed and we were unable to recover it. 00:37:40.215 [2024-12-16 12:59:06.161784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.215 [2024-12-16 12:59:06.161816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.216 qpair failed and we were unable to recover it. 00:37:40.216 [2024-12-16 12:59:06.161988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.216 [2024-12-16 12:59:06.162020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.216 qpair failed and we were unable to recover it. 00:37:40.216 [2024-12-16 12:59:06.162132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.216 [2024-12-16 12:59:06.162166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.216 qpair failed and we were unable to recover it. 
00:37:40.216 [2024-12-16 12:59:06.162338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.216 [2024-12-16 12:59:06.162375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.216 qpair failed and we were unable to recover it. 00:37:40.216 [2024-12-16 12:59:06.162636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.216 [2024-12-16 12:59:06.162667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.216 qpair failed and we were unable to recover it. 00:37:40.216 [2024-12-16 12:59:06.162908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.216 [2024-12-16 12:59:06.162940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.216 qpair failed and we were unable to recover it. 00:37:40.216 [2024-12-16 12:59:06.163137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.216 [2024-12-16 12:59:06.163171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.216 qpair failed and we were unable to recover it. 00:37:40.216 [2024-12-16 12:59:06.163436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.216 [2024-12-16 12:59:06.163467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.216 qpair failed and we were unable to recover it. 00:37:40.216 [2024-12-16 12:59:06.163641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.216 [2024-12-16 12:59:06.163674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.216 qpair failed and we were unable to recover it. 00:37:40.216 [2024-12-16 12:59:06.163796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.216 [2024-12-16 12:59:06.163828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.216 qpair failed and we were unable to recover it. 00:37:40.216 [2024-12-16 12:59:06.163959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.216 [2024-12-16 12:59:06.163990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.216 qpair failed and we were unable to recover it. 00:37:40.216 [2024-12-16 12:59:06.164186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.216 [2024-12-16 12:59:06.164219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.216 qpair failed and we were unable to recover it. 00:37:40.216 [2024-12-16 12:59:06.164480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.216 [2024-12-16 12:59:06.164513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.216 qpair failed and we were unable to recover it. 
00:37:40.216 [2024-12-16 12:59:06.164629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.216 [2024-12-16 12:59:06.164661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.216 qpair failed and we were unable to recover it. 00:37:40.216 [2024-12-16 12:59:06.164849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.216 [2024-12-16 12:59:06.164881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.216 qpair failed and we were unable to recover it. 00:37:40.216 [2024-12-16 12:59:06.165070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.216 [2024-12-16 12:59:06.165102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.216 qpair failed and we were unable to recover it. 00:37:40.216 [2024-12-16 12:59:06.165307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.216 [2024-12-16 12:59:06.165340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.216 qpair failed and we were unable to recover it. 00:37:40.216 [2024-12-16 12:59:06.165530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.216 [2024-12-16 12:59:06.165562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.216 qpair failed and we were unable to recover it. 00:37:40.216 [2024-12-16 12:59:06.165832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.216 [2024-12-16 12:59:06.165864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.216 qpair failed and we were unable to recover it. 00:37:40.216 [2024-12-16 12:59:06.166034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.216 [2024-12-16 12:59:06.166066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.216 qpair failed and we were unable to recover it. 00:37:40.216 [2024-12-16 12:59:06.166333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.216 [2024-12-16 12:59:06.166366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.216 qpair failed and we were unable to recover it. 00:37:40.216 [2024-12-16 12:59:06.166472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.216 [2024-12-16 12:59:06.166504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.216 qpair failed and we were unable to recover it. 00:37:40.216 [2024-12-16 12:59:06.166740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.216 [2024-12-16 12:59:06.166773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.216 qpair failed and we were unable to recover it. 
00:37:40.216 [2024-12-16 12:59:06.166959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.216 [2024-12-16 12:59:06.166991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.216 qpair failed and we were unable to recover it. 00:37:40.216 [2024-12-16 12:59:06.167174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.216 [2024-12-16 12:59:06.167208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.216 qpair failed and we were unable to recover it. 00:37:40.216 [2024-12-16 12:59:06.167468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.216 [2024-12-16 12:59:06.167500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.216 qpair failed and we were unable to recover it. 00:37:40.216 [2024-12-16 12:59:06.167684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.216 [2024-12-16 12:59:06.167716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.216 qpair failed and we were unable to recover it. 00:37:40.216 [2024-12-16 12:59:06.167976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.216 [2024-12-16 12:59:06.168008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.216 qpair failed and we were unable to recover it. 00:37:40.216 [2024-12-16 12:59:06.168195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.216 [2024-12-16 12:59:06.168229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.216 qpair failed and we were unable to recover it. 00:37:40.216 [2024-12-16 12:59:06.168413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.216 [2024-12-16 12:59:06.168444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.216 qpair failed and we were unable to recover it. 00:37:40.216 [2024-12-16 12:59:06.168560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.216 [2024-12-16 12:59:06.168597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.216 qpair failed and we were unable to recover it. 00:37:40.216 [2024-12-16 12:59:06.168781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.216 [2024-12-16 12:59:06.168813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.216 qpair failed and we were unable to recover it. 00:37:40.216 [2024-12-16 12:59:06.168985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.216 [2024-12-16 12:59:06.169017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.216 qpair failed and we were unable to recover it. 
00:37:40.216 [2024-12-16 12:59:06.169256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.216 [2024-12-16 12:59:06.169290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.216 qpair failed and we were unable to recover it. 00:37:40.216 [2024-12-16 12:59:06.169472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.216 [2024-12-16 12:59:06.169503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.216 qpair failed and we were unable to recover it. 00:37:40.216 [2024-12-16 12:59:06.169710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.216 [2024-12-16 12:59:06.169742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.216 qpair failed and we were unable to recover it. 00:37:40.216 [2024-12-16 12:59:06.169919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.216 [2024-12-16 12:59:06.169951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.216 qpair failed and we were unable to recover it. 00:37:40.216 [2024-12-16 12:59:06.170142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.216 [2024-12-16 12:59:06.170176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.216 qpair failed and we were unable to recover it. 00:37:40.216 [2024-12-16 12:59:06.170436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.216 [2024-12-16 12:59:06.170469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.216 qpair failed and we were unable to recover it. 00:37:40.217 [2024-12-16 12:59:06.170585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.217 [2024-12-16 12:59:06.170618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.217 qpair failed and we were unable to recover it. 00:37:40.217 [2024-12-16 12:59:06.170744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.217 [2024-12-16 12:59:06.170775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.217 qpair failed and we were unable to recover it. 00:37:40.217 [2024-12-16 12:59:06.170971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.217 [2024-12-16 12:59:06.171003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.217 qpair failed and we were unable to recover it. 00:37:40.217 [2024-12-16 12:59:06.171220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.217 [2024-12-16 12:59:06.171253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.217 qpair failed and we were unable to recover it. 
00:37:40.217 [2024-12-16 12:59:06.171515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.217 [2024-12-16 12:59:06.171547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.217 qpair failed and we were unable to recover it. 00:37:40.217 [2024-12-16 12:59:06.171735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.217 [2024-12-16 12:59:06.171767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.217 qpair failed and we were unable to recover it. 00:37:40.217 [2024-12-16 12:59:06.171986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.217 [2024-12-16 12:59:06.172019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.217 qpair failed and we were unable to recover it. 00:37:40.217 [2024-12-16 12:59:06.172235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.217 [2024-12-16 12:59:06.172268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.217 qpair failed and we were unable to recover it. 00:37:40.217 [2024-12-16 12:59:06.172404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.217 [2024-12-16 12:59:06.172437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.217 qpair failed and we were unable to recover it. 00:37:40.217 [2024-12-16 12:59:06.172632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.217 [2024-12-16 12:59:06.172664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.217 qpair failed and we were unable to recover it. 00:37:40.217 [2024-12-16 12:59:06.172846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.217 [2024-12-16 12:59:06.172878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.217 qpair failed and we were unable to recover it. 00:37:40.217 [2024-12-16 12:59:06.172998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.217 [2024-12-16 12:59:06.173030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.217 qpair failed and we were unable to recover it. 00:37:40.217 [2024-12-16 12:59:06.173223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.217 [2024-12-16 12:59:06.173257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.217 qpair failed and we were unable to recover it. 00:37:40.217 [2024-12-16 12:59:06.173496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.217 [2024-12-16 12:59:06.173528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.217 qpair failed and we were unable to recover it. 
00:37:40.217 [2024-12-16 12:59:06.173762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.217 [2024-12-16 12:59:06.173793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.217 qpair failed and we were unable to recover it. 00:37:40.217 [2024-12-16 12:59:06.173984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.217 [2024-12-16 12:59:06.174016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.217 qpair failed and we were unable to recover it. 00:37:40.217 [2024-12-16 12:59:06.174266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.217 [2024-12-16 12:59:06.174300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.217 qpair failed and we were unable to recover it. 00:37:40.217 [2024-12-16 12:59:06.174468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.217 [2024-12-16 12:59:06.174500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.217 qpair failed and we were unable to recover it. 00:37:40.217 [2024-12-16 12:59:06.174678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.217 [2024-12-16 12:59:06.174710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.217 qpair failed and we were unable to recover it. 00:37:40.217 [2024-12-16 12:59:06.174839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.217 [2024-12-16 12:59:06.174871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.217 qpair failed and we were unable to recover it. 00:37:40.217 [2024-12-16 12:59:06.175061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.217 [2024-12-16 12:59:06.175092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.217 qpair failed and we were unable to recover it. 00:37:40.217 [2024-12-16 12:59:06.175223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.217 [2024-12-16 12:59:06.175256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.217 qpair failed and we were unable to recover it. 00:37:40.217 [2024-12-16 12:59:06.175502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.217 [2024-12-16 12:59:06.175534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.217 qpair failed and we were unable to recover it. 00:37:40.217 [2024-12-16 12:59:06.175730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.217 [2024-12-16 12:59:06.175761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.217 qpair failed and we were unable to recover it. 
00:37:40.217 [2024-12-16 12:59:06.175974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.217 [2024-12-16 12:59:06.176005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.217 qpair failed and we were unable to recover it. 00:37:40.217 [2024-12-16 12:59:06.176182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.217 [2024-12-16 12:59:06.176215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.217 qpair failed and we were unable to recover it. 00:37:40.217 [2024-12-16 12:59:06.176386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.217 [2024-12-16 12:59:06.176418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.217 qpair failed and we were unable to recover it. 00:37:40.217 [2024-12-16 12:59:06.176595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.217 [2024-12-16 12:59:06.176626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.217 qpair failed and we were unable to recover it. 00:37:40.217 [2024-12-16 12:59:06.176742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.217 [2024-12-16 12:59:06.176774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.217 qpair failed and we were unable to recover it. 00:37:40.217 [2024-12-16 12:59:06.177038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.217 [2024-12-16 12:59:06.177070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.217 qpair failed and we were unable to recover it. 00:37:40.217 [2024-12-16 12:59:06.177273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.217 [2024-12-16 12:59:06.177306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.217 qpair failed and we were unable to recover it. 00:37:40.217 [2024-12-16 12:59:06.177494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.217 [2024-12-16 12:59:06.177526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.217 qpair failed and we were unable to recover it. 00:37:40.217 [2024-12-16 12:59:06.177832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.217 [2024-12-16 12:59:06.177902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.217 qpair failed and we were unable to recover it. 00:37:40.217 [2024-12-16 12:59:06.178136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.217 [2024-12-16 12:59:06.178174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.217 qpair failed and we were unable to recover it. 
00:37:40.218 [2024-12-16 12:59:06.182344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.218 [2024-12-16 12:59:06.182405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.218 qpair failed and we were unable to recover it.
00:37:40.218 [2024-12-16 12:59:06.182814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.218 [2024-12-16 12:59:06.182850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.218 qpair failed and we were unable to recover it.
00:37:40.520 [2024-12-16 12:59:06.215003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.520 [2024-12-16 12:59:06.215034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.520 qpair failed and we were unable to recover it. 00:37:40.520 [2024-12-16 12:59:06.215138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.520 [2024-12-16 12:59:06.215172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.520 qpair failed and we were unable to recover it. 00:37:40.520 [2024-12-16 12:59:06.215361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.520 [2024-12-16 12:59:06.215393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.520 qpair failed and we were unable to recover it. 00:37:40.520 [2024-12-16 12:59:06.215502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.520 [2024-12-16 12:59:06.215534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.520 qpair failed and we were unable to recover it. 00:37:40.520 [2024-12-16 12:59:06.215761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.520 [2024-12-16 12:59:06.215830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.520 qpair failed and we were unable to recover it. 00:37:40.520 [2024-12-16 12:59:06.215990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.520 [2024-12-16 12:59:06.216027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.520 qpair failed and we were unable to recover it. 00:37:40.520 [2024-12-16 12:59:06.216213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.520 [2024-12-16 12:59:06.216248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.520 qpair failed and we were unable to recover it. 00:37:40.520 [2024-12-16 12:59:06.216374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.520 [2024-12-16 12:59:06.216406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.520 qpair failed and we were unable to recover it. 00:37:40.521 [2024-12-16 12:59:06.216534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.521 [2024-12-16 12:59:06.216567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.521 qpair failed and we were unable to recover it. 00:37:40.521 [2024-12-16 12:59:06.216744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.521 [2024-12-16 12:59:06.216776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.521 qpair failed and we were unable to recover it. 
[... the same connect() failure, errno = 111, for tqpair=0x7f84e0000b90 (addr=10.0.0.2, port=4420) repeats from 12:59:06.215990 through 12:59:06.254843; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:37:40.526 [2024-12-16 12:59:06.254971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.526 [2024-12-16 12:59:06.255001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.526 qpair failed and we were unable to recover it.
00:37:40.526 [2024-12-16 12:59:06.255126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.526 [2024-12-16 12:59:06.255158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.526 qpair failed and we were unable to recover it. 00:37:40.526 [2024-12-16 12:59:06.255359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.526 [2024-12-16 12:59:06.255389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.526 qpair failed and we were unable to recover it. 00:37:40.526 [2024-12-16 12:59:06.255510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.526 [2024-12-16 12:59:06.255540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.526 qpair failed and we were unable to recover it. 00:37:40.526 [2024-12-16 12:59:06.255744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.526 [2024-12-16 12:59:06.255775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.526 qpair failed and we were unable to recover it. 00:37:40.526 [2024-12-16 12:59:06.255945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.526 [2024-12-16 12:59:06.255975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.526 qpair failed and we were unable to recover it. 00:37:40.526 [2024-12-16 12:59:06.256147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.526 [2024-12-16 12:59:06.256179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.526 qpair failed and we were unable to recover it. 00:37:40.526 [2024-12-16 12:59:06.256367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.526 [2024-12-16 12:59:06.256398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.526 qpair failed and we were unable to recover it. 00:37:40.526 [2024-12-16 12:59:06.256570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.526 [2024-12-16 12:59:06.256600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.526 qpair failed and we were unable to recover it. 00:37:40.526 [2024-12-16 12:59:06.256772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.526 [2024-12-16 12:59:06.256802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.526 qpair failed and we were unable to recover it. 00:37:40.526 [2024-12-16 12:59:06.256919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.526 [2024-12-16 12:59:06.256950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.526 qpair failed and we were unable to recover it. 
00:37:40.526 [2024-12-16 12:59:06.257147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.526 [2024-12-16 12:59:06.257178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.526 qpair failed and we were unable to recover it. 00:37:40.526 [2024-12-16 12:59:06.257360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.526 [2024-12-16 12:59:06.257390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.526 qpair failed and we were unable to recover it. 00:37:40.526 [2024-12-16 12:59:06.257585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.526 [2024-12-16 12:59:06.257616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.526 qpair failed and we were unable to recover it. 00:37:40.526 [2024-12-16 12:59:06.257871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.526 [2024-12-16 12:59:06.257902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.526 qpair failed and we were unable to recover it. 00:37:40.526 [2024-12-16 12:59:06.408206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.526 [2024-12-16 12:59:06.408277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.526 qpair failed and we were unable to recover it. 00:37:40.526 [2024-12-16 12:59:06.408509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.526 [2024-12-16 12:59:06.408541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.526 qpair failed and we were unable to recover it. 00:37:40.526 [2024-12-16 12:59:06.408680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.526 [2024-12-16 12:59:06.408712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.526 qpair failed and we were unable to recover it. 00:37:40.526 [2024-12-16 12:59:06.408864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.526 [2024-12-16 12:59:06.408894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.526 qpair failed and we were unable to recover it. 00:37:40.526 [2024-12-16 12:59:06.409064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.526 [2024-12-16 12:59:06.409094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.526 qpair failed and we were unable to recover it. 00:37:40.526 [2024-12-16 12:59:06.409311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.526 [2024-12-16 12:59:06.409343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.526 qpair failed and we were unable to recover it. 
00:37:40.526 [2024-12-16 12:59:06.409607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.526 [2024-12-16 12:59:06.409638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.526 qpair failed and we were unable to recover it. 00:37:40.526 [2024-12-16 12:59:06.409834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.526 [2024-12-16 12:59:06.409863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.526 qpair failed and we were unable to recover it. 00:37:40.526 [2024-12-16 12:59:06.409969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.526 [2024-12-16 12:59:06.409998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.526 qpair failed and we were unable to recover it. 00:37:40.526 [2024-12-16 12:59:06.410146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.526 [2024-12-16 12:59:06.410179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.526 qpair failed and we were unable to recover it. 00:37:40.526 [2024-12-16 12:59:06.410283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.526 [2024-12-16 12:59:06.410312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.526 qpair failed and we were unable to recover it. 00:37:40.526 [2024-12-16 12:59:06.410438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.526 [2024-12-16 12:59:06.410468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.526 qpair failed and we were unable to recover it. 00:37:40.526 [2024-12-16 12:59:06.410654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.526 [2024-12-16 12:59:06.410685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.526 qpair failed and we were unable to recover it. 00:37:40.526 [2024-12-16 12:59:06.410929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.526 [2024-12-16 12:59:06.410959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.526 qpair failed and we were unable to recover it. 00:37:40.526 [2024-12-16 12:59:06.411085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.526 [2024-12-16 12:59:06.411127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.526 qpair failed and we were unable to recover it. 00:37:40.526 [2024-12-16 12:59:06.411247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.526 [2024-12-16 12:59:06.411279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.526 qpair failed and we were unable to recover it. 
00:37:40.526 [2024-12-16 12:59:06.411381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.526 [2024-12-16 12:59:06.411412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.526 qpair failed and we were unable to recover it. 00:37:40.526 [2024-12-16 12:59:06.411603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.526 [2024-12-16 12:59:06.411638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.526 qpair failed and we were unable to recover it. 00:37:40.526 [2024-12-16 12:59:06.411770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.526 [2024-12-16 12:59:06.411802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.526 qpair failed and we were unable to recover it. 00:37:40.526 [2024-12-16 12:59:06.411921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.526 [2024-12-16 12:59:06.411952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.526 qpair failed and we were unable to recover it. 00:37:40.526 [2024-12-16 12:59:06.412142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.526 [2024-12-16 12:59:06.412175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.526 qpair failed and we were unable to recover it. 00:37:40.526 [2024-12-16 12:59:06.412328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.527 [2024-12-16 12:59:06.412360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.527 qpair failed and we were unable to recover it. 00:37:40.527 [2024-12-16 12:59:06.412471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.527 [2024-12-16 12:59:06.412511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.527 qpair failed and we were unable to recover it. 00:37:40.527 [2024-12-16 12:59:06.412721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.527 [2024-12-16 12:59:06.412753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.527 qpair failed and we were unable to recover it. 00:37:40.527 [2024-12-16 12:59:06.412872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.527 [2024-12-16 12:59:06.412904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.527 qpair failed and we were unable to recover it. 00:37:40.527 [2024-12-16 12:59:06.413088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.527 [2024-12-16 12:59:06.413126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.527 qpair failed and we were unable to recover it. 
00:37:40.527 [2024-12-16 12:59:06.413252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.527 [2024-12-16 12:59:06.413283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.527 qpair failed and we were unable to recover it. 00:37:40.527 [2024-12-16 12:59:06.413479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.527 [2024-12-16 12:59:06.413510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.527 qpair failed and we were unable to recover it. 00:37:40.527 [2024-12-16 12:59:06.413644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.527 [2024-12-16 12:59:06.413676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.527 qpair failed and we were unable to recover it. 00:37:40.527 [2024-12-16 12:59:06.413788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.527 [2024-12-16 12:59:06.413819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.527 qpair failed and we were unable to recover it. 00:37:40.527 [2024-12-16 12:59:06.413920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.527 [2024-12-16 12:59:06.413951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.527 qpair failed and we were unable to recover it. 00:37:40.527 [2024-12-16 12:59:06.414066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.527 [2024-12-16 12:59:06.414097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.527 qpair failed and we were unable to recover it. 00:37:40.527 [2024-12-16 12:59:06.414299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.527 [2024-12-16 12:59:06.414331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.527 qpair failed and we were unable to recover it. 00:37:40.527 [2024-12-16 12:59:06.414512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.527 [2024-12-16 12:59:06.414543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.527 qpair failed and we were unable to recover it. 00:37:40.527 [2024-12-16 12:59:06.414668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.527 [2024-12-16 12:59:06.414699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.527 qpair failed and we were unable to recover it. 00:37:40.527 [2024-12-16 12:59:06.414814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.527 [2024-12-16 12:59:06.414862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.527 qpair failed and we were unable to recover it. 
00:37:40.527 [2024-12-16 12:59:06.415047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.527 [2024-12-16 12:59:06.415079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.527 qpair failed and we were unable to recover it. 00:37:40.527 [2024-12-16 12:59:06.415200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.527 [2024-12-16 12:59:06.415232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.527 qpair failed and we were unable to recover it. 00:37:40.527 [2024-12-16 12:59:06.415402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.527 [2024-12-16 12:59:06.415434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.527 qpair failed and we were unable to recover it. 00:37:40.527 [2024-12-16 12:59:06.415544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.527 [2024-12-16 12:59:06.415576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.527 qpair failed and we were unable to recover it. 00:37:40.527 [2024-12-16 12:59:06.415744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.527 [2024-12-16 12:59:06.415781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.527 qpair failed and we were unable to recover it. 00:37:40.527 [2024-12-16 12:59:06.415967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.527 [2024-12-16 12:59:06.415999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.527 qpair failed and we were unable to recover it. 00:37:40.527 [2024-12-16 12:59:06.416123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.527 [2024-12-16 12:59:06.416155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.527 qpair failed and we were unable to recover it. 00:37:40.527 [2024-12-16 12:59:06.416291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.527 [2024-12-16 12:59:06.416328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.527 qpair failed and we were unable to recover it. 00:37:40.527 [2024-12-16 12:59:06.416456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.527 [2024-12-16 12:59:06.416488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.527 qpair failed and we were unable to recover it. 00:37:40.527 [2024-12-16 12:59:06.416669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.527 [2024-12-16 12:59:06.416700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.527 qpair failed and we were unable to recover it. 
00:37:40.527 [2024-12-16 12:59:06.416807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.527 [2024-12-16 12:59:06.416838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.527 qpair failed and we were unable to recover it. 00:37:40.527 [2024-12-16 12:59:06.416943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.527 [2024-12-16 12:59:06.416975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.527 qpair failed and we were unable to recover it. 00:37:40.527 [2024-12-16 12:59:06.417099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.527 [2024-12-16 12:59:06.417139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.527 qpair failed and we were unable to recover it. 00:37:40.527 [2024-12-16 12:59:06.417244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.527 [2024-12-16 12:59:06.417276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.527 qpair failed and we were unable to recover it. 00:37:40.527 [2024-12-16 12:59:06.417547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.527 [2024-12-16 12:59:06.417578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.527 qpair failed and we were unable to recover it. 00:37:40.527 [2024-12-16 12:59:06.417750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.527 [2024-12-16 12:59:06.417781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.527 qpair failed and we were unable to recover it. 00:37:40.527 [2024-12-16 12:59:06.417893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.527 [2024-12-16 12:59:06.417924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.527 qpair failed and we were unable to recover it. 00:37:40.527 [2024-12-16 12:59:06.418038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.527 [2024-12-16 12:59:06.418069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.527 qpair failed and we were unable to recover it. 00:37:40.527 [2024-12-16 12:59:06.418191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.527 [2024-12-16 12:59:06.418224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.527 qpair failed and we were unable to recover it. 00:37:40.527 [2024-12-16 12:59:06.418346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.527 [2024-12-16 12:59:06.418377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.527 qpair failed and we were unable to recover it. 
00:37:40.527 [2024-12-16 12:59:06.418566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.527 [2024-12-16 12:59:06.418597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.527 qpair failed and we were unable to recover it. 00:37:40.527 [2024-12-16 12:59:06.418698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.528 [2024-12-16 12:59:06.418729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.528 qpair failed and we were unable to recover it. 00:37:40.528 [2024-12-16 12:59:06.418910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.528 [2024-12-16 12:59:06.418941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.528 qpair failed and we were unable to recover it. 00:37:40.528 [2024-12-16 12:59:06.419062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.528 [2024-12-16 12:59:06.419093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.528 qpair failed and we were unable to recover it. 00:37:40.528 [2024-12-16 12:59:06.419291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.528 [2024-12-16 12:59:06.419324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.528 qpair failed and we were unable to recover it. 00:37:40.528 [2024-12-16 12:59:06.419510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.528 [2024-12-16 12:59:06.419542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.528 qpair failed and we were unable to recover it. 00:37:40.528 [2024-12-16 12:59:06.419726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.528 [2024-12-16 12:59:06.419756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.528 qpair failed and we were unable to recover it. 00:37:40.528 [2024-12-16 12:59:06.419860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.528 [2024-12-16 12:59:06.419891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.528 qpair failed and we were unable to recover it. 00:37:40.528 [2024-12-16 12:59:06.420066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.528 [2024-12-16 12:59:06.420097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.528 qpair failed and we were unable to recover it. 00:37:40.528 [2024-12-16 12:59:06.420330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.528 [2024-12-16 12:59:06.420362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.528 qpair failed and we were unable to recover it. 
00:37:40.528 [2024-12-16 12:59:06.420484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.528 [2024-12-16 12:59:06.420516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.528 qpair failed and we were unable to recover it. 00:37:40.528 [2024-12-16 12:59:06.420688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.528 [2024-12-16 12:59:06.420720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.528 qpair failed and we were unable to recover it. 00:37:40.528 [2024-12-16 12:59:06.420837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.528 [2024-12-16 12:59:06.420868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.528 qpair failed and we were unable to recover it. 00:37:40.528 [2024-12-16 12:59:06.420992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.528 [2024-12-16 12:59:06.421023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.528 qpair failed and we were unable to recover it. 00:37:40.528 [2024-12-16 12:59:06.421137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.528 [2024-12-16 12:59:06.421171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.528 qpair failed and we were unable to recover it. 00:37:40.528 [2024-12-16 12:59:06.421351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.528 [2024-12-16 12:59:06.421383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.528 qpair failed and we were unable to recover it. 00:37:40.528 [2024-12-16 12:59:06.421603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.528 [2024-12-16 12:59:06.421634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.528 qpair failed and we were unable to recover it. 00:37:40.528 [2024-12-16 12:59:06.421817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.528 [2024-12-16 12:59:06.421849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.528 qpair failed and we were unable to recover it. 00:37:40.528 [2024-12-16 12:59:06.421955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.528 [2024-12-16 12:59:06.421987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.528 qpair failed and we were unable to recover it. 00:37:40.528 [2024-12-16 12:59:06.422106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.528 [2024-12-16 12:59:06.422154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.528 qpair failed and we were unable to recover it. 
00:37:40.528 [2024-12-16 12:59:06.422260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.528 [2024-12-16 12:59:06.422292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.528 qpair failed and we were unable to recover it. 00:37:40.528 [2024-12-16 12:59:06.422404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.528 [2024-12-16 12:59:06.422436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.528 qpair failed and we were unable to recover it. 00:37:40.528 [2024-12-16 12:59:06.422613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.528 [2024-12-16 12:59:06.422644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.528 qpair failed and we were unable to recover it. 00:37:40.528 [2024-12-16 12:59:06.422767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.528 [2024-12-16 12:59:06.422799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.528 qpair failed and we were unable to recover it. 00:37:40.528 [2024-12-16 12:59:06.422914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.528 [2024-12-16 12:59:06.422950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.528 qpair failed and we were unable to recover it. 00:37:40.528 [2024-12-16 12:59:06.423092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.528 [2024-12-16 12:59:06.423133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.528 qpair failed and we were unable to recover it. 00:37:40.528 [2024-12-16 12:59:06.423321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.528 [2024-12-16 12:59:06.423354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.528 qpair failed and we were unable to recover it. 00:37:40.528 [2024-12-16 12:59:06.423528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.528 [2024-12-16 12:59:06.423559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.528 qpair failed and we were unable to recover it. 00:37:40.528 [2024-12-16 12:59:06.423800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.528 [2024-12-16 12:59:06.423831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.528 qpair failed and we were unable to recover it. 00:37:40.528 [2024-12-16 12:59:06.423947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.528 [2024-12-16 12:59:06.423978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.528 qpair failed and we were unable to recover it. 
00:37:40.528 [2024-12-16 12:59:06.424232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.528 [2024-12-16 12:59:06.424265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.528 qpair failed and we were unable to recover it. 00:37:40.528 [2024-12-16 12:59:06.424399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.528 [2024-12-16 12:59:06.424430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.528 qpair failed and we were unable to recover it. 00:37:40.528 [2024-12-16 12:59:06.424689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.528 [2024-12-16 12:59:06.424721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.528 qpair failed and we were unable to recover it. 00:37:40.528 [2024-12-16 12:59:06.424853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.528 [2024-12-16 12:59:06.424884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.528 qpair failed and we were unable to recover it. 00:37:40.528 [2024-12-16 12:59:06.424987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.528 [2024-12-16 12:59:06.425019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.528 qpair failed and we were unable to recover it. 00:37:40.528 [2024-12-16 12:59:06.425256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.528 [2024-12-16 12:59:06.425289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.528 qpair failed and we were unable to recover it. 00:37:40.528 [2024-12-16 12:59:06.425477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.528 [2024-12-16 12:59:06.425508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.528 qpair failed and we were unable to recover it. 00:37:40.528 [2024-12-16 12:59:06.425641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.528 [2024-12-16 12:59:06.425672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.528 qpair failed and we were unable to recover it. 00:37:40.528 [2024-12-16 12:59:06.425780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.528 [2024-12-16 12:59:06.425812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.528 qpair failed and we were unable to recover it. 00:37:40.528 [2024-12-16 12:59:06.425913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.528 [2024-12-16 12:59:06.425944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.528 qpair failed and we were unable to recover it. 
00:37:40.528 [2024-12-16 12:59:06.426132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.529 [2024-12-16 12:59:06.426164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.529 qpair failed and we were unable to recover it. 00:37:40.529 [2024-12-16 12:59:06.426266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.529 [2024-12-16 12:59:06.426298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.529 qpair failed and we were unable to recover it. 00:37:40.529 [2024-12-16 12:59:06.426470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.529 [2024-12-16 12:59:06.426502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.529 qpair failed and we were unable to recover it. 00:37:40.529 [2024-12-16 12:59:06.426639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.529 [2024-12-16 12:59:06.426670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.529 qpair failed and we were unable to recover it. 00:37:40.529 [2024-12-16 12:59:06.426851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.529 [2024-12-16 12:59:06.426882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.529 qpair failed and we were unable to recover it. 00:37:40.529 [2024-12-16 12:59:06.426999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.529 [2024-12-16 12:59:06.427030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.529 qpair failed and we were unable to recover it. 00:37:40.529 [2024-12-16 12:59:06.427140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.529 [2024-12-16 12:59:06.427178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.529 qpair failed and we were unable to recover it. 00:37:40.529 [2024-12-16 12:59:06.427300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.529 [2024-12-16 12:59:06.427331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.529 qpair failed and we were unable to recover it. 00:37:40.529 [2024-12-16 12:59:06.427524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.529 [2024-12-16 12:59:06.427555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.529 qpair failed and we were unable to recover it. 00:37:40.529 [2024-12-16 12:59:06.427683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.529 [2024-12-16 12:59:06.427715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.529 qpair failed and we were unable to recover it. 
00:37:40.529 [2024-12-16 12:59:06.427885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.529 [2024-12-16 12:59:06.427916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.529 qpair failed and we were unable to recover it. 00:37:40.529 [2024-12-16 12:59:06.428043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.529 [2024-12-16 12:59:06.428075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.529 qpair failed and we were unable to recover it. 00:37:40.529 [2024-12-16 12:59:06.428254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.529 [2024-12-16 12:59:06.428286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.529 qpair failed and we were unable to recover it. 00:37:40.529 [2024-12-16 12:59:06.428479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.529 [2024-12-16 12:59:06.428511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.529 qpair failed and we were unable to recover it. 00:37:40.529 [2024-12-16 12:59:06.428638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.529 [2024-12-16 12:59:06.428670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.529 qpair failed and we were unable to recover it. 00:37:40.529 [2024-12-16 12:59:06.428863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.529 [2024-12-16 12:59:06.428894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.529 qpair failed and we were unable to recover it. 00:37:40.529 [2024-12-16 12:59:06.429066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.529 [2024-12-16 12:59:06.429097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.529 qpair failed and we were unable to recover it. 00:37:40.529 [2024-12-16 12:59:06.429320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.529 [2024-12-16 12:59:06.429353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.529 qpair failed and we were unable to recover it. 00:37:40.529 [2024-12-16 12:59:06.429460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.529 [2024-12-16 12:59:06.429491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.529 qpair failed and we were unable to recover it. 00:37:40.529 [2024-12-16 12:59:06.429680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.529 [2024-12-16 12:59:06.429711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.529 qpair failed and we were unable to recover it. 
00:37:40.529 [2024-12-16 12:59:06.429823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.529 [2024-12-16 12:59:06.429855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420
00:37:40.529 qpair failed and we were unable to recover it.
[same connect() failed (errno = 111, ECONNREFUSED) / sock connection error / "qpair failed and we were unable to recover it." sequence repeated against addr=10.0.0.2, port=4420 through 12:59:06.470430: ~110 more times for tqpair=0x7f84e0000b90, ~12 times for tqpair=0x7f84e4000b90 (from 12:59:06.450842), ~28 more times for tqpair=0x7f84e0000b90 (from 12:59:06.453866), and ~59 times for tqpair=0x7f84ec000b90 (from 12:59:06.459156)]
00:37:40.535 [2024-12-16 12:59:06.470610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.535 [2024-12-16 12:59:06.470641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.535 qpair failed and we were unable to recover it. 00:37:40.535 [2024-12-16 12:59:06.470812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.535 [2024-12-16 12:59:06.470844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.535 qpair failed and we were unable to recover it. 00:37:40.535 [2024-12-16 12:59:06.471041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.535 [2024-12-16 12:59:06.471073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.535 qpair failed and we were unable to recover it. 00:37:40.535 [2024-12-16 12:59:06.471199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.535 [2024-12-16 12:59:06.471232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.535 qpair failed and we were unable to recover it. 00:37:40.535 [2024-12-16 12:59:06.471346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.535 [2024-12-16 12:59:06.471377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.535 qpair failed and we were unable to recover it. 00:37:40.535 [2024-12-16 12:59:06.471588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.535 [2024-12-16 12:59:06.471620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.535 qpair failed and we were unable to recover it. 00:37:40.535 [2024-12-16 12:59:06.471740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.535 [2024-12-16 12:59:06.471771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.535 qpair failed and we were unable to recover it. 00:37:40.535 [2024-12-16 12:59:06.471871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.535 [2024-12-16 12:59:06.471902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.535 qpair failed and we were unable to recover it. 00:37:40.535 [2024-12-16 12:59:06.472101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.535 [2024-12-16 12:59:06.472157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.535 qpair failed and we were unable to recover it. 00:37:40.535 [2024-12-16 12:59:06.472280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.535 [2024-12-16 12:59:06.472312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.535 qpair failed and we were unable to recover it. 
00:37:40.535 [2024-12-16 12:59:06.472513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.535 [2024-12-16 12:59:06.472545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.535 qpair failed and we were unable to recover it. 00:37:40.535 [2024-12-16 12:59:06.472654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.535 [2024-12-16 12:59:06.472686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.535 qpair failed and we were unable to recover it. 00:37:40.535 [2024-12-16 12:59:06.472801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.535 [2024-12-16 12:59:06.472832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.535 qpair failed and we were unable to recover it. 00:37:40.535 [2024-12-16 12:59:06.472949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.535 [2024-12-16 12:59:06.472980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.535 qpair failed and we were unable to recover it. 00:37:40.535 [2024-12-16 12:59:06.473154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.535 [2024-12-16 12:59:06.473189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.535 qpair failed and we were unable to recover it. 00:37:40.535 [2024-12-16 12:59:06.473293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.535 [2024-12-16 12:59:06.473326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.535 qpair failed and we were unable to recover it. 00:37:40.535 [2024-12-16 12:59:06.473515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.535 [2024-12-16 12:59:06.473547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.535 qpair failed and we were unable to recover it. 00:37:40.535 [2024-12-16 12:59:06.473770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.535 [2024-12-16 12:59:06.473801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.535 qpair failed and we were unable to recover it. 00:37:40.535 [2024-12-16 12:59:06.473906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.535 [2024-12-16 12:59:06.473937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.535 qpair failed and we were unable to recover it. 00:37:40.535 [2024-12-16 12:59:06.474112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.535 [2024-12-16 12:59:06.474167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.535 qpair failed and we were unable to recover it. 
00:37:40.535 [2024-12-16 12:59:06.474294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.535 [2024-12-16 12:59:06.474325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.535 qpair failed and we were unable to recover it. 00:37:40.535 [2024-12-16 12:59:06.474533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.535 [2024-12-16 12:59:06.474580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.535 qpair failed and we were unable to recover it. 00:37:40.535 [2024-12-16 12:59:06.474785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.536 [2024-12-16 12:59:06.474822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.536 qpair failed and we were unable to recover it. 00:37:40.536 [2024-12-16 12:59:06.475034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.536 [2024-12-16 12:59:06.475070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.536 qpair failed and we were unable to recover it. 00:37:40.536 [2024-12-16 12:59:06.475280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.536 [2024-12-16 12:59:06.475318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.536 qpair failed and we were unable to recover it. 00:37:40.536 [2024-12-16 12:59:06.475505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.536 [2024-12-16 12:59:06.475540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.536 qpair failed and we were unable to recover it. 00:37:40.536 [2024-12-16 12:59:06.475665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.536 [2024-12-16 12:59:06.475707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.536 qpair failed and we were unable to recover it. 00:37:40.536 [2024-12-16 12:59:06.475846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.536 [2024-12-16 12:59:06.475882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.536 qpair failed and we were unable to recover it. 00:37:40.536 [2024-12-16 12:59:06.476006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.536 [2024-12-16 12:59:06.476047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.536 qpair failed and we were unable to recover it. 00:37:40.536 [2024-12-16 12:59:06.476315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.536 [2024-12-16 12:59:06.476356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.536 qpair failed and we were unable to recover it. 
00:37:40.536 [2024-12-16 12:59:06.476565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.536 [2024-12-16 12:59:06.476599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.536 qpair failed and we were unable to recover it. 00:37:40.536 [2024-12-16 12:59:06.476792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.536 [2024-12-16 12:59:06.476825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.536 qpair failed and we were unable to recover it. 00:37:40.536 [2024-12-16 12:59:06.476946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.536 [2024-12-16 12:59:06.476977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.536 qpair failed and we were unable to recover it. 00:37:40.536 [2024-12-16 12:59:06.477096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.536 [2024-12-16 12:59:06.477145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.536 qpair failed and we were unable to recover it. 00:37:40.536 [2024-12-16 12:59:06.477335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.536 [2024-12-16 12:59:06.477374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.536 qpair failed and we were unable to recover it. 00:37:40.536 [2024-12-16 12:59:06.477482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.536 [2024-12-16 12:59:06.477513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.536 qpair failed and we were unable to recover it. 00:37:40.536 [2024-12-16 12:59:06.477633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.536 [2024-12-16 12:59:06.477664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.536 qpair failed and we were unable to recover it. 00:37:40.536 [2024-12-16 12:59:06.477767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.536 [2024-12-16 12:59:06.477799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.536 qpair failed and we were unable to recover it. 00:37:40.536 [2024-12-16 12:59:06.477997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.536 [2024-12-16 12:59:06.478035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.536 qpair failed and we were unable to recover it. 00:37:40.536 [2024-12-16 12:59:06.478209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.536 [2024-12-16 12:59:06.478246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.536 qpair failed and we were unable to recover it. 
00:37:40.536 [2024-12-16 12:59:06.478418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.536 [2024-12-16 12:59:06.478449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.536 qpair failed and we were unable to recover it. 00:37:40.536 [2024-12-16 12:59:06.478623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.536 [2024-12-16 12:59:06.478656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.536 qpair failed and we were unable to recover it. 00:37:40.536 [2024-12-16 12:59:06.478788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.536 [2024-12-16 12:59:06.478820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.536 qpair failed and we were unable to recover it. 00:37:40.536 [2024-12-16 12:59:06.478938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.536 [2024-12-16 12:59:06.478969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.536 qpair failed and we were unable to recover it. 00:37:40.536 [2024-12-16 12:59:06.479138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.536 [2024-12-16 12:59:06.479171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.536 qpair failed and we were unable to recover it. 00:37:40.536 [2024-12-16 12:59:06.479356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.536 [2024-12-16 12:59:06.479388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.536 qpair failed and we were unable to recover it. 00:37:40.536 [2024-12-16 12:59:06.479561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.536 [2024-12-16 12:59:06.479592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.536 qpair failed and we were unable to recover it. 00:37:40.536 [2024-12-16 12:59:06.479790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.536 [2024-12-16 12:59:06.479821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.536 qpair failed and we were unable to recover it. 00:37:40.536 [2024-12-16 12:59:06.479951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.536 [2024-12-16 12:59:06.479984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.536 qpair failed and we were unable to recover it. 00:37:40.536 [2024-12-16 12:59:06.480086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.536 [2024-12-16 12:59:06.480124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.536 qpair failed and we were unable to recover it. 
00:37:40.536 [2024-12-16 12:59:06.480299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.536 [2024-12-16 12:59:06.480332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.536 qpair failed and we were unable to recover it. 00:37:40.536 [2024-12-16 12:59:06.480451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.536 [2024-12-16 12:59:06.480489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.536 qpair failed and we were unable to recover it. 00:37:40.536 [2024-12-16 12:59:06.480602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.536 [2024-12-16 12:59:06.480633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.536 qpair failed and we were unable to recover it. 00:37:40.536 [2024-12-16 12:59:06.480817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.536 [2024-12-16 12:59:06.480848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.536 qpair failed and we were unable to recover it. 00:37:40.536 [2024-12-16 12:59:06.480981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.536 [2024-12-16 12:59:06.481013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.536 qpair failed and we were unable to recover it. 00:37:40.536 [2024-12-16 12:59:06.481206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.536 [2024-12-16 12:59:06.481242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.536 qpair failed and we were unable to recover it. 00:37:40.536 [2024-12-16 12:59:06.481365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.536 [2024-12-16 12:59:06.481397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.536 qpair failed and we were unable to recover it. 00:37:40.536 [2024-12-16 12:59:06.481605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.536 [2024-12-16 12:59:06.481637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.536 qpair failed and we were unable to recover it. 00:37:40.536 [2024-12-16 12:59:06.481846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.536 [2024-12-16 12:59:06.481878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.536 qpair failed and we were unable to recover it. 00:37:40.536 [2024-12-16 12:59:06.482091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.536 [2024-12-16 12:59:06.482135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.536 qpair failed and we were unable to recover it. 
00:37:40.536 [2024-12-16 12:59:06.482308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.537 [2024-12-16 12:59:06.482341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.537 qpair failed and we were unable to recover it. 00:37:40.537 [2024-12-16 12:59:06.482521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.537 [2024-12-16 12:59:06.482555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.537 qpair failed and we were unable to recover it. 00:37:40.537 [2024-12-16 12:59:06.482685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.537 [2024-12-16 12:59:06.482717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.537 qpair failed and we were unable to recover it. 00:37:40.537 [2024-12-16 12:59:06.482902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.537 [2024-12-16 12:59:06.482934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.537 qpair failed and we were unable to recover it. 00:37:40.537 [2024-12-16 12:59:06.483124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.537 [2024-12-16 12:59:06.483158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.537 qpair failed and we were unable to recover it. 00:37:40.537 [2024-12-16 12:59:06.483424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.537 [2024-12-16 12:59:06.483457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.537 qpair failed and we were unable to recover it. 00:37:40.537 [2024-12-16 12:59:06.483560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.537 [2024-12-16 12:59:06.483591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.537 qpair failed and we were unable to recover it. 00:37:40.537 [2024-12-16 12:59:06.483723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.537 [2024-12-16 12:59:06.483756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.537 qpair failed and we were unable to recover it. 00:37:40.537 [2024-12-16 12:59:06.483875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.537 [2024-12-16 12:59:06.483907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.537 qpair failed and we were unable to recover it. 00:37:40.537 [2024-12-16 12:59:06.484085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.537 [2024-12-16 12:59:06.484126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.537 qpair failed and we were unable to recover it. 
00:37:40.537 [2024-12-16 12:59:06.484300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.537 [2024-12-16 12:59:06.484332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.537 qpair failed and we were unable to recover it. 00:37:40.537 [2024-12-16 12:59:06.484444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.537 [2024-12-16 12:59:06.484476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.537 qpair failed and we were unable to recover it. 00:37:40.537 [2024-12-16 12:59:06.484604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.537 [2024-12-16 12:59:06.484636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.537 qpair failed and we were unable to recover it. 00:37:40.537 [2024-12-16 12:59:06.484758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.537 [2024-12-16 12:59:06.484790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.537 qpair failed and we were unable to recover it. 00:37:40.537 [2024-12-16 12:59:06.484910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.537 [2024-12-16 12:59:06.484948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.537 qpair failed and we were unable to recover it. 00:37:40.537 [2024-12-16 12:59:06.485173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.537 [2024-12-16 12:59:06.485209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.537 qpair failed and we were unable to recover it. 00:37:40.537 [2024-12-16 12:59:06.485389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.537 [2024-12-16 12:59:06.485421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.537 qpair failed and we were unable to recover it. 00:37:40.537 [2024-12-16 12:59:06.485591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.537 [2024-12-16 12:59:06.485624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.537 qpair failed and we were unable to recover it. 00:37:40.537 [2024-12-16 12:59:06.485745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.537 [2024-12-16 12:59:06.485777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.537 qpair failed and we were unable to recover it. 00:37:40.537 [2024-12-16 12:59:06.485944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.537 [2024-12-16 12:59:06.485977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.537 qpair failed and we were unable to recover it. 
00:37:40.537 [2024-12-16 12:59:06.486100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.537 [2024-12-16 12:59:06.486153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.537 qpair failed and we were unable to recover it. 00:37:40.537 [2024-12-16 12:59:06.486259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.537 [2024-12-16 12:59:06.486291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.537 qpair failed and we were unable to recover it. 00:37:40.537 [2024-12-16 12:59:06.486463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.537 [2024-12-16 12:59:06.486495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.537 qpair failed and we were unable to recover it. 00:37:40.537 [2024-12-16 12:59:06.486662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.537 [2024-12-16 12:59:06.486694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.537 qpair failed and we were unable to recover it. 00:37:40.537 [2024-12-16 12:59:06.486933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.537 [2024-12-16 12:59:06.486965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.537 qpair failed and we were unable to recover it. 00:37:40.537 [2024-12-16 12:59:06.487135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.537 [2024-12-16 12:59:06.487168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.537 qpair failed and we were unable to recover it. 00:37:40.537 [2024-12-16 12:59:06.487415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.537 [2024-12-16 12:59:06.487448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.537 qpair failed and we were unable to recover it. 00:37:40.537 [2024-12-16 12:59:06.487565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.537 [2024-12-16 12:59:06.487598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.537 qpair failed and we were unable to recover it. 00:37:40.537 [2024-12-16 12:59:06.487772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.537 [2024-12-16 12:59:06.487805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.537 qpair failed and we were unable to recover it. 00:37:40.537 [2024-12-16 12:59:06.488079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.537 [2024-12-16 12:59:06.488111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.537 qpair failed and we were unable to recover it. 
00:37:40.537 [2024-12-16 12:59:06.488307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.537 [2024-12-16 12:59:06.488339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.537 qpair failed and we were unable to recover it. 00:37:40.537 [2024-12-16 12:59:06.488465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.537 [2024-12-16 12:59:06.488497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.537 qpair failed and we were unable to recover it. 00:37:40.537 [2024-12-16 12:59:06.488623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.537 [2024-12-16 12:59:06.488655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.537 qpair failed and we were unable to recover it. 00:37:40.537 [2024-12-16 12:59:06.488893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.537 [2024-12-16 12:59:06.488924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.537 qpair failed and we were unable to recover it. 00:37:40.537 [2024-12-16 12:59:06.489096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.537 [2024-12-16 12:59:06.489145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.537 qpair failed and we were unable to recover it. 00:37:40.537 [2024-12-16 12:59:06.489349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.537 [2024-12-16 12:59:06.489382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.537 qpair failed and we were unable to recover it. 00:37:40.537 [2024-12-16 12:59:06.489568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.537 [2024-12-16 12:59:06.489601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.537 qpair failed and we were unable to recover it. 00:37:40.537 [2024-12-16 12:59:06.489719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.537 [2024-12-16 12:59:06.489751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.537 qpair failed and we were unable to recover it. 00:37:40.537 [2024-12-16 12:59:06.489986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.537 [2024-12-16 12:59:06.490017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.537 qpair failed and we were unable to recover it. 00:37:40.538 [2024-12-16 12:59:06.490144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.538 [2024-12-16 12:59:06.490186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.538 qpair failed and we were unable to recover it. 
00:37:40.538 [2024-12-16 12:59:06.490387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.538 [2024-12-16 12:59:06.490418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.538 qpair failed and we were unable to recover it. 00:37:40.538 [2024-12-16 12:59:06.490559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.538 [2024-12-16 12:59:06.490591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.538 qpair failed and we were unable to recover it. 00:37:40.538 [2024-12-16 12:59:06.490765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.538 [2024-12-16 12:59:06.490797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.538 qpair failed and we were unable to recover it. 00:37:40.538 [2024-12-16 12:59:06.490913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.538 [2024-12-16 12:59:06.490944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.538 qpair failed and we were unable to recover it. 00:37:40.538 [2024-12-16 12:59:06.491058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.538 [2024-12-16 12:59:06.491089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.538 qpair failed and we were unable to recover it. 00:37:40.538 [2024-12-16 12:59:06.491217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.538 [2024-12-16 12:59:06.491250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.538 qpair failed and we were unable to recover it. 00:37:40.538 [2024-12-16 12:59:06.491367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.538 [2024-12-16 12:59:06.491399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.538 qpair failed and we were unable to recover it. 00:37:40.538 [2024-12-16 12:59:06.491588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.538 [2024-12-16 12:59:06.491619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.538 qpair failed and we were unable to recover it. 00:37:40.538 [2024-12-16 12:59:06.491814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.538 [2024-12-16 12:59:06.491846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.538 qpair failed and we were unable to recover it. 00:37:40.538 [2024-12-16 12:59:06.492012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.538 [2024-12-16 12:59:06.492044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.538 qpair failed and we were unable to recover it. 
00:37:40.538 [2024-12-16 12:59:06.492234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.538 [2024-12-16 12:59:06.492266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.538 qpair failed and we were unable to recover it. 00:37:40.538 [2024-12-16 12:59:06.492374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.538 [2024-12-16 12:59:06.492406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.538 qpair failed and we were unable to recover it. 00:37:40.538 [2024-12-16 12:59:06.492595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.538 [2024-12-16 12:59:06.492627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.538 qpair failed and we were unable to recover it. 00:37:40.538 [2024-12-16 12:59:06.492826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.538 [2024-12-16 12:59:06.492858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.538 qpair failed and we were unable to recover it. 00:37:40.538 [2024-12-16 12:59:06.493070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.538 [2024-12-16 12:59:06.493109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.538 qpair failed and we were unable to recover it. 00:37:40.538 [2024-12-16 12:59:06.493304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.538 [2024-12-16 12:59:06.493336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.538 qpair failed and we were unable to recover it. 00:37:40.538 [2024-12-16 12:59:06.493455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.538 [2024-12-16 12:59:06.493486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.538 qpair failed and we were unable to recover it. 00:37:40.538 [2024-12-16 12:59:06.493701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.538 [2024-12-16 12:59:06.493732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.538 qpair failed and we were unable to recover it. 00:37:40.538 [2024-12-16 12:59:06.493855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.538 [2024-12-16 12:59:06.493886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.538 qpair failed and we were unable to recover it. 00:37:40.538 [2024-12-16 12:59:06.494019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.538 [2024-12-16 12:59:06.494050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.538 qpair failed and we were unable to recover it. 
00:37:40.538 [2024-12-16 12:59:06.494173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.538 [2024-12-16 12:59:06.494209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.538 qpair failed and we were unable to recover it. 00:37:40.538 [2024-12-16 12:59:06.494430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.538 [2024-12-16 12:59:06.494462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.538 qpair failed and we were unable to recover it. 00:37:40.538 [2024-12-16 12:59:06.494669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.538 [2024-12-16 12:59:06.494701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.538 qpair failed and we were unable to recover it. 00:37:40.538 [2024-12-16 12:59:06.494880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.538 [2024-12-16 12:59:06.494912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.538 qpair failed and we were unable to recover it. 00:37:40.538 [2024-12-16 12:59:06.495101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.538 [2024-12-16 12:59:06.495144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.538 qpair failed and we were unable to recover it. 00:37:40.538 [2024-12-16 12:59:06.495268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.538 [2024-12-16 12:59:06.495300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.538 qpair failed and we were unable to recover it. 00:37:40.538 [2024-12-16 12:59:06.495408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.538 [2024-12-16 12:59:06.495440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.538 qpair failed and we were unable to recover it. 00:37:40.538 [2024-12-16 12:59:06.495563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.538 [2024-12-16 12:59:06.495595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.538 qpair failed and we were unable to recover it. 00:37:40.538 [2024-12-16 12:59:06.495801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.538 [2024-12-16 12:59:06.495834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.538 qpair failed and we were unable to recover it. 00:37:40.538 [2024-12-16 12:59:06.496006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.538 [2024-12-16 12:59:06.496038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.538 qpair failed and we were unable to recover it. 
00:37:40.538 [2024-12-16 12:59:06.496209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.538 [2024-12-16 12:59:06.496242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420
00:37:40.538 qpair failed and we were unable to recover it.
00:37:40.544 [... the same three messages repeat for every retry from 12:59:06.496209 through 12:59:06.535040: each connect() to 10.0.0.2, port=4420 fails with errno = 111 (ECONNREFUSED) and tqpair=0x7f84ec000b90 is never recovered ...]
00:37:40.544 [2024-12-16 12:59:06.535145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.544 [2024-12-16 12:59:06.535178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.544 qpair failed and we were unable to recover it. 00:37:40.544 [2024-12-16 12:59:06.535308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.544 [2024-12-16 12:59:06.535346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.544 qpair failed and we were unable to recover it. 00:37:40.544 [2024-12-16 12:59:06.535516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.544 [2024-12-16 12:59:06.535549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.544 qpair failed and we were unable to recover it. 00:37:40.544 [2024-12-16 12:59:06.535721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.544 [2024-12-16 12:59:06.535752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.544 qpair failed and we were unable to recover it. 00:37:40.544 [2024-12-16 12:59:06.535866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.544 [2024-12-16 12:59:06.535898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.544 qpair failed and we were unable to recover it. 00:37:40.544 [2024-12-16 12:59:06.536012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.544 [2024-12-16 12:59:06.536044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.544 qpair failed and we were unable to recover it. 00:37:40.544 [2024-12-16 12:59:06.536218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.544 [2024-12-16 12:59:06.536251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.544 qpair failed and we were unable to recover it. 00:37:40.544 [2024-12-16 12:59:06.536355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.544 [2024-12-16 12:59:06.536386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.544 qpair failed and we were unable to recover it. 00:37:40.544 [2024-12-16 12:59:06.536645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.544 [2024-12-16 12:59:06.536677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.544 qpair failed and we were unable to recover it. 00:37:40.544 [2024-12-16 12:59:06.536847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.544 [2024-12-16 12:59:06.536880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.544 qpair failed and we were unable to recover it. 
00:37:40.544 [2024-12-16 12:59:06.537072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.544 [2024-12-16 12:59:06.537104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.544 qpair failed and we were unable to recover it. 00:37:40.545 [2024-12-16 12:59:06.537241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.545 [2024-12-16 12:59:06.537275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.545 qpair failed and we were unable to recover it. 00:37:40.545 [2024-12-16 12:59:06.537387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.545 [2024-12-16 12:59:06.537419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.545 qpair failed and we were unable to recover it. 00:37:40.545 [2024-12-16 12:59:06.537523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.545 [2024-12-16 12:59:06.537554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.545 qpair failed and we were unable to recover it. 00:37:40.545 [2024-12-16 12:59:06.537667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.545 [2024-12-16 12:59:06.537698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.545 qpair failed and we were unable to recover it. 00:37:40.545 [2024-12-16 12:59:06.537871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.545 [2024-12-16 12:59:06.537902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.545 qpair failed and we were unable to recover it. 00:37:40.545 [2024-12-16 12:59:06.538006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.545 [2024-12-16 12:59:06.538037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.545 qpair failed and we were unable to recover it. 00:37:40.545 [2024-12-16 12:59:06.538251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.545 [2024-12-16 12:59:06.538284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.545 qpair failed and we were unable to recover it. 00:37:40.545 [2024-12-16 12:59:06.538456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.545 [2024-12-16 12:59:06.538487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.545 qpair failed and we were unable to recover it. 00:37:40.545 [2024-12-16 12:59:06.538737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.545 [2024-12-16 12:59:06.538769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.545 qpair failed and we were unable to recover it. 
00:37:40.545 [2024-12-16 12:59:06.538892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.545 [2024-12-16 12:59:06.538922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.545 qpair failed and we were unable to recover it. 00:37:40.545 [2024-12-16 12:59:06.539038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.545 [2024-12-16 12:59:06.539070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.545 qpair failed and we were unable to recover it. 00:37:40.545 [2024-12-16 12:59:06.539205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.545 [2024-12-16 12:59:06.539238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.545 qpair failed and we were unable to recover it. 00:37:40.545 [2024-12-16 12:59:06.539410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.545 [2024-12-16 12:59:06.539441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.545 qpair failed and we were unable to recover it. 00:37:40.545 [2024-12-16 12:59:06.539639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.545 [2024-12-16 12:59:06.539670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.545 qpair failed and we were unable to recover it. 00:37:40.545 [2024-12-16 12:59:06.539775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.545 [2024-12-16 12:59:06.539807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.545 qpair failed and we were unable to recover it. 00:37:40.545 [2024-12-16 12:59:06.539979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.545 [2024-12-16 12:59:06.540011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.545 qpair failed and we were unable to recover it. 00:37:40.545 [2024-12-16 12:59:06.540130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.545 [2024-12-16 12:59:06.540163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.545 qpair failed and we were unable to recover it. 00:37:40.545 [2024-12-16 12:59:06.540286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.545 [2024-12-16 12:59:06.540317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.545 qpair failed and we were unable to recover it. 00:37:40.545 [2024-12-16 12:59:06.540487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.545 [2024-12-16 12:59:06.540518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.545 qpair failed and we were unable to recover it. 
00:37:40.545 [2024-12-16 12:59:06.540690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.545 [2024-12-16 12:59:06.540722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.545 qpair failed and we were unable to recover it. 00:37:40.545 [2024-12-16 12:59:06.540929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.545 [2024-12-16 12:59:06.540961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.545 qpair failed and we were unable to recover it. 00:37:40.545 [2024-12-16 12:59:06.541076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.545 [2024-12-16 12:59:06.541107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.545 qpair failed and we were unable to recover it. 00:37:40.545 [2024-12-16 12:59:06.541309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.545 [2024-12-16 12:59:06.541343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.545 qpair failed and we were unable to recover it. 00:37:40.545 [2024-12-16 12:59:06.541580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.545 [2024-12-16 12:59:06.541612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.545 qpair failed and we were unable to recover it. 00:37:40.545 [2024-12-16 12:59:06.541749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.545 [2024-12-16 12:59:06.541780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.545 qpair failed and we were unable to recover it. 00:37:40.545 [2024-12-16 12:59:06.541960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.545 [2024-12-16 12:59:06.541992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.545 qpair failed and we were unable to recover it. 00:37:40.545 [2024-12-16 12:59:06.542166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.545 [2024-12-16 12:59:06.542200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.545 qpair failed and we were unable to recover it. 00:37:40.545 [2024-12-16 12:59:06.542369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.545 [2024-12-16 12:59:06.542400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.545 qpair failed and we were unable to recover it. 00:37:40.545 [2024-12-16 12:59:06.542584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.545 [2024-12-16 12:59:06.542615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.545 qpair failed and we were unable to recover it. 
00:37:40.545 [2024-12-16 12:59:06.542723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.545 [2024-12-16 12:59:06.542755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.545 qpair failed and we were unable to recover it. 00:37:40.545 [2024-12-16 12:59:06.542859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.545 [2024-12-16 12:59:06.542897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.545 qpair failed and we were unable to recover it. 00:37:40.545 [2024-12-16 12:59:06.543000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.545 [2024-12-16 12:59:06.543032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.545 qpair failed and we were unable to recover it. 00:37:40.545 [2024-12-16 12:59:06.543152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.545 [2024-12-16 12:59:06.543184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.545 qpair failed and we were unable to recover it. 00:37:40.545 [2024-12-16 12:59:06.543355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.545 [2024-12-16 12:59:06.543388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.545 qpair failed and we were unable to recover it. 00:37:40.545 [2024-12-16 12:59:06.543492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.545 [2024-12-16 12:59:06.543523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.545 qpair failed and we were unable to recover it. 00:37:40.545 [2024-12-16 12:59:06.543698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.545 [2024-12-16 12:59:06.543728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.545 qpair failed and we were unable to recover it. 00:37:40.545 [2024-12-16 12:59:06.543923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.545 [2024-12-16 12:59:06.543954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.545 qpair failed and we were unable to recover it. 00:37:40.545 [2024-12-16 12:59:06.544164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.545 [2024-12-16 12:59:06.544199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.546 qpair failed and we were unable to recover it. 00:37:40.546 [2024-12-16 12:59:06.544441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.546 [2024-12-16 12:59:06.544476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.546 qpair failed and we were unable to recover it. 
00:37:40.546 [2024-12-16 12:59:06.544706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.546 [2024-12-16 12:59:06.544739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.546 qpair failed and we were unable to recover it. 00:37:40.546 [2024-12-16 12:59:06.544859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.546 [2024-12-16 12:59:06.544891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.546 qpair failed and we were unable to recover it. 00:37:40.546 [2024-12-16 12:59:06.545066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.546 [2024-12-16 12:59:06.545097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.546 qpair failed and we were unable to recover it. 00:37:40.546 [2024-12-16 12:59:06.545294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.546 [2024-12-16 12:59:06.545329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.546 qpair failed and we were unable to recover it. 00:37:40.546 [2024-12-16 12:59:06.545501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.546 [2024-12-16 12:59:06.545532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.546 qpair failed and we were unable to recover it. 00:37:40.546 [2024-12-16 12:59:06.545713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.546 [2024-12-16 12:59:06.545745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.546 qpair failed and we were unable to recover it. 00:37:40.546 [2024-12-16 12:59:06.545941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.546 [2024-12-16 12:59:06.545975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.546 qpair failed and we were unable to recover it. 00:37:40.546 [2024-12-16 12:59:06.546252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.546 [2024-12-16 12:59:06.546286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.546 qpair failed and we were unable to recover it. 00:37:40.546 [2024-12-16 12:59:06.546401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.546 [2024-12-16 12:59:06.546432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.546 qpair failed and we were unable to recover it. 00:37:40.546 [2024-12-16 12:59:06.546549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.546 [2024-12-16 12:59:06.546581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.546 qpair failed and we were unable to recover it. 
00:37:40.546 [2024-12-16 12:59:06.546768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.546 [2024-12-16 12:59:06.546800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.546 qpair failed and we were unable to recover it. 00:37:40.546 [2024-12-16 12:59:06.546903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.546 [2024-12-16 12:59:06.546933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.546 qpair failed and we were unable to recover it. 00:37:40.546 [2024-12-16 12:59:06.547054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.546 [2024-12-16 12:59:06.547086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.546 qpair failed and we were unable to recover it. 00:37:40.546 [2024-12-16 12:59:06.547401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.546 [2024-12-16 12:59:06.547470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:40.546 qpair failed and we were unable to recover it. 00:37:40.546 [2024-12-16 12:59:06.547693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.546 [2024-12-16 12:59:06.547765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.546 qpair failed and we were unable to recover it. 00:37:40.546 [2024-12-16 12:59:06.547909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.546 [2024-12-16 12:59:06.547945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.546 qpair failed and we were unable to recover it. 00:37:40.546 [2024-12-16 12:59:06.548148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.546 [2024-12-16 12:59:06.548185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.546 qpair failed and we were unable to recover it. 00:37:40.546 [2024-12-16 12:59:06.548311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.546 [2024-12-16 12:59:06.548343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.546 qpair failed and we were unable to recover it. 00:37:40.546 [2024-12-16 12:59:06.548643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.546 [2024-12-16 12:59:06.548698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.546 qpair failed and we were unable to recover it. 00:37:40.546 [2024-12-16 12:59:06.548847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.546 [2024-12-16 12:59:06.548888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.546 qpair failed and we were unable to recover it. 
00:37:40.546 [2024-12-16 12:59:06.549085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.546 [2024-12-16 12:59:06.549135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.546 qpair failed and we were unable to recover it. 00:37:40.546 [2024-12-16 12:59:06.549293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.546 [2024-12-16 12:59:06.549334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.546 qpair failed and we were unable to recover it. 00:37:40.546 [2024-12-16 12:59:06.549477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.546 [2024-12-16 12:59:06.549520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.546 qpair failed and we were unable to recover it. 00:37:40.546 [2024-12-16 12:59:06.549653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.546 [2024-12-16 12:59:06.549695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.546 qpair failed and we were unable to recover it. 00:37:40.546 [2024-12-16 12:59:06.549842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.546 [2024-12-16 12:59:06.549884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.546 qpair failed and we were unable to recover it. 00:37:40.546 [2024-12-16 12:59:06.550023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.546 [2024-12-16 12:59:06.550065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.546 qpair failed and we were unable to recover it. 00:37:40.546 [2024-12-16 12:59:06.550232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.546 [2024-12-16 12:59:06.550275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.546 qpair failed and we were unable to recover it. 00:37:40.546 [2024-12-16 12:59:06.550413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.546 [2024-12-16 12:59:06.550453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.546 qpair failed and we were unable to recover it. 00:37:40.546 [2024-12-16 12:59:06.550662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.546 [2024-12-16 12:59:06.550705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.546 qpair failed and we were unable to recover it. 00:37:40.546 [2024-12-16 12:59:06.550836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.546 [2024-12-16 12:59:06.550874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.546 qpair failed and we were unable to recover it. 
00:37:40.546 [2024-12-16 12:59:06.551037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.546 [2024-12-16 12:59:06.551077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.546 qpair failed and we were unable to recover it. 00:37:40.546 [2024-12-16 12:59:06.551355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.546 [2024-12-16 12:59:06.551416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.546 qpair failed and we were unable to recover it. 00:37:40.546 [2024-12-16 12:59:06.551608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.546 [2024-12-16 12:59:06.551642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.546 qpair failed and we were unable to recover it. 00:37:40.546 [2024-12-16 12:59:06.551751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.546 [2024-12-16 12:59:06.551786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.546 qpair failed and we were unable to recover it. 00:37:40.546 [2024-12-16 12:59:06.551893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.546 [2024-12-16 12:59:06.551925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.546 qpair failed and we were unable to recover it. 00:37:40.546 [2024-12-16 12:59:06.552042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.546 [2024-12-16 12:59:06.552074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.546 qpair failed and we were unable to recover it. 00:37:40.546 [2024-12-16 12:59:06.552207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.546 [2024-12-16 12:59:06.552246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.546 qpair failed and we were unable to recover it. 00:37:40.546 [2024-12-16 12:59:06.552509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.546 [2024-12-16 12:59:06.552541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.546 qpair failed and we were unable to recover it. 00:37:40.547 [2024-12-16 12:59:06.552709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.547 [2024-12-16 12:59:06.552741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.547 qpair failed and we were unable to recover it. 00:37:40.547 [2024-12-16 12:59:06.552911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.547 [2024-12-16 12:59:06.552943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.547 qpair failed and we were unable to recover it. 
00:37:40.547 [2024-12-16 12:59:06.553126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.547 [2024-12-16 12:59:06.553159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.547 qpair failed and we were unable to recover it. 00:37:40.547 [2024-12-16 12:59:06.553329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.547 [2024-12-16 12:59:06.553361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.547 qpair failed and we were unable to recover it. 00:37:40.547 [2024-12-16 12:59:06.553529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.547 [2024-12-16 12:59:06.553560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.547 qpair failed and we were unable to recover it. 00:37:40.547 [2024-12-16 12:59:06.553799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.547 [2024-12-16 12:59:06.553830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.547 qpair failed and we were unable to recover it. 00:37:40.547 [2024-12-16 12:59:06.553930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.547 [2024-12-16 12:59:06.553962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.547 qpair failed and we were unable to recover it. 00:37:40.547 [2024-12-16 12:59:06.554075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.547 [2024-12-16 12:59:06.554107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.547 qpair failed and we were unable to recover it. 00:37:40.547 [2024-12-16 12:59:06.554229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.547 [2024-12-16 12:59:06.554261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.547 qpair failed and we were unable to recover it. 00:37:40.547 [2024-12-16 12:59:06.554372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.547 [2024-12-16 12:59:06.554404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.547 qpair failed and we were unable to recover it. 00:37:40.547 [2024-12-16 12:59:06.554573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.547 [2024-12-16 12:59:06.554605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.547 qpair failed and we were unable to recover it. 00:37:40.547 [2024-12-16 12:59:06.554726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.547 [2024-12-16 12:59:06.554757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.547 qpair failed and we were unable to recover it. 
00:37:40.547 [2024-12-16 12:59:06.554859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.547 [2024-12-16 12:59:06.554891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.547 qpair failed and we were unable to recover it. 00:37:40.547 [2024-12-16 12:59:06.554995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.547 [2024-12-16 12:59:06.555026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.547 qpair failed and we were unable to recover it. 00:37:40.547 [2024-12-16 12:59:06.555218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.547 [2024-12-16 12:59:06.555252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.547 qpair failed and we were unable to recover it. 00:37:40.547 [2024-12-16 12:59:06.555447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.547 [2024-12-16 12:59:06.555478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.547 qpair failed and we were unable to recover it. 00:37:40.547 [2024-12-16 12:59:06.555660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.547 [2024-12-16 12:59:06.555692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.547 qpair failed and we were unable to recover it. 00:37:40.547 [2024-12-16 12:59:06.555949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.547 [2024-12-16 12:59:06.555981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.547 qpair failed and we were unable to recover it. 00:37:40.547 [2024-12-16 12:59:06.556099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.547 [2024-12-16 12:59:06.556143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.547 qpair failed and we were unable to recover it. 00:37:40.547 [2024-12-16 12:59:06.556320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.547 [2024-12-16 12:59:06.556352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.547 qpair failed and we were unable to recover it. 00:37:40.547 [2024-12-16 12:59:06.556530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.547 [2024-12-16 12:59:06.556562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.547 qpair failed and we were unable to recover it. 00:37:40.547 [2024-12-16 12:59:06.556698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.547 [2024-12-16 12:59:06.556730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.547 qpair failed and we were unable to recover it. 
00:37:40.547 [2024-12-16 12:59:06.556851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.547 [2024-12-16 12:59:06.556882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.547 qpair failed and we were unable to recover it. 00:37:40.547 [2024-12-16 12:59:06.557006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.547 [2024-12-16 12:59:06.557037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.547 qpair failed and we were unable to recover it. 00:37:40.547 [2024-12-16 12:59:06.557221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.547 [2024-12-16 12:59:06.557254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.547 qpair failed and we were unable to recover it. 00:37:40.547 [2024-12-16 12:59:06.557394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.547 [2024-12-16 12:59:06.557426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.547 qpair failed and we were unable to recover it. 00:37:40.547 [2024-12-16 12:59:06.557604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.547 [2024-12-16 12:59:06.557635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.547 qpair failed and we were unable to recover it. 00:37:40.547 [2024-12-16 12:59:06.557752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.547 [2024-12-16 12:59:06.557782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.547 qpair failed and we were unable to recover it. 00:37:40.547 [2024-12-16 12:59:06.557996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.547 [2024-12-16 12:59:06.558028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.547 qpair failed and we were unable to recover it. 00:37:40.840 [2024-12-16 12:59:06.558147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.840 [2024-12-16 12:59:06.558181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.840 qpair failed and we were unable to recover it. 00:37:40.840 [2024-12-16 12:59:06.558310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.840 [2024-12-16 12:59:06.558343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.840 qpair failed and we were unable to recover it. 00:37:40.840 [2024-12-16 12:59:06.558526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.840 [2024-12-16 12:59:06.558559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.840 qpair failed and we were unable to recover it. 
00:37:40.840 [2024-12-16 12:59:06.558660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.840 [2024-12-16 12:59:06.558693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.840 qpair failed and we were unable to recover it. 00:37:40.840 [2024-12-16 12:59:06.558820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.840 [2024-12-16 12:59:06.558852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.840 qpair failed and we were unable to recover it. 00:37:40.840 [2024-12-16 12:59:06.558984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.840 [2024-12-16 12:59:06.559018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.840 qpair failed and we were unable to recover it. 00:37:40.840 [2024-12-16 12:59:06.559188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.840 [2024-12-16 12:59:06.559222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.840 qpair failed and we were unable to recover it. 00:37:40.840 [2024-12-16 12:59:06.559401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.840 [2024-12-16 12:59:06.559432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.840 qpair failed and we were unable to recover it. 00:37:40.840 [2024-12-16 12:59:06.559534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.840 [2024-12-16 12:59:06.559565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.840 qpair failed and we were unable to recover it. 00:37:40.840 [2024-12-16 12:59:06.559737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.840 [2024-12-16 12:59:06.559769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.840 qpair failed and we were unable to recover it. 00:37:40.840 [2024-12-16 12:59:06.559871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.840 [2024-12-16 12:59:06.559902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.840 qpair failed and we were unable to recover it. 00:37:40.840 [2024-12-16 12:59:06.560006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.840 [2024-12-16 12:59:06.560038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.840 qpair failed and we were unable to recover it. 00:37:40.840 [2024-12-16 12:59:06.560305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.840 [2024-12-16 12:59:06.560356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.840 qpair failed and we were unable to recover it. 
00:37:40.840 [2024-12-16 12:59:06.560467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.840 [2024-12-16 12:59:06.560498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.840 qpair failed and we were unable to recover it.
00:37:40.846 [the three-line error above repeats back-to-back for every reconnect attempt from 12:59:06.560467 through 12:59:06.603317 (approximately 210 occurrences in this span), always with connect() failing with errno = 111 for tqpair=0x7f84e4000b90, addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it."]
00:37:40.846 [2024-12-16 12:59:06.603518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.846 [2024-12-16 12:59:06.603550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.846 qpair failed and we were unable to recover it. 00:37:40.846 [2024-12-16 12:59:06.603716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.846 [2024-12-16 12:59:06.603746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.846 qpair failed and we were unable to recover it. 00:37:40.846 [2024-12-16 12:59:06.603934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.846 [2024-12-16 12:59:06.603965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.846 qpair failed and we were unable to recover it. 00:37:40.846 [2024-12-16 12:59:06.604149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.846 [2024-12-16 12:59:06.604182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.846 qpair failed and we were unable to recover it. 00:37:40.846 [2024-12-16 12:59:06.604420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.846 [2024-12-16 12:59:06.604453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.846 qpair failed and we were unable to recover it. 00:37:40.846 [2024-12-16 12:59:06.604582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.846 [2024-12-16 12:59:06.604613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.846 qpair failed and we were unable to recover it. 00:37:40.846 [2024-12-16 12:59:06.604728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.846 [2024-12-16 12:59:06.604758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.846 qpair failed and we were unable to recover it. 00:37:40.846 [2024-12-16 12:59:06.604935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.846 [2024-12-16 12:59:06.604965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.846 qpair failed and we were unable to recover it. 00:37:40.846 [2024-12-16 12:59:06.605082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.846 [2024-12-16 12:59:06.605123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.846 qpair failed and we were unable to recover it. 00:37:40.846 [2024-12-16 12:59:06.605287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.846 [2024-12-16 12:59:06.605358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.846 qpair failed and we were unable to recover it. 
00:37:40.846 [2024-12-16 12:59:06.605629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.846 [2024-12-16 12:59:06.605664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.846 qpair failed and we were unable to recover it. 00:37:40.846 [2024-12-16 12:59:06.605905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.846 [2024-12-16 12:59:06.605938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.846 qpair failed and we were unable to recover it. 00:37:40.846 [2024-12-16 12:59:06.606185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.846 [2024-12-16 12:59:06.606220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.846 qpair failed and we were unable to recover it. 00:37:40.846 [2024-12-16 12:59:06.606396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.846 [2024-12-16 12:59:06.606427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.846 qpair failed and we were unable to recover it. 00:37:40.846 [2024-12-16 12:59:06.606666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.846 [2024-12-16 12:59:06.606698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.846 qpair failed and we were unable to recover it. 00:37:40.846 [2024-12-16 12:59:06.606885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.846 [2024-12-16 12:59:06.606916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.846 qpair failed and we were unable to recover it. 00:37:40.846 [2024-12-16 12:59:06.607103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.846 [2024-12-16 12:59:06.607162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.846 qpair failed and we were unable to recover it. 00:37:40.846 [2024-12-16 12:59:06.607371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.846 [2024-12-16 12:59:06.607403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.846 qpair failed and we were unable to recover it. 00:37:40.846 [2024-12-16 12:59:06.607616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.846 [2024-12-16 12:59:06.607649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.846 qpair failed and we were unable to recover it. 00:37:40.846 [2024-12-16 12:59:06.607755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.846 [2024-12-16 12:59:06.607786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.846 qpair failed and we were unable to recover it. 
00:37:40.846 [2024-12-16 12:59:06.607997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.846 [2024-12-16 12:59:06.608028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.846 qpair failed and we were unable to recover it. 00:37:40.846 [2024-12-16 12:59:06.608147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.846 [2024-12-16 12:59:06.608181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.846 qpair failed and we were unable to recover it. 00:37:40.846 [2024-12-16 12:59:06.608364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.846 [2024-12-16 12:59:06.608413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.846 qpair failed and we were unable to recover it. 00:37:40.846 [2024-12-16 12:59:06.608630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.846 [2024-12-16 12:59:06.608662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.846 qpair failed and we were unable to recover it. 00:37:40.846 [2024-12-16 12:59:06.608802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.847 [2024-12-16 12:59:06.608833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.847 qpair failed and we were unable to recover it. 00:37:40.847 [2024-12-16 12:59:06.609010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.847 [2024-12-16 12:59:06.609042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.847 qpair failed and we were unable to recover it. 00:37:40.847 [2024-12-16 12:59:06.609226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.847 [2024-12-16 12:59:06.609260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.847 qpair failed and we were unable to recover it. 00:37:40.847 [2024-12-16 12:59:06.609459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.847 [2024-12-16 12:59:06.609490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.847 qpair failed and we were unable to recover it. 00:37:40.847 [2024-12-16 12:59:06.609667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.847 [2024-12-16 12:59:06.609698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.847 qpair failed and we were unable to recover it. 00:37:40.847 [2024-12-16 12:59:06.609874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.847 [2024-12-16 12:59:06.609906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.847 qpair failed and we were unable to recover it. 
00:37:40.847 [2024-12-16 12:59:06.610077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.847 [2024-12-16 12:59:06.610107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.847 qpair failed and we were unable to recover it. 00:37:40.847 [2024-12-16 12:59:06.610277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.847 [2024-12-16 12:59:06.610310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.847 qpair failed and we were unable to recover it. 00:37:40.847 [2024-12-16 12:59:06.610555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.847 [2024-12-16 12:59:06.610587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.847 qpair failed and we were unable to recover it. 00:37:40.847 [2024-12-16 12:59:06.610701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.847 [2024-12-16 12:59:06.610733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.847 qpair failed and we were unable to recover it. 00:37:40.847 [2024-12-16 12:59:06.610900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.847 [2024-12-16 12:59:06.610932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.847 qpair failed and we were unable to recover it. 00:37:40.847 [2024-12-16 12:59:06.611053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.847 [2024-12-16 12:59:06.611084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.847 qpair failed and we were unable to recover it. 00:37:40.847 [2024-12-16 12:59:06.611232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.847 [2024-12-16 12:59:06.611268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.847 qpair failed and we were unable to recover it. 00:37:40.847 [2024-12-16 12:59:06.611382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.847 [2024-12-16 12:59:06.611413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.847 qpair failed and we were unable to recover it. 00:37:40.847 [2024-12-16 12:59:06.611600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.847 [2024-12-16 12:59:06.611632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.847 qpair failed and we were unable to recover it. 00:37:40.847 [2024-12-16 12:59:06.611815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.847 [2024-12-16 12:59:06.611847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.847 qpair failed and we were unable to recover it. 
00:37:40.847 [2024-12-16 12:59:06.612131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.847 [2024-12-16 12:59:06.612164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.847 qpair failed and we were unable to recover it. 00:37:40.847 [2024-12-16 12:59:06.612336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.847 [2024-12-16 12:59:06.612368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.847 qpair failed and we were unable to recover it. 00:37:40.847 [2024-12-16 12:59:06.612543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.847 [2024-12-16 12:59:06.612574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.847 qpair failed and we were unable to recover it. 00:37:40.847 [2024-12-16 12:59:06.612747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.847 [2024-12-16 12:59:06.612778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.847 qpair failed and we were unable to recover it. 00:37:40.847 [2024-12-16 12:59:06.613038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.847 [2024-12-16 12:59:06.613070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.847 qpair failed and we were unable to recover it. 00:37:40.847 [2024-12-16 12:59:06.613200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.847 [2024-12-16 12:59:06.613234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.847 qpair failed and we were unable to recover it. 00:37:40.847 [2024-12-16 12:59:06.613418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.847 [2024-12-16 12:59:06.613450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.847 qpair failed and we were unable to recover it. 00:37:40.847 [2024-12-16 12:59:06.613581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.847 [2024-12-16 12:59:06.613612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.847 qpair failed and we were unable to recover it. 00:37:40.847 [2024-12-16 12:59:06.613813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.847 [2024-12-16 12:59:06.613845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.847 qpair failed and we were unable to recover it. 00:37:40.847 [2024-12-16 12:59:06.613954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.847 [2024-12-16 12:59:06.613986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.847 qpair failed and we were unable to recover it. 
00:37:40.847 [2024-12-16 12:59:06.614133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.847 [2024-12-16 12:59:06.614167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.847 qpair failed and we were unable to recover it. 00:37:40.847 [2024-12-16 12:59:06.614337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.847 [2024-12-16 12:59:06.614370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.847 qpair failed and we were unable to recover it. 00:37:40.847 [2024-12-16 12:59:06.614495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.847 [2024-12-16 12:59:06.614526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.847 qpair failed and we were unable to recover it. 00:37:40.847 [2024-12-16 12:59:06.614640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.847 [2024-12-16 12:59:06.614672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.847 qpair failed and we were unable to recover it. 00:37:40.847 [2024-12-16 12:59:06.614777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.847 [2024-12-16 12:59:06.614808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.847 qpair failed and we were unable to recover it. 00:37:40.847 [2024-12-16 12:59:06.615025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.847 [2024-12-16 12:59:06.615056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.847 qpair failed and we were unable to recover it. 00:37:40.847 [2024-12-16 12:59:06.615267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.847 [2024-12-16 12:59:06.615304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.847 qpair failed and we were unable to recover it. 00:37:40.847 [2024-12-16 12:59:06.615487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.847 [2024-12-16 12:59:06.615519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.847 qpair failed and we were unable to recover it. 00:37:40.848 [2024-12-16 12:59:06.615707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.848 [2024-12-16 12:59:06.615739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.848 qpair failed and we were unable to recover it. 00:37:40.848 [2024-12-16 12:59:06.615859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.848 [2024-12-16 12:59:06.615891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.848 qpair failed and we were unable to recover it. 
00:37:40.848 [2024-12-16 12:59:06.616038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.848 [2024-12-16 12:59:06.616071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.848 qpair failed and we were unable to recover it. 00:37:40.848 [2024-12-16 12:59:06.616275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.848 [2024-12-16 12:59:06.616308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.848 qpair failed and we were unable to recover it. 00:37:40.848 [2024-12-16 12:59:06.616420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.848 [2024-12-16 12:59:06.616458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.848 qpair failed and we were unable to recover it. 00:37:40.848 [2024-12-16 12:59:06.616642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.848 [2024-12-16 12:59:06.616675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.848 qpair failed and we were unable to recover it. 00:37:40.848 [2024-12-16 12:59:06.616913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.848 [2024-12-16 12:59:06.616945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.848 qpair failed and we were unable to recover it. 00:37:40.848 [2024-12-16 12:59:06.617137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.848 [2024-12-16 12:59:06.617170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.848 qpair failed and we were unable to recover it. 00:37:40.848 [2024-12-16 12:59:06.617348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.848 [2024-12-16 12:59:06.617380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.848 qpair failed and we were unable to recover it. 00:37:40.848 [2024-12-16 12:59:06.617583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.848 [2024-12-16 12:59:06.617615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.848 qpair failed and we were unable to recover it. 00:37:40.848 [2024-12-16 12:59:06.617856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.848 [2024-12-16 12:59:06.617888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.848 qpair failed and we were unable to recover it. 00:37:40.848 [2024-12-16 12:59:06.618065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.848 [2024-12-16 12:59:06.618097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.848 qpair failed and we were unable to recover it. 
00:37:40.848 [2024-12-16 12:59:06.618380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.848 [2024-12-16 12:59:06.618415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.848 qpair failed and we were unable to recover it. 00:37:40.848 [2024-12-16 12:59:06.618528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.848 [2024-12-16 12:59:06.618559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.848 qpair failed and we were unable to recover it. 00:37:40.848 [2024-12-16 12:59:06.618729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.848 [2024-12-16 12:59:06.618760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.848 qpair failed and we were unable to recover it. 00:37:40.848 [2024-12-16 12:59:06.619019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.848 [2024-12-16 12:59:06.619051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.848 qpair failed and we were unable to recover it. 00:37:40.848 [2024-12-16 12:59:06.619237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.848 [2024-12-16 12:59:06.619270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.848 qpair failed and we were unable to recover it. 00:37:40.848 [2024-12-16 12:59:06.619455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.848 [2024-12-16 12:59:06.619487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.848 qpair failed and we were unable to recover it. 00:37:40.848 [2024-12-16 12:59:06.619699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.848 [2024-12-16 12:59:06.619731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.848 qpair failed and we were unable to recover it. 00:37:40.848 [2024-12-16 12:59:06.619975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.848 [2024-12-16 12:59:06.620008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.848 qpair failed and we were unable to recover it. 00:37:40.848 [2024-12-16 12:59:06.620145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.848 [2024-12-16 12:59:06.620177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.848 qpair failed and we were unable to recover it. 00:37:40.848 [2024-12-16 12:59:06.620363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.848 [2024-12-16 12:59:06.620395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.848 qpair failed and we were unable to recover it. 
00:37:40.848 [2024-12-16 12:59:06.620575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.848 [2024-12-16 12:59:06.620607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.848 qpair failed and we were unable to recover it. 00:37:40.848 [2024-12-16 12:59:06.620713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.848 [2024-12-16 12:59:06.620744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.848 qpair failed and we were unable to recover it. 00:37:40.848 [2024-12-16 12:59:06.620872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.848 [2024-12-16 12:59:06.620903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.848 qpair failed and we were unable to recover it. 00:37:40.848 [2024-12-16 12:59:06.621125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.848 [2024-12-16 12:59:06.621158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.848 qpair failed and we were unable to recover it. 00:37:40.848 [2024-12-16 12:59:06.621395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.848 [2024-12-16 12:59:06.621426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.848 qpair failed and we were unable to recover it. 00:37:40.848 [2024-12-16 12:59:06.621543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.848 [2024-12-16 12:59:06.621574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.848 qpair failed and we were unable to recover it. 00:37:40.848 [2024-12-16 12:59:06.621675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.848 [2024-12-16 12:59:06.621707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.848 qpair failed and we were unable to recover it. 00:37:40.848 [2024-12-16 12:59:06.621877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.848 [2024-12-16 12:59:06.621908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.848 qpair failed and we were unable to recover it. 00:37:40.848 [2024-12-16 12:59:06.622028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.848 [2024-12-16 12:59:06.622060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.848 qpair failed and we were unable to recover it. 00:37:40.848 [2024-12-16 12:59:06.622225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.848 [2024-12-16 12:59:06.622296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.848 qpair failed and we were unable to recover it. 
00:37:40.848 [2024-12-16 12:59:06.622499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.848 [2024-12-16 12:59:06.622534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.848 qpair failed and we were unable to recover it. 00:37:40.848 [2024-12-16 12:59:06.622671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.848 [2024-12-16 12:59:06.622703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.848 qpair failed and we were unable to recover it. 00:37:40.848 [2024-12-16 12:59:06.622880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.848 [2024-12-16 12:59:06.622912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.848 qpair failed and we were unable to recover it. 00:37:40.848 [2024-12-16 12:59:06.623103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.848 [2024-12-16 12:59:06.623148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.848 qpair failed and we were unable to recover it. 00:37:40.848 [2024-12-16 12:59:06.623270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.848 [2024-12-16 12:59:06.623302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.849 qpair failed and we were unable to recover it. 00:37:40.849 [2024-12-16 12:59:06.623435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.849 [2024-12-16 12:59:06.623467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.849 qpair failed and we were unable to recover it. 00:37:40.849 [2024-12-16 12:59:06.623652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.849 [2024-12-16 12:59:06.623683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.849 qpair failed and we were unable to recover it. 00:37:40.849 [2024-12-16 12:59:06.623865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.849 [2024-12-16 12:59:06.623896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.849 qpair failed and we were unable to recover it. 00:37:40.849 [2024-12-16 12:59:06.624073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.849 [2024-12-16 12:59:06.624104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.849 qpair failed and we were unable to recover it. 00:37:40.849 [2024-12-16 12:59:06.624296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.849 [2024-12-16 12:59:06.624329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.849 qpair failed and we were unable to recover it. 
00:37:40.849 [2024-12-16 12:59:06.624535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.849 [2024-12-16 12:59:06.624567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.849 qpair failed and we were unable to recover it. 00:37:40.849 [2024-12-16 12:59:06.624853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.849 [2024-12-16 12:59:06.624884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.849 qpair failed and we were unable to recover it. 00:37:40.849 [2024-12-16 12:59:06.625149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.849 [2024-12-16 12:59:06.625183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.849 qpair failed and we were unable to recover it. 00:37:40.849 [2024-12-16 12:59:06.625312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.849 [2024-12-16 12:59:06.625344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.849 qpair failed and we were unable to recover it. 00:37:40.849 [2024-12-16 12:59:06.625553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.849 [2024-12-16 12:59:06.625584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.849 qpair failed and we were unable to recover it. 00:37:40.849 [2024-12-16 12:59:06.625691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.849 [2024-12-16 12:59:06.625723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.849 qpair failed and we were unable to recover it. 00:37:40.849 [2024-12-16 12:59:06.625827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.849 [2024-12-16 12:59:06.625858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.849 qpair failed and we were unable to recover it. 00:37:40.849 [2024-12-16 12:59:06.626049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.849 [2024-12-16 12:59:06.626081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.849 qpair failed and we were unable to recover it. 00:37:40.849 [2024-12-16 12:59:06.626213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.849 [2024-12-16 12:59:06.626245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.849 qpair failed and we were unable to recover it. 00:37:40.849 [2024-12-16 12:59:06.626373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.849 [2024-12-16 12:59:06.626405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.849 qpair failed and we were unable to recover it. 
00:37:40.849 [2024-12-16 12:59:06.626524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.849 [2024-12-16 12:59:06.626555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.849 qpair failed and we were unable to recover it. 00:37:40.849 [2024-12-16 12:59:06.626817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.849 [2024-12-16 12:59:06.626849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.849 qpair failed and we were unable to recover it. 00:37:40.849 [2024-12-16 12:59:06.626954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.849 [2024-12-16 12:59:06.626986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.849 qpair failed and we were unable to recover it. 00:37:40.849 [2024-12-16 12:59:06.627169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.849 [2024-12-16 12:59:06.627202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.849 qpair failed and we were unable to recover it. 00:37:40.849 [2024-12-16 12:59:06.627379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.849 [2024-12-16 12:59:06.627410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.849 qpair failed and we were unable to recover it. 00:37:40.849 [2024-12-16 12:59:06.627513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.849 [2024-12-16 12:59:06.627545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.849 qpair failed and we were unable to recover it. 00:37:40.849 [2024-12-16 12:59:06.627733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.849 [2024-12-16 12:59:06.627771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.849 qpair failed and we were unable to recover it. 00:37:40.849 [2024-12-16 12:59:06.628008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.849 [2024-12-16 12:59:06.628040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.849 qpair failed and we were unable to recover it. 00:37:40.849 [2024-12-16 12:59:06.628148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.849 [2024-12-16 12:59:06.628182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.849 qpair failed and we were unable to recover it. 00:37:40.849 [2024-12-16 12:59:06.628353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.849 [2024-12-16 12:59:06.628385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.849 qpair failed and we were unable to recover it. 
00:37:40.849 [2024-12-16 12:59:06.628643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.849 [2024-12-16 12:59:06.628675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.849 qpair failed and we were unable to recover it. 00:37:40.849 [2024-12-16 12:59:06.628848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.849 [2024-12-16 12:59:06.628880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.849 qpair failed and we were unable to recover it. 00:37:40.849 [2024-12-16 12:59:06.629057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.849 [2024-12-16 12:59:06.629089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.849 qpair failed and we were unable to recover it. 00:37:40.849 [2024-12-16 12:59:06.629205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.849 [2024-12-16 12:59:06.629237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:40.849 qpair failed and we were unable to recover it. 00:37:40.849 [2024-12-16 12:59:06.629493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.849 [2024-12-16 12:59:06.629561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.849 qpair failed and we were unable to recover it. 00:37:40.849 [2024-12-16 12:59:06.629768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.849 [2024-12-16 12:59:06.629810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.849 qpair failed and we were unable to recover it. 00:37:40.849 [2024-12-16 12:59:06.629994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.849 [2024-12-16 12:59:06.630025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.849 qpair failed and we were unable to recover it. 00:37:40.849 [2024-12-16 12:59:06.630205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.850 [2024-12-16 12:59:06.630242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.850 qpair failed and we were unable to recover it. 00:37:40.850 [2024-12-16 12:59:06.630377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.850 [2024-12-16 12:59:06.630410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.850 qpair failed and we were unable to recover it. 00:37:40.850 [2024-12-16 12:59:06.630519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.850 [2024-12-16 12:59:06.630551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.850 qpair failed and we were unable to recover it. 
00:37:40.850 [2024-12-16 12:59:06.630809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.850 [2024-12-16 12:59:06.630841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.850 qpair failed and we were unable to recover it.
00:37:40.850 [... the connect() failed / sock connection error / "qpair failed and we were unable to recover it." triplet above repeats verbatim for every reconnect attempt from 12:59:06.630809 through 12:59:06.633483, all against tqpair=0x7f84ec000b90 ...]
00:37:40.850 [2024-12-16 12:59:06.633626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.850 [2024-12-16 12:59:06.633669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.850 qpair failed and we were unable to recover it.
00:37:40.855 [... the same triplet then repeats for every subsequent attempt from 12:59:06.633626 through 12:59:06.673751, now against tqpair=0x7f84e4000b90; every attempt targets addr=10.0.0.2, port=4420 and fails with errno = 111 ...]
00:37:40.855 [2024-12-16 12:59:06.673987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.855 [2024-12-16 12:59:06.674019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.855 qpair failed and we were unable to recover it. 00:37:40.855 [2024-12-16 12:59:06.674156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.855 [2024-12-16 12:59:06.674188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.855 qpair failed and we were unable to recover it. 00:37:40.855 [2024-12-16 12:59:06.674432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.855 [2024-12-16 12:59:06.674464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.855 qpair failed and we were unable to recover it. 00:37:40.855 [2024-12-16 12:59:06.674714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.855 [2024-12-16 12:59:06.674746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.855 qpair failed and we were unable to recover it. 00:37:40.855 [2024-12-16 12:59:06.674871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.855 [2024-12-16 12:59:06.674903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.855 qpair failed and we were unable to recover it. 00:37:40.855 [2024-12-16 12:59:06.675144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.855 [2024-12-16 12:59:06.675194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.855 qpair failed and we were unable to recover it. 00:37:40.855 [2024-12-16 12:59:06.675425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.855 [2024-12-16 12:59:06.675456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.855 qpair failed and we were unable to recover it. 00:37:40.855 [2024-12-16 12:59:06.675564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.855 [2024-12-16 12:59:06.675595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.855 qpair failed and we were unable to recover it. 00:37:40.855 [2024-12-16 12:59:06.675769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.855 [2024-12-16 12:59:06.675800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.855 qpair failed and we were unable to recover it. 00:37:40.855 [2024-12-16 12:59:06.676058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.855 [2024-12-16 12:59:06.676089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.856 qpair failed and we were unable to recover it. 
00:37:40.856 [2024-12-16 12:59:06.676218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.856 [2024-12-16 12:59:06.676250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.856 qpair failed and we were unable to recover it. 00:37:40.856 [2024-12-16 12:59:06.676380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.856 [2024-12-16 12:59:06.676411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.856 qpair failed and we were unable to recover it. 00:37:40.856 [2024-12-16 12:59:06.676589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.856 [2024-12-16 12:59:06.676620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.856 qpair failed and we were unable to recover it. 00:37:40.856 [2024-12-16 12:59:06.676792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.856 [2024-12-16 12:59:06.676823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.856 qpair failed and we were unable to recover it. 00:37:40.856 [2024-12-16 12:59:06.676995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.856 [2024-12-16 12:59:06.677027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.856 qpair failed and we were unable to recover it. 00:37:40.856 [2024-12-16 12:59:06.677198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.856 [2024-12-16 12:59:06.677231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.856 qpair failed and we were unable to recover it. 00:37:40.856 [2024-12-16 12:59:06.677495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.856 [2024-12-16 12:59:06.677527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.856 qpair failed and we were unable to recover it. 00:37:40.856 [2024-12-16 12:59:06.677653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.856 [2024-12-16 12:59:06.677685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.856 qpair failed and we were unable to recover it. 00:37:40.856 [2024-12-16 12:59:06.677873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.856 [2024-12-16 12:59:06.677904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.856 qpair failed and we were unable to recover it. 00:37:40.856 [2024-12-16 12:59:06.678020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.856 [2024-12-16 12:59:06.678052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.856 qpair failed and we were unable to recover it. 
00:37:40.856 [2024-12-16 12:59:06.678269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.856 [2024-12-16 12:59:06.678302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.856 qpair failed and we were unable to recover it. 00:37:40.856 [2024-12-16 12:59:06.678422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.856 [2024-12-16 12:59:06.678454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.856 qpair failed and we were unable to recover it. 00:37:40.856 [2024-12-16 12:59:06.678621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.856 [2024-12-16 12:59:06.678653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.856 qpair failed and we were unable to recover it. 00:37:40.856 [2024-12-16 12:59:06.678889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.856 [2024-12-16 12:59:06.678920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.856 qpair failed and we were unable to recover it. 00:37:40.856 [2024-12-16 12:59:06.679159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.856 [2024-12-16 12:59:06.679193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.856 qpair failed and we were unable to recover it. 00:37:40.856 [2024-12-16 12:59:06.679309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.856 [2024-12-16 12:59:06.679341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.856 qpair failed and we were unable to recover it. 00:37:40.856 [2024-12-16 12:59:06.679516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.856 [2024-12-16 12:59:06.679547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.856 qpair failed and we were unable to recover it. 00:37:40.856 [2024-12-16 12:59:06.679663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.856 [2024-12-16 12:59:06.679694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.856 qpair failed and we were unable to recover it. 00:37:40.856 [2024-12-16 12:59:06.679808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.856 [2024-12-16 12:59:06.679840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.856 qpair failed and we were unable to recover it. 00:37:40.856 [2024-12-16 12:59:06.679964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.856 [2024-12-16 12:59:06.679995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.856 qpair failed and we were unable to recover it. 
00:37:40.856 [2024-12-16 12:59:06.680132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.856 [2024-12-16 12:59:06.680166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.856 qpair failed and we were unable to recover it. 00:37:40.856 [2024-12-16 12:59:06.680294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.856 [2024-12-16 12:59:06.680326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.856 qpair failed and we were unable to recover it. 00:37:40.856 [2024-12-16 12:59:06.680499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.856 [2024-12-16 12:59:06.680530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.856 qpair failed and we were unable to recover it. 00:37:40.856 [2024-12-16 12:59:06.680780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.856 [2024-12-16 12:59:06.680811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.856 qpair failed and we were unable to recover it. 00:37:40.856 [2024-12-16 12:59:06.680993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.856 [2024-12-16 12:59:06.681024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.856 qpair failed and we were unable to recover it. 00:37:40.856 [2024-12-16 12:59:06.681198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.856 [2024-12-16 12:59:06.681231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.856 qpair failed and we were unable to recover it. 00:37:40.856 [2024-12-16 12:59:06.681375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.856 [2024-12-16 12:59:06.681407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.856 qpair failed and we were unable to recover it. 00:37:40.856 [2024-12-16 12:59:06.681513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.856 [2024-12-16 12:59:06.681544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.856 qpair failed and we were unable to recover it. 00:37:40.856 [2024-12-16 12:59:06.681721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.856 [2024-12-16 12:59:06.681752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.856 qpair failed and we were unable to recover it. 00:37:40.856 [2024-12-16 12:59:06.681959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.856 [2024-12-16 12:59:06.681991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.856 qpair failed and we were unable to recover it. 
00:37:40.856 [2024-12-16 12:59:06.682090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.856 [2024-12-16 12:59:06.682131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.856 qpair failed and we were unable to recover it. 00:37:40.856 [2024-12-16 12:59:06.682247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.856 [2024-12-16 12:59:06.682278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.856 qpair failed and we were unable to recover it. 00:37:40.856 [2024-12-16 12:59:06.682403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.856 [2024-12-16 12:59:06.682435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.856 qpair failed and we were unable to recover it. 00:37:40.856 [2024-12-16 12:59:06.682574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.856 [2024-12-16 12:59:06.682611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.856 qpair failed and we were unable to recover it. 00:37:40.856 [2024-12-16 12:59:06.682724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.856 [2024-12-16 12:59:06.682756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.856 qpair failed and we were unable to recover it. 00:37:40.856 [2024-12-16 12:59:06.682869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.856 [2024-12-16 12:59:06.682901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.856 qpair failed and we were unable to recover it. 00:37:40.856 [2024-12-16 12:59:06.683007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.856 [2024-12-16 12:59:06.683037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.856 qpair failed and we were unable to recover it. 00:37:40.856 [2024-12-16 12:59:06.683148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.856 [2024-12-16 12:59:06.683181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.856 qpair failed and we were unable to recover it. 00:37:40.857 [2024-12-16 12:59:06.683371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.857 [2024-12-16 12:59:06.683403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.857 qpair failed and we were unable to recover it. 00:37:40.857 [2024-12-16 12:59:06.683539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.857 [2024-12-16 12:59:06.683571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.857 qpair failed and we were unable to recover it. 
00:37:40.857 [2024-12-16 12:59:06.683677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.857 [2024-12-16 12:59:06.683709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.857 qpair failed and we were unable to recover it. 00:37:40.857 [2024-12-16 12:59:06.683813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.857 [2024-12-16 12:59:06.683845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.857 qpair failed and we were unable to recover it. 00:37:40.857 [2024-12-16 12:59:06.683947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.857 [2024-12-16 12:59:06.683978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.857 qpair failed and we were unable to recover it. 00:37:40.857 [2024-12-16 12:59:06.684104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.857 [2024-12-16 12:59:06.684145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.857 qpair failed and we were unable to recover it. 00:37:40.857 [2024-12-16 12:59:06.684315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.857 [2024-12-16 12:59:06.684347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.857 qpair failed and we were unable to recover it. 00:37:40.857 [2024-12-16 12:59:06.684540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.857 [2024-12-16 12:59:06.684572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.857 qpair failed and we were unable to recover it. 00:37:40.857 [2024-12-16 12:59:06.684768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.857 [2024-12-16 12:59:06.684799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.857 qpair failed and we were unable to recover it. 00:37:40.857 [2024-12-16 12:59:06.684985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.857 [2024-12-16 12:59:06.685017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.857 qpair failed and we were unable to recover it. 00:37:40.857 [2024-12-16 12:59:06.685202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.857 [2024-12-16 12:59:06.685234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.857 qpair failed and we were unable to recover it. 00:37:40.857 [2024-12-16 12:59:06.685353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.857 [2024-12-16 12:59:06.685384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.857 qpair failed and we were unable to recover it. 
00:37:40.857 [2024-12-16 12:59:06.685503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.857 [2024-12-16 12:59:06.685534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.857 qpair failed and we were unable to recover it. 00:37:40.857 [2024-12-16 12:59:06.685639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.857 [2024-12-16 12:59:06.685671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.857 qpair failed and we were unable to recover it. 00:37:40.857 [2024-12-16 12:59:06.685860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.857 [2024-12-16 12:59:06.685891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.857 qpair failed and we were unable to recover it. 00:37:40.857 [2024-12-16 12:59:06.686078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.857 [2024-12-16 12:59:06.686109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.857 qpair failed and we were unable to recover it. 00:37:40.857 [2024-12-16 12:59:06.686243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.857 [2024-12-16 12:59:06.686275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.857 qpair failed and we were unable to recover it. 00:37:40.857 [2024-12-16 12:59:06.686378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.857 [2024-12-16 12:59:06.686410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.857 qpair failed and we were unable to recover it. 00:37:40.857 [2024-12-16 12:59:06.686609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.857 [2024-12-16 12:59:06.686641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.857 qpair failed and we were unable to recover it. 00:37:40.857 [2024-12-16 12:59:06.686821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.857 [2024-12-16 12:59:06.686852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.857 qpair failed and we were unable to recover it. 00:37:40.857 [2024-12-16 12:59:06.686974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.857 [2024-12-16 12:59:06.687006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.857 qpair failed and we were unable to recover it. 00:37:40.857 [2024-12-16 12:59:06.687194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.857 [2024-12-16 12:59:06.687227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.857 qpair failed and we were unable to recover it. 
00:37:40.857 [2024-12-16 12:59:06.687345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.857 [2024-12-16 12:59:06.687377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.857 qpair failed and we were unable to recover it. 00:37:40.857 [2024-12-16 12:59:06.687489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.857 [2024-12-16 12:59:06.687520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.857 qpair failed and we were unable to recover it. 00:37:40.857 [2024-12-16 12:59:06.687635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.857 [2024-12-16 12:59:06.687666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.857 qpair failed and we were unable to recover it. 00:37:40.857 [2024-12-16 12:59:06.687886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.857 [2024-12-16 12:59:06.687918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.857 qpair failed and we were unable to recover it. 00:37:40.857 [2024-12-16 12:59:06.688094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.857 [2024-12-16 12:59:06.688135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.857 qpair failed and we were unable to recover it. 00:37:40.857 [2024-12-16 12:59:06.688316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.857 [2024-12-16 12:59:06.688347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.857 qpair failed and we were unable to recover it. 00:37:40.857 [2024-12-16 12:59:06.688488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.857 [2024-12-16 12:59:06.688520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.857 qpair failed and we were unable to recover it. 00:37:40.857 [2024-12-16 12:59:06.688766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.857 [2024-12-16 12:59:06.688798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.857 qpair failed and we were unable to recover it. 00:37:40.857 [2024-12-16 12:59:06.688928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.857 [2024-12-16 12:59:06.688959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.857 qpair failed and we were unable to recover it. 00:37:40.857 [2024-12-16 12:59:06.689197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.857 [2024-12-16 12:59:06.689230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.857 qpair failed and we were unable to recover it. 
00:37:40.857 [2024-12-16 12:59:06.689397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.857 [2024-12-16 12:59:06.689429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.857 qpair failed and we were unable to recover it. 00:37:40.857 [2024-12-16 12:59:06.689604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.857 [2024-12-16 12:59:06.689635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.857 qpair failed and we were unable to recover it. 00:37:40.857 [2024-12-16 12:59:06.689738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.857 [2024-12-16 12:59:06.689769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.857 qpair failed and we were unable to recover it. 00:37:40.857 [2024-12-16 12:59:06.689953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.857 [2024-12-16 12:59:06.689991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.857 qpair failed and we were unable to recover it. 00:37:40.857 [2024-12-16 12:59:06.690167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.857 [2024-12-16 12:59:06.690200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.857 qpair failed and we were unable to recover it. 00:37:40.857 [2024-12-16 12:59:06.690380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.857 [2024-12-16 12:59:06.690411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.857 qpair failed and we were unable to recover it. 00:37:40.858 [2024-12-16 12:59:06.690528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.858 [2024-12-16 12:59:06.690559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.858 qpair failed and we were unable to recover it. 00:37:40.858 [2024-12-16 12:59:06.690690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.858 [2024-12-16 12:59:06.690722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.858 qpair failed and we were unable to recover it. 00:37:40.858 [2024-12-16 12:59:06.690907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.858 [2024-12-16 12:59:06.690939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.858 qpair failed and we were unable to recover it. 00:37:40.858 [2024-12-16 12:59:06.691134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.858 [2024-12-16 12:59:06.691166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.858 qpair failed and we were unable to recover it. 
00:37:40.858 [2024-12-16 12:59:06.691294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.858 [2024-12-16 12:59:06.691325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.858 qpair failed and we were unable to recover it. 00:37:40.858 [2024-12-16 12:59:06.691432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.858 [2024-12-16 12:59:06.691464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.858 qpair failed and we were unable to recover it. 00:37:40.858 [2024-12-16 12:59:06.691593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.858 [2024-12-16 12:59:06.691624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.858 qpair failed and we were unable to recover it. 00:37:40.858 [2024-12-16 12:59:06.691876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.858 [2024-12-16 12:59:06.691907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.858 qpair failed and we were unable to recover it. 00:37:40.858 [2024-12-16 12:59:06.692076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.858 [2024-12-16 12:59:06.692106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.858 qpair failed and we were unable to recover it. 00:37:40.858 [2024-12-16 12:59:06.692226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.858 [2024-12-16 12:59:06.692258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.858 qpair failed and we were unable to recover it. 00:37:40.858 [2024-12-16 12:59:06.692447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.858 [2024-12-16 12:59:06.692479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.858 qpair failed and we were unable to recover it. 00:37:40.858 [2024-12-16 12:59:06.692669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.858 [2024-12-16 12:59:06.692701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.858 qpair failed and we were unable to recover it. 00:37:40.858 [2024-12-16 12:59:06.692879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.858 [2024-12-16 12:59:06.692911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.858 qpair failed and we were unable to recover it. 00:37:40.858 [2024-12-16 12:59:06.693096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.858 [2024-12-16 12:59:06.693137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.858 qpair failed and we were unable to recover it. 
00:37:40.858 [2024-12-16 12:59:06.693420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.858 [2024-12-16 12:59:06.693452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.858 qpair failed and we were unable to recover it. 00:37:40.858 [2024-12-16 12:59:06.693576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.858 [2024-12-16 12:59:06.693608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.858 qpair failed and we were unable to recover it. 00:37:40.858 [2024-12-16 12:59:06.693812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.858 [2024-12-16 12:59:06.693843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.858 qpair failed and we were unable to recover it. 00:37:40.858 [2024-12-16 12:59:06.693988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.858 [2024-12-16 12:59:06.694020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.858 qpair failed and we were unable to recover it. 00:37:40.858 [2024-12-16 12:59:06.694163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.858 [2024-12-16 12:59:06.694196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.858 qpair failed and we were unable to recover it. 00:37:40.858 [2024-12-16 12:59:06.694315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.858 [2024-12-16 12:59:06.694347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.858 qpair failed and we were unable to recover it. 00:37:40.858 [2024-12-16 12:59:06.694525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.858 [2024-12-16 12:59:06.694556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.858 qpair failed and we were unable to recover it. 00:37:40.858 [2024-12-16 12:59:06.694723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.858 [2024-12-16 12:59:06.694754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.858 qpair failed and we were unable to recover it. 00:37:40.858 [2024-12-16 12:59:06.694874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.858 [2024-12-16 12:59:06.694906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.858 qpair failed and we were unable to recover it. 00:37:40.858 [2024-12-16 12:59:06.695087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.858 [2024-12-16 12:59:06.695128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.858 qpair failed and we were unable to recover it. 
00:37:40.858 [2024-12-16 12:59:06.695257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.858 [2024-12-16 12:59:06.695288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.858 qpair failed and we were unable to recover it. 00:37:40.858 [2024-12-16 12:59:06.695411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.858 [2024-12-16 12:59:06.695442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.858 qpair failed and we were unable to recover it. 00:37:40.858 [2024-12-16 12:59:06.695621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.858 [2024-12-16 12:59:06.695652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.858 qpair failed and we were unable to recover it. 00:37:40.858 [2024-12-16 12:59:06.695843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.858 [2024-12-16 12:59:06.695873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.858 qpair failed and we were unable to recover it. 00:37:40.859 [2024-12-16 12:59:06.696112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.859 [2024-12-16 12:59:06.696153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.859 qpair failed and we were unable to recover it. 00:37:40.859 [2024-12-16 12:59:06.696257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.859 [2024-12-16 12:59:06.696289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.859 qpair failed and we were unable to recover it. 00:37:40.859 [2024-12-16 12:59:06.696474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.859 [2024-12-16 12:59:06.696505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.859 qpair failed and we were unable to recover it. 00:37:40.859 [2024-12-16 12:59:06.696745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.859 [2024-12-16 12:59:06.696776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.859 qpair failed and we were unable to recover it. 00:37:40.859 [2024-12-16 12:59:06.696948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.859 [2024-12-16 12:59:06.696979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.859 qpair failed and we were unable to recover it. 00:37:40.859 [2024-12-16 12:59:06.697161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.859 [2024-12-16 12:59:06.697194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.859 qpair failed and we were unable to recover it. 
00:37:40.859 [2024-12-16 12:59:06.697388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.859 [2024-12-16 12:59:06.697420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.859 qpair failed and we were unable to recover it. 00:37:40.859 [2024-12-16 12:59:06.697525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.859 [2024-12-16 12:59:06.697556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.859 qpair failed and we were unable to recover it. 00:37:40.859 [2024-12-16 12:59:06.697656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.859 [2024-12-16 12:59:06.697687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.859 qpair failed and we were unable to recover it. 00:37:40.859 [2024-12-16 12:59:06.697898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.859 [2024-12-16 12:59:06.697935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.859 qpair failed and we were unable to recover it. 00:37:40.859 [2024-12-16 12:59:06.698129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.859 [2024-12-16 12:59:06.698162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.859 qpair failed and we were unable to recover it. 00:37:40.859 [2024-12-16 12:59:06.698269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.859 [2024-12-16 12:59:06.698301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.859 qpair failed and we were unable to recover it. 00:37:40.859 [2024-12-16 12:59:06.698470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.859 [2024-12-16 12:59:06.698501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.859 qpair failed and we were unable to recover it. 00:37:40.859 [2024-12-16 12:59:06.698621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.859 [2024-12-16 12:59:06.698652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.859 qpair failed and we were unable to recover it. 00:37:40.859 [2024-12-16 12:59:06.698846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.859 [2024-12-16 12:59:06.698877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.859 qpair failed and we were unable to recover it. 00:37:40.859 [2024-12-16 12:59:06.699084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.859 [2024-12-16 12:59:06.699125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.859 qpair failed and we were unable to recover it. 
00:37:40.859 [2024-12-16 12:59:06.699299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.859 [2024-12-16 12:59:06.699330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.859 qpair failed and we were unable to recover it. 00:37:40.859 [2024-12-16 12:59:06.699446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.859 [2024-12-16 12:59:06.699477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.859 qpair failed and we were unable to recover it. 00:37:40.859 [2024-12-16 12:59:06.699663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.859 [2024-12-16 12:59:06.699694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.859 qpair failed and we were unable to recover it. 00:37:40.859 [2024-12-16 12:59:06.699942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.859 [2024-12-16 12:59:06.699973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.859 qpair failed and we were unable to recover it. 00:37:40.859 [2024-12-16 12:59:06.700157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.859 [2024-12-16 12:59:06.700190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.859 qpair failed and we were unable to recover it. 00:37:40.859 [2024-12-16 12:59:06.700316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.859 [2024-12-16 12:59:06.700348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.859 qpair failed and we were unable to recover it. 00:37:40.859 [2024-12-16 12:59:06.700523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.859 [2024-12-16 12:59:06.700553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.859 qpair failed and we were unable to recover it. 00:37:40.859 [2024-12-16 12:59:06.700734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.859 [2024-12-16 12:59:06.700765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.859 qpair failed and we were unable to recover it. 00:37:40.859 [2024-12-16 12:59:06.700959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.859 [2024-12-16 12:59:06.700991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.859 qpair failed and we were unable to recover it. 00:37:40.859 [2024-12-16 12:59:06.701166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.859 [2024-12-16 12:59:06.701198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.859 qpair failed and we were unable to recover it. 
00:37:40.859 [2024-12-16 12:59:06.701447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.859 [2024-12-16 12:59:06.701479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.859 qpair failed and we were unable to recover it. 00:37:40.859 [2024-12-16 12:59:06.701612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.859 [2024-12-16 12:59:06.701643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.859 qpair failed and we were unable to recover it. 00:37:40.859 [2024-12-16 12:59:06.701822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.859 [2024-12-16 12:59:06.701854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.859 qpair failed and we were unable to recover it. 00:37:40.859 [2024-12-16 12:59:06.701968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.859 [2024-12-16 12:59:06.701999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.859 qpair failed and we were unable to recover it. 00:37:40.859 [2024-12-16 12:59:06.702096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.859 [2024-12-16 12:59:06.702145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.859 qpair failed and we were unable to recover it. 00:37:40.859 [2024-12-16 12:59:06.702312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.859 [2024-12-16 12:59:06.702344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.859 qpair failed and we were unable to recover it. 00:37:40.859 [2024-12-16 12:59:06.702514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.859 [2024-12-16 12:59:06.702546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.859 qpair failed and we were unable to recover it. 00:37:40.859 [2024-12-16 12:59:06.702668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.859 [2024-12-16 12:59:06.702699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.859 qpair failed and we were unable to recover it. 00:37:40.859 [2024-12-16 12:59:06.702831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.859 [2024-12-16 12:59:06.702862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.859 qpair failed and we were unable to recover it. 00:37:40.859 [2024-12-16 12:59:06.703104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.860 [2024-12-16 12:59:06.703144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.860 qpair failed and we were unable to recover it. 
00:37:40.860 [2024-12-16 12:59:06.703339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.860 [2024-12-16 12:59:06.703371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.860 qpair failed and we were unable to recover it. 00:37:40.860 [2024-12-16 12:59:06.703540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.860 [2024-12-16 12:59:06.703571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.860 qpair failed and we were unable to recover it. 00:37:40.860 [2024-12-16 12:59:06.703785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.860 [2024-12-16 12:59:06.703816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.860 qpair failed and we were unable to recover it. 00:37:40.860 [2024-12-16 12:59:06.704067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.860 [2024-12-16 12:59:06.704097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.860 qpair failed and we were unable to recover it. 00:37:40.860 [2024-12-16 12:59:06.704288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.860 [2024-12-16 12:59:06.704321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.860 qpair failed and we were unable to recover it. 00:37:40.860 [2024-12-16 12:59:06.704429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.860 [2024-12-16 12:59:06.704461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.860 qpair failed and we were unable to recover it. 00:37:40.860 [2024-12-16 12:59:06.704704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.860 [2024-12-16 12:59:06.704735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.860 qpair failed and we were unable to recover it. 00:37:40.860 [2024-12-16 12:59:06.704925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.860 [2024-12-16 12:59:06.704957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.860 qpair failed and we were unable to recover it. 00:37:40.860 [2024-12-16 12:59:06.705134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.860 [2024-12-16 12:59:06.705167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.860 qpair failed and we were unable to recover it. 00:37:40.860 [2024-12-16 12:59:06.705351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.860 [2024-12-16 12:59:06.705383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.860 qpair failed and we were unable to recover it. 
00:37:40.860 [2024-12-16 12:59:06.705583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.860 [2024-12-16 12:59:06.705615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.860 qpair failed and we were unable to recover it. 00:37:40.860 [2024-12-16 12:59:06.705714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.860 [2024-12-16 12:59:06.705745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.860 qpair failed and we were unable to recover it. 00:37:40.860 [2024-12-16 12:59:06.705870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.860 [2024-12-16 12:59:06.705901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.860 qpair failed and we were unable to recover it. 00:37:40.860 [2024-12-16 12:59:06.706001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.860 [2024-12-16 12:59:06.706037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.860 qpair failed and we were unable to recover it. 00:37:40.860 [2024-12-16 12:59:06.706139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.860 [2024-12-16 12:59:06.706172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.860 qpair failed and we were unable to recover it. 00:37:40.860 [2024-12-16 12:59:06.706343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.860 [2024-12-16 12:59:06.706374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.860 qpair failed and we were unable to recover it. 00:37:40.860 [2024-12-16 12:59:06.706575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.860 [2024-12-16 12:59:06.706607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.860 qpair failed and we were unable to recover it. 00:37:40.860 [2024-12-16 12:59:06.706846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.860 [2024-12-16 12:59:06.706877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.860 qpair failed and we were unable to recover it. 00:37:40.860 [2024-12-16 12:59:06.707095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.860 [2024-12-16 12:59:06.707137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.860 qpair failed and we were unable to recover it. 00:37:40.860 [2024-12-16 12:59:06.707321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.860 [2024-12-16 12:59:06.707352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.860 qpair failed and we were unable to recover it. 
00:37:40.860 [2024-12-16 12:59:06.707523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.860 [2024-12-16 12:59:06.707554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.860 qpair failed and we were unable to recover it. 00:37:40.860 [2024-12-16 12:59:06.707729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.860 [2024-12-16 12:59:06.707761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.860 qpair failed and we were unable to recover it. 00:37:40.860 [2024-12-16 12:59:06.707884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.860 [2024-12-16 12:59:06.707916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.860 qpair failed and we were unable to recover it. 00:37:40.860 [2024-12-16 12:59:06.708101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.860 [2024-12-16 12:59:06.708141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.860 qpair failed and we were unable to recover it. 00:37:40.860 [2024-12-16 12:59:06.708276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.860 [2024-12-16 12:59:06.708307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.860 qpair failed and we were unable to recover it. 00:37:40.860 [2024-12-16 12:59:06.708442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.860 [2024-12-16 12:59:06.708473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.860 qpair failed and we were unable to recover it. 00:37:40.860 [2024-12-16 12:59:06.708708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.860 [2024-12-16 12:59:06.708739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.860 qpair failed and we were unable to recover it. 00:37:40.860 [2024-12-16 12:59:06.708943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.860 [2024-12-16 12:59:06.708975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.860 qpair failed and we were unable to recover it. 00:37:40.860 [2024-12-16 12:59:06.709216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.860 [2024-12-16 12:59:06.709248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.860 qpair failed and we were unable to recover it. 00:37:40.860 [2024-12-16 12:59:06.709388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.860 [2024-12-16 12:59:06.709419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.860 qpair failed and we were unable to recover it. 
00:37:40.860 [2024-12-16 12:59:06.709531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.860 [2024-12-16 12:59:06.709563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.860 qpair failed and we were unable to recover it. 00:37:40.860 [2024-12-16 12:59:06.709735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.860 [2024-12-16 12:59:06.709766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.860 qpair failed and we were unable to recover it. 00:37:40.860 [2024-12-16 12:59:06.709867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.860 [2024-12-16 12:59:06.709898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.860 qpair failed and we were unable to recover it. 00:37:40.860 [2024-12-16 12:59:06.710011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.860 [2024-12-16 12:59:06.710041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.860 qpair failed and we were unable to recover it. 00:37:40.860 [2024-12-16 12:59:06.710157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.860 [2024-12-16 12:59:06.710189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.860 qpair failed and we were unable to recover it. 00:37:40.860 [2024-12-16 12:59:06.710359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.860 [2024-12-16 12:59:06.710390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.860 qpair failed and we were unable to recover it. 00:37:40.860 [2024-12-16 12:59:06.710632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.860 [2024-12-16 12:59:06.710664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.860 qpair failed and we were unable to recover it. 00:37:40.860 [2024-12-16 12:59:06.710929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.860 [2024-12-16 12:59:06.710961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.860 qpair failed and we were unable to recover it. 00:37:40.861 [2024-12-16 12:59:06.711101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.861 [2024-12-16 12:59:06.711154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.861 qpair failed and we were unable to recover it. 00:37:40.861 [2024-12-16 12:59:06.711283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.861 [2024-12-16 12:59:06.711314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.861 qpair failed and we were unable to recover it. 
00:37:40.861 [2024-12-16 12:59:06.711497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.861 [2024-12-16 12:59:06.711528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.861 qpair failed and we were unable to recover it. 00:37:40.861 [2024-12-16 12:59:06.711648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.861 [2024-12-16 12:59:06.711679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.861 qpair failed and we were unable to recover it. 00:37:40.861 [2024-12-16 12:59:06.711785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.861 [2024-12-16 12:59:06.711816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.861 qpair failed and we were unable to recover it. 00:37:40.861 [2024-12-16 12:59:06.711925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.861 [2024-12-16 12:59:06.711956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.861 qpair failed and we were unable to recover it. 00:37:40.861 [2024-12-16 12:59:06.712080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.861 [2024-12-16 12:59:06.712112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.861 qpair failed and we were unable to recover it. 00:37:40.861 [2024-12-16 12:59:06.712294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.861 [2024-12-16 12:59:06.712325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.861 qpair failed and we were unable to recover it. 00:37:40.861 [2024-12-16 12:59:06.712429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.861 [2024-12-16 12:59:06.712461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.861 qpair failed and we were unable to recover it. 00:37:40.861 [2024-12-16 12:59:06.712652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.861 [2024-12-16 12:59:06.712684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.861 qpair failed and we were unable to recover it. 00:37:40.861 [2024-12-16 12:59:06.712876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.861 [2024-12-16 12:59:06.712907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.861 qpair failed and we were unable to recover it. 00:37:40.861 [2024-12-16 12:59:06.713090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.861 [2024-12-16 12:59:06.713130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.861 qpair failed and we were unable to recover it. 
00:37:40.861 [2024-12-16 12:59:06.713311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.861 [2024-12-16 12:59:06.713343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.861 qpair failed and we were unable to recover it. 00:37:40.861 [2024-12-16 12:59:06.713543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.861 [2024-12-16 12:59:06.713575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.861 qpair failed and we were unable to recover it. 00:37:40.861 [2024-12-16 12:59:06.713685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.861 [2024-12-16 12:59:06.713716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.861 qpair failed and we were unable to recover it. 00:37:40.861 [2024-12-16 12:59:06.713836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.861 [2024-12-16 12:59:06.713874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.861 qpair failed and we were unable to recover it. 00:37:40.861 [2024-12-16 12:59:06.714151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.861 [2024-12-16 12:59:06.714184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.861 qpair failed and we were unable to recover it. 00:37:40.861 [2024-12-16 12:59:06.714422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.861 [2024-12-16 12:59:06.714454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.861 qpair failed and we were unable to recover it. 00:37:40.861 [2024-12-16 12:59:06.714644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.861 [2024-12-16 12:59:06.714677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.861 qpair failed and we were unable to recover it. 00:37:40.861 [2024-12-16 12:59:06.714914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.861 [2024-12-16 12:59:06.714945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.861 qpair failed and we were unable to recover it. 00:37:40.861 [2024-12-16 12:59:06.715134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.861 [2024-12-16 12:59:06.715167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.861 qpair failed and we were unable to recover it. 00:37:40.861 [2024-12-16 12:59:06.715347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.861 [2024-12-16 12:59:06.715380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.861 qpair failed and we were unable to recover it. 
00:37:40.861 [2024-12-16 12:59:06.715622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.861 [2024-12-16 12:59:06.715653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.861 qpair failed and we were unable to recover it. 00:37:40.861 [2024-12-16 12:59:06.715910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.861 [2024-12-16 12:59:06.715941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.861 qpair failed and we were unable to recover it. 00:37:40.861 [2024-12-16 12:59:06.716137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.861 [2024-12-16 12:59:06.716169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.861 qpair failed and we were unable to recover it. 00:37:40.861 [2024-12-16 12:59:06.716409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.861 [2024-12-16 12:59:06.716440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.861 qpair failed and we were unable to recover it. 00:37:40.861 [2024-12-16 12:59:06.716628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.861 [2024-12-16 12:59:06.716659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.861 qpair failed and we were unable to recover it. 00:37:40.861 [2024-12-16 12:59:06.716783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.861 [2024-12-16 12:59:06.716815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.861 qpair failed and we were unable to recover it. 00:37:40.861 [2024-12-16 12:59:06.716940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.861 [2024-12-16 12:59:06.716972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.861 qpair failed and we were unable to recover it. 00:37:40.861 [2024-12-16 12:59:06.717159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.861 [2024-12-16 12:59:06.717192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.861 qpair failed and we were unable to recover it. 00:37:40.861 [2024-12-16 12:59:06.717365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.861 [2024-12-16 12:59:06.717397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.861 qpair failed and we were unable to recover it. 00:37:40.861 [2024-12-16 12:59:06.717503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.861 [2024-12-16 12:59:06.717535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.861 qpair failed and we were unable to recover it. 
00:37:40.861 [2024-12-16 12:59:06.717634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.861 [2024-12-16 12:59:06.717665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.861 qpair failed and we were unable to recover it. 00:37:40.861 [2024-12-16 12:59:06.717792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.861 [2024-12-16 12:59:06.717824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.861 qpair failed and we were unable to recover it. 00:37:40.861 [2024-12-16 12:59:06.718035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.861 [2024-12-16 12:59:06.718066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.861 qpair failed and we were unable to recover it. 00:37:40.861 [2024-12-16 12:59:06.718258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.861 [2024-12-16 12:59:06.718291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.861 qpair failed and we were unable to recover it. 00:37:40.861 [2024-12-16 12:59:06.718550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.861 [2024-12-16 12:59:06.718581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.861 qpair failed and we were unable to recover it. 00:37:40.861 [2024-12-16 12:59:06.718699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.862 [2024-12-16 12:59:06.718730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.862 qpair failed and we were unable to recover it. 00:37:40.862 [2024-12-16 12:59:06.718915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.862 [2024-12-16 12:59:06.718946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.862 qpair failed and we were unable to recover it. 00:37:40.862 [2024-12-16 12:59:06.719086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.862 [2024-12-16 12:59:06.719127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.862 qpair failed and we were unable to recover it. 00:37:40.862 [2024-12-16 12:59:06.719236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.862 [2024-12-16 12:59:06.719268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.862 qpair failed and we were unable to recover it. 00:37:40.862 [2024-12-16 12:59:06.719385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.862 [2024-12-16 12:59:06.719417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.862 qpair failed and we were unable to recover it. 
00:37:40.862 [2024-12-16 12:59:06.719546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.862 [2024-12-16 12:59:06.719577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.862 qpair failed and we were unable to recover it. 00:37:40.862 [2024-12-16 12:59:06.719748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.862 [2024-12-16 12:59:06.719779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.862 qpair failed and we were unable to recover it. 00:37:40.862 [2024-12-16 12:59:06.720018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.862 [2024-12-16 12:59:06.720049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.862 qpair failed and we were unable to recover it. 00:37:40.862 [2024-12-16 12:59:06.720243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.862 [2024-12-16 12:59:06.720275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.862 qpair failed and we were unable to recover it. 00:37:40.862 [2024-12-16 12:59:06.720388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.862 [2024-12-16 12:59:06.720419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.862 qpair failed and we were unable to recover it. 00:37:40.862 [2024-12-16 12:59:06.720527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.862 [2024-12-16 12:59:06.720558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.862 qpair failed and we were unable to recover it. 00:37:40.862 [2024-12-16 12:59:06.720675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.862 [2024-12-16 12:59:06.720706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.862 qpair failed and we were unable to recover it. 00:37:40.862 [2024-12-16 12:59:06.720872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.862 [2024-12-16 12:59:06.720903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.862 qpair failed and we were unable to recover it. 00:37:40.862 [2024-12-16 12:59:06.721101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.862 [2024-12-16 12:59:06.721143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.862 qpair failed and we were unable to recover it. 00:37:40.862 [2024-12-16 12:59:06.721390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.862 [2024-12-16 12:59:06.721421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.862 qpair failed and we were unable to recover it. 
00:37:40.862 [2024-12-16 12:59:06.721610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.862 [2024-12-16 12:59:06.721641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.862 qpair failed and we were unable to recover it. 00:37:40.862 [2024-12-16 12:59:06.721746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.862 [2024-12-16 12:59:06.721778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.862 qpair failed and we were unable to recover it. 00:37:40.862 [2024-12-16 12:59:06.721950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.862 [2024-12-16 12:59:06.721980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.862 qpair failed and we were unable to recover it. 00:37:40.862 [2024-12-16 12:59:06.722160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.862 [2024-12-16 12:59:06.722199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.862 qpair failed and we were unable to recover it. 00:37:40.862 [2024-12-16 12:59:06.722373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.862 [2024-12-16 12:59:06.722404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.862 qpair failed and we were unable to recover it. 00:37:40.862 [2024-12-16 12:59:06.722516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.862 [2024-12-16 12:59:06.722547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.862 qpair failed and we were unable to recover it. 00:37:40.862 [2024-12-16 12:59:06.722729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.862 [2024-12-16 12:59:06.722760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.862 qpair failed and we were unable to recover it. 00:37:40.862 [2024-12-16 12:59:06.722934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.862 [2024-12-16 12:59:06.722965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.862 qpair failed and we were unable to recover it. 00:37:40.862 [2024-12-16 12:59:06.723148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.862 [2024-12-16 12:59:06.723181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.862 qpair failed and we were unable to recover it. 00:37:40.862 [2024-12-16 12:59:06.723426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.862 [2024-12-16 12:59:06.723458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.862 qpair failed and we were unable to recover it. 
00:37:40.862 [2024-12-16 12:59:06.723571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.862 [2024-12-16 12:59:06.723602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.862 qpair failed and we were unable to recover it. 00:37:40.862 [2024-12-16 12:59:06.723786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.862 [2024-12-16 12:59:06.723817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.862 qpair failed and we were unable to recover it. 00:37:40.862 [2024-12-16 12:59:06.724000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.862 [2024-12-16 12:59:06.724031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.862 qpair failed and we were unable to recover it. 00:37:40.862 [2024-12-16 12:59:06.724200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.862 [2024-12-16 12:59:06.724233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.862 qpair failed and we were unable to recover it. 00:37:40.862 [2024-12-16 12:59:06.724357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.862 [2024-12-16 12:59:06.724388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.862 qpair failed and we were unable to recover it. 00:37:40.862 [2024-12-16 12:59:06.724523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.862 [2024-12-16 12:59:06.724554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.862 qpair failed and we were unable to recover it. 00:37:40.862 [2024-12-16 12:59:06.724728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.862 [2024-12-16 12:59:06.724759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.862 qpair failed and we were unable to recover it. 00:37:40.862 [2024-12-16 12:59:06.724892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.862 [2024-12-16 12:59:06.724924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.862 qpair failed and we were unable to recover it. 00:37:40.862 [2024-12-16 12:59:06.725110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.862 [2024-12-16 12:59:06.725151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.862 qpair failed and we were unable to recover it. 00:37:40.862 [2024-12-16 12:59:06.725323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.862 [2024-12-16 12:59:06.725354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.862 qpair failed and we were unable to recover it. 
00:37:40.862 [2024-12-16 12:59:06.725520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.862 [2024-12-16 12:59:06.725550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.862 qpair failed and we were unable to recover it. 00:37:40.862 [2024-12-16 12:59:06.725655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.862 [2024-12-16 12:59:06.725686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.862 qpair failed and we were unable to recover it. 00:37:40.862 [2024-12-16 12:59:06.725797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.862 [2024-12-16 12:59:06.725827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.862 qpair failed and we were unable to recover it. 00:37:40.862 [2024-12-16 12:59:06.725996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.863 [2024-12-16 12:59:06.726027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.863 qpair failed and we were unable to recover it. 00:37:40.863 [2024-12-16 12:59:06.726147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.863 [2024-12-16 12:59:06.726180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.863 qpair failed and we were unable to recover it. 00:37:40.863 [2024-12-16 12:59:06.726366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.863 [2024-12-16 12:59:06.726397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.863 qpair failed and we were unable to recover it. 00:37:40.863 [2024-12-16 12:59:06.726570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.863 [2024-12-16 12:59:06.726602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.863 qpair failed and we were unable to recover it. 00:37:40.863 [2024-12-16 12:59:06.726787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.863 [2024-12-16 12:59:06.726819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.863 qpair failed and we were unable to recover it. 00:37:40.863 [2024-12-16 12:59:06.726922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.863 [2024-12-16 12:59:06.726953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.863 qpair failed and we were unable to recover it. 00:37:40.863 [2024-12-16 12:59:06.727211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.863 [2024-12-16 12:59:06.727244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.863 qpair failed and we were unable to recover it. 
00:37:40.863 [2024-12-16 12:59:06.727457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.863 [2024-12-16 12:59:06.727488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.863 qpair failed and we were unable to recover it. 00:37:40.863 [2024-12-16 12:59:06.727657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.863 [2024-12-16 12:59:06.727688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.863 qpair failed and we were unable to recover it. 00:37:40.863 [2024-12-16 12:59:06.727872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.863 [2024-12-16 12:59:06.727904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.863 qpair failed and we were unable to recover it. 00:37:40.863 [2024-12-16 12:59:06.728085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.863 [2024-12-16 12:59:06.728127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.863 qpair failed and we were unable to recover it. 00:37:40.863 [2024-12-16 12:59:06.728258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.863 [2024-12-16 12:59:06.728290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.863 qpair failed and we were unable to recover it. 00:37:40.863 [2024-12-16 12:59:06.728481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.863 [2024-12-16 12:59:06.728512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.863 qpair failed and we were unable to recover it. 00:37:40.863 [2024-12-16 12:59:06.728748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.863 [2024-12-16 12:59:06.728779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.863 qpair failed and we were unable to recover it. 00:37:40.863 [2024-12-16 12:59:06.728950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.863 [2024-12-16 12:59:06.728982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.863 qpair failed and we were unable to recover it. 00:37:40.863 [2024-12-16 12:59:06.729151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.863 [2024-12-16 12:59:06.729184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.863 qpair failed and we were unable to recover it. 00:37:40.863 [2024-12-16 12:59:06.729425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.863 [2024-12-16 12:59:06.729456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.863 qpair failed and we were unable to recover it. 
00:37:40.863 [2024-12-16 12:59:06.729656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.863 [2024-12-16 12:59:06.729688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.863 qpair failed and we were unable to recover it. 00:37:40.863 [2024-12-16 12:59:06.729807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.863 [2024-12-16 12:59:06.729838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.863 qpair failed and we were unable to recover it. 00:37:40.863 [2024-12-16 12:59:06.729968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.863 [2024-12-16 12:59:06.729999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.863 qpair failed and we were unable to recover it. 00:37:40.863 [2024-12-16 12:59:06.730127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.863 [2024-12-16 12:59:06.730165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.863 qpair failed and we were unable to recover it. 00:37:40.863 [2024-12-16 12:59:06.730297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.863 [2024-12-16 12:59:06.730328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.863 qpair failed and we were unable to recover it. 00:37:40.863 [2024-12-16 12:59:06.730518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.863 [2024-12-16 12:59:06.730550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.863 qpair failed and we were unable to recover it. 00:37:40.863 [2024-12-16 12:59:06.730722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.863 [2024-12-16 12:59:06.730753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.863 qpair failed and we were unable to recover it. 00:37:40.863 [2024-12-16 12:59:06.730926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.863 [2024-12-16 12:59:06.730957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.863 qpair failed and we were unable to recover it. 00:37:40.863 [2024-12-16 12:59:06.731085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.863 [2024-12-16 12:59:06.731123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.863 qpair failed and we were unable to recover it. 00:37:40.863 [2024-12-16 12:59:06.731290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.863 [2024-12-16 12:59:06.731321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.863 qpair failed and we were unable to recover it. 
00:37:40.863 [2024-12-16 12:59:06.731511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.863 [2024-12-16 12:59:06.731541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.863 qpair failed and we were unable to recover it. 00:37:40.863 [2024-12-16 12:59:06.731646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.863 [2024-12-16 12:59:06.731677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.863 qpair failed and we were unable to recover it. 00:37:40.863 [2024-12-16 12:59:06.731919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.863 [2024-12-16 12:59:06.731951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.863 qpair failed and we were unable to recover it. 00:37:40.863 [2024-12-16 12:59:06.732069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.863 [2024-12-16 12:59:06.732100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.864 qpair failed and we were unable to recover it. 00:37:40.864 [2024-12-16 12:59:06.732285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.864 [2024-12-16 12:59:06.732317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.864 qpair failed and we were unable to recover it. 00:37:40.864 [2024-12-16 12:59:06.732439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.864 [2024-12-16 12:59:06.732471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.864 qpair failed and we were unable to recover it. 00:37:40.864 [2024-12-16 12:59:06.732597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.864 [2024-12-16 12:59:06.732629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.864 qpair failed and we were unable to recover it. 00:37:40.864 [2024-12-16 12:59:06.732742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.864 [2024-12-16 12:59:06.732774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.864 qpair failed and we were unable to recover it. 00:37:40.864 [2024-12-16 12:59:06.732946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.864 [2024-12-16 12:59:06.732977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.864 qpair failed and we were unable to recover it. 00:37:40.864 [2024-12-16 12:59:06.733099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.864 [2024-12-16 12:59:06.733142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.864 qpair failed and we were unable to recover it. 
00:37:40.864 [2024-12-16 12:59:06.733320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.864 [2024-12-16 12:59:06.733351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.864 qpair failed and we were unable to recover it. 00:37:40.864 [2024-12-16 12:59:06.733544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.864 [2024-12-16 12:59:06.733575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.864 qpair failed and we were unable to recover it. 00:37:40.864 [2024-12-16 12:59:06.733816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.864 [2024-12-16 12:59:06.733848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.864 qpair failed and we were unable to recover it. 00:37:40.864 [2024-12-16 12:59:06.733950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.864 [2024-12-16 12:59:06.733981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.864 qpair failed and we were unable to recover it. 00:37:40.864 [2024-12-16 12:59:06.734158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.864 [2024-12-16 12:59:06.734191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.864 qpair failed and we were unable to recover it. 00:37:40.864 [2024-12-16 12:59:06.734305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.864 [2024-12-16 12:59:06.734336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.864 qpair failed and we were unable to recover it. 00:37:40.864 [2024-12-16 12:59:06.734460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.864 [2024-12-16 12:59:06.734491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.864 qpair failed and we were unable to recover it. 00:37:40.864 [2024-12-16 12:59:06.734674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.864 [2024-12-16 12:59:06.734705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.864 qpair failed and we were unable to recover it. 00:37:40.864 [2024-12-16 12:59:06.734913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.864 [2024-12-16 12:59:06.734945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.864 qpair failed and we were unable to recover it. 00:37:40.864 [2024-12-16 12:59:06.735124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.864 [2024-12-16 12:59:06.735157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.864 qpair failed and we were unable to recover it. 
00:37:40.864 [2024-12-16 12:59:06.735272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.864 [2024-12-16 12:59:06.735304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.864 qpair failed and we were unable to recover it. 00:37:40.864 [2024-12-16 12:59:06.735474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.864 [2024-12-16 12:59:06.735506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.864 qpair failed and we were unable to recover it. 00:37:40.864 [2024-12-16 12:59:06.735686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.864 [2024-12-16 12:59:06.735718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.864 qpair failed and we were unable to recover it. 00:37:40.864 [2024-12-16 12:59:06.735903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.864 [2024-12-16 12:59:06.735934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.864 qpair failed and we were unable to recover it. 00:37:40.864 [2024-12-16 12:59:06.736148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.864 [2024-12-16 12:59:06.736181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.864 qpair failed and we were unable to recover it. 00:37:40.864 [2024-12-16 12:59:06.736323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.864 [2024-12-16 12:59:06.736355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.864 qpair failed and we were unable to recover it. 00:37:40.864 [2024-12-16 12:59:06.736532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.864 [2024-12-16 12:59:06.736563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.864 qpair failed and we were unable to recover it. 00:37:40.864 [2024-12-16 12:59:06.736735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.864 [2024-12-16 12:59:06.736766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.864 qpair failed and we were unable to recover it. 00:37:40.864 [2024-12-16 12:59:06.736873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.864 [2024-12-16 12:59:06.736903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.864 qpair failed and we were unable to recover it. 00:37:40.864 [2024-12-16 12:59:06.737152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.864 [2024-12-16 12:59:06.737185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:40.864 qpair failed and we were unable to recover it. 
00:37:40.864 [2024-12-16 12:59:06.737320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.864 [2024-12-16 12:59:06.737352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.864 qpair failed and we were unable to recover it.
00:37:40.864 [2024-12-16 12:59:06.737474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.864 [2024-12-16 12:59:06.737504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.864 qpair failed and we were unable to recover it.
00:37:40.864 [2024-12-16 12:59:06.737739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.864 [2024-12-16 12:59:06.737769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.864 qpair failed and we were unable to recover it.
00:37:40.864 [2024-12-16 12:59:06.737941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.864 [2024-12-16 12:59:06.737978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.864 qpair failed and we were unable to recover it.
00:37:40.864 [2024-12-16 12:59:06.738152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.864 [2024-12-16 12:59:06.738186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.864 qpair failed and we were unable to recover it.
00:37:40.864 [2024-12-16 12:59:06.738370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.864 [2024-12-16 12:59:06.738401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.864 qpair failed and we were unable to recover it.
00:37:40.864 [2024-12-16 12:59:06.738596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.864 [2024-12-16 12:59:06.738628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.864 qpair failed and we were unable to recover it.
00:37:40.864 [2024-12-16 12:59:06.738810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.864 [2024-12-16 12:59:06.738840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.864 qpair failed and we were unable to recover it.
00:37:40.864 [2024-12-16 12:59:06.739010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.864 [2024-12-16 12:59:06.739041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.864 qpair failed and we were unable to recover it.
00:37:40.864 [2024-12-16 12:59:06.739177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.864 [2024-12-16 12:59:06.739208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.864 qpair failed and we were unable to recover it.
00:37:40.864 [2024-12-16 12:59:06.739412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.864 [2024-12-16 12:59:06.739443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.864 qpair failed and we were unable to recover it.
00:37:40.864 [2024-12-16 12:59:06.739563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.864 [2024-12-16 12:59:06.739594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.864 qpair failed and we were unable to recover it.
00:37:40.864 [2024-12-16 12:59:06.739704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.865 [2024-12-16 12:59:06.739734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.865 qpair failed and we were unable to recover it.
00:37:40.865 [2024-12-16 12:59:06.739919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.865 [2024-12-16 12:59:06.739950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.865 qpair failed and we were unable to recover it.
00:37:40.865 [2024-12-16 12:59:06.740082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.865 [2024-12-16 12:59:06.740205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.865 qpair failed and we were unable to recover it.
00:37:40.865 [2024-12-16 12:59:06.740325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.865 [2024-12-16 12:59:06.740356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.865 qpair failed and we were unable to recover it.
00:37:40.865 [2024-12-16 12:59:06.740642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.865 [2024-12-16 12:59:06.740675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.865 qpair failed and we were unable to recover it.
00:37:40.865 [2024-12-16 12:59:06.740867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.865 [2024-12-16 12:59:06.740898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.865 qpair failed and we were unable to recover it.
00:37:40.865 [2024-12-16 12:59:06.741076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.865 [2024-12-16 12:59:06.741106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.865 qpair failed and we were unable to recover it.
00:37:40.865 [2024-12-16 12:59:06.741325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.865 [2024-12-16 12:59:06.741355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.865 qpair failed and we were unable to recover it.
00:37:40.865 [2024-12-16 12:59:06.741468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.865 [2024-12-16 12:59:06.741500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.865 qpair failed and we were unable to recover it.
00:37:40.865 [2024-12-16 12:59:06.741750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.865 [2024-12-16 12:59:06.741782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.865 qpair failed and we were unable to recover it.
00:37:40.865 [2024-12-16 12:59:06.741892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.865 [2024-12-16 12:59:06.741921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.865 qpair failed and we were unable to recover it.
00:37:40.865 [2024-12-16 12:59:06.742111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.865 [2024-12-16 12:59:06.742154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.865 qpair failed and we were unable to recover it.
00:37:40.865 [2024-12-16 12:59:06.742270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.865 [2024-12-16 12:59:06.742300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.865 qpair failed and we were unable to recover it.
00:37:40.865 [2024-12-16 12:59:06.742418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.865 [2024-12-16 12:59:06.742448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.865 qpair failed and we were unable to recover it.
00:37:40.865 [2024-12-16 12:59:06.742692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.865 [2024-12-16 12:59:06.742724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.865 qpair failed and we were unable to recover it.
00:37:40.865 [2024-12-16 12:59:06.742913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.865 [2024-12-16 12:59:06.742944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.865 qpair failed and we were unable to recover it.
00:37:40.865 [2024-12-16 12:59:06.743140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.865 [2024-12-16 12:59:06.743172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.865 qpair failed and we were unable to recover it.
00:37:40.865 [2024-12-16 12:59:06.743302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.865 [2024-12-16 12:59:06.743334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.865 qpair failed and we were unable to recover it.
00:37:40.865 [2024-12-16 12:59:06.743501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.865 [2024-12-16 12:59:06.743571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.865 qpair failed and we were unable to recover it.
00:37:40.865 [2024-12-16 12:59:06.743718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.865 [2024-12-16 12:59:06.743754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.865 qpair failed and we were unable to recover it.
00:37:40.865 [2024-12-16 12:59:06.743957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.865 [2024-12-16 12:59:06.743990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.865 qpair failed and we were unable to recover it.
00:37:40.865 [2024-12-16 12:59:06.744249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.865 [2024-12-16 12:59:06.744285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.865 qpair failed and we were unable to recover it.
00:37:40.865 [2024-12-16 12:59:06.744572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.865 [2024-12-16 12:59:06.744605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.865 qpair failed and we were unable to recover it.
00:37:40.865 [2024-12-16 12:59:06.744777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.865 [2024-12-16 12:59:06.744809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.865 qpair failed and we were unable to recover it.
00:37:40.865 [2024-12-16 12:59:06.744945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.865 [2024-12-16 12:59:06.744977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.865 qpair failed and we were unable to recover it.
00:37:40.865 [2024-12-16 12:59:06.745168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.865 [2024-12-16 12:59:06.745202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.865 qpair failed and we were unable to recover it.
00:37:40.865 [2024-12-16 12:59:06.745310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.865 [2024-12-16 12:59:06.745343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.865 qpair failed and we were unable to recover it.
00:37:40.865 [2024-12-16 12:59:06.745522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.865 [2024-12-16 12:59:06.745554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.865 qpair failed and we were unable to recover it.
00:37:40.865 [2024-12-16 12:59:06.745729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.865 [2024-12-16 12:59:06.745762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.865 qpair failed and we were unable to recover it.
00:37:40.865 [2024-12-16 12:59:06.745948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.865 [2024-12-16 12:59:06.745980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.865 qpair failed and we were unable to recover it.
00:37:40.865 [2024-12-16 12:59:06.746182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.865 [2024-12-16 12:59:06.746216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.865 qpair failed and we were unable to recover it.
00:37:40.865 [2024-12-16 12:59:06.746400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.865 [2024-12-16 12:59:06.746432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.865 qpair failed and we were unable to recover it.
00:37:40.865 [2024-12-16 12:59:06.746621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.865 [2024-12-16 12:59:06.746653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.865 qpair failed and we were unable to recover it.
00:37:40.865 [2024-12-16 12:59:06.746780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.865 [2024-12-16 12:59:06.746812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.865 qpair failed and we were unable to recover it.
00:37:40.865 [2024-12-16 12:59:06.746933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.865 [2024-12-16 12:59:06.746965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.865 qpair failed and we were unable to recover it.
00:37:40.865 [2024-12-16 12:59:06.747149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.865 [2024-12-16 12:59:06.747183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.865 qpair failed and we were unable to recover it.
00:37:40.865 [2024-12-16 12:59:06.747361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.865 [2024-12-16 12:59:06.747393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.865 qpair failed and we were unable to recover it.
00:37:40.865 [2024-12-16 12:59:06.747520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.865 [2024-12-16 12:59:06.747551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.865 qpair failed and we were unable to recover it.
00:37:40.865 [2024-12-16 12:59:06.747664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.865 [2024-12-16 12:59:06.747696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.865 qpair failed and we were unable to recover it.
00:37:40.865 [2024-12-16 12:59:06.747821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.866 [2024-12-16 12:59:06.747852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.866 qpair failed and we were unable to recover it.
00:37:40.866 [2024-12-16 12:59:06.748027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.866 [2024-12-16 12:59:06.748059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.866 qpair failed and we were unable to recover it.
00:37:40.866 [2024-12-16 12:59:06.748250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.866 [2024-12-16 12:59:06.748283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.866 qpair failed and we were unable to recover it.
00:37:40.866 [2024-12-16 12:59:06.748456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.866 [2024-12-16 12:59:06.748489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.866 qpair failed and we were unable to recover it.
00:37:40.866 [2024-12-16 12:59:06.748604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.866 [2024-12-16 12:59:06.748636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.866 qpair failed and we were unable to recover it.
00:37:40.866 [2024-12-16 12:59:06.748806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.866 [2024-12-16 12:59:06.748837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.866 qpair failed and we were unable to recover it.
00:37:40.866 [2024-12-16 12:59:06.749007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.866 [2024-12-16 12:59:06.749046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.866 qpair failed and we were unable to recover it.
00:37:40.866 [2024-12-16 12:59:06.749159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.866 [2024-12-16 12:59:06.749193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.866 qpair failed and we were unable to recover it.
00:37:40.866 [2024-12-16 12:59:06.749313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.866 [2024-12-16 12:59:06.749345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.866 qpair failed and we were unable to recover it.
00:37:40.866 [2024-12-16 12:59:06.749512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.866 [2024-12-16 12:59:06.749544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.866 qpair failed and we were unable to recover it.
00:37:40.866 [2024-12-16 12:59:06.749659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.866 [2024-12-16 12:59:06.749691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.866 qpair failed and we were unable to recover it.
00:37:40.866 [2024-12-16 12:59:06.749800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.866 [2024-12-16 12:59:06.749832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.866 qpair failed and we were unable to recover it.
00:37:40.866 [2024-12-16 12:59:06.750018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.866 [2024-12-16 12:59:06.750050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.866 qpair failed and we were unable to recover it.
00:37:40.866 [2024-12-16 12:59:06.750244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.866 [2024-12-16 12:59:06.750278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.866 qpair failed and we were unable to recover it.
00:37:40.866 [2024-12-16 12:59:06.750388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.866 [2024-12-16 12:59:06.750419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.866 qpair failed and we were unable to recover it.
00:37:40.866 [2024-12-16 12:59:06.750548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.866 [2024-12-16 12:59:06.750580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.866 qpair failed and we were unable to recover it.
00:37:40.866 [2024-12-16 12:59:06.750773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.866 [2024-12-16 12:59:06.750806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.866 qpair failed and we were unable to recover it.
00:37:40.866 [2024-12-16 12:59:06.750937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.866 [2024-12-16 12:59:06.750969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.866 qpair failed and we were unable to recover it.
00:37:40.866 [2024-12-16 12:59:06.751153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.866 [2024-12-16 12:59:06.751186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.866 qpair failed and we were unable to recover it.
00:37:40.866 [2024-12-16 12:59:06.751290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.866 [2024-12-16 12:59:06.751321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.866 qpair failed and we were unable to recover it.
00:37:40.866 [2024-12-16 12:59:06.751439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.866 [2024-12-16 12:59:06.751472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.866 qpair failed and we were unable to recover it.
00:37:40.866 [2024-12-16 12:59:06.751663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.866 [2024-12-16 12:59:06.751695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.866 qpair failed and we were unable to recover it.
00:37:40.866 [2024-12-16 12:59:06.751964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.866 [2024-12-16 12:59:06.751996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.866 qpair failed and we were unable to recover it.
00:37:40.866 [2024-12-16 12:59:06.752225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.866 [2024-12-16 12:59:06.752258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.866 qpair failed and we were unable to recover it.
00:37:40.866 [2024-12-16 12:59:06.752374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.866 [2024-12-16 12:59:06.752406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.866 qpair failed and we were unable to recover it.
00:37:40.866 [2024-12-16 12:59:06.752516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.866 [2024-12-16 12:59:06.752548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.866 qpair failed and we were unable to recover it.
00:37:40.866 [2024-12-16 12:59:06.752666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.866 [2024-12-16 12:59:06.752698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.866 qpair failed and we were unable to recover it.
00:37:40.866 [2024-12-16 12:59:06.752869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.866 [2024-12-16 12:59:06.752901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.866 qpair failed and we were unable to recover it.
00:37:40.866 [2024-12-16 12:59:06.753096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.866 [2024-12-16 12:59:06.753140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.866 qpair failed and we were unable to recover it.
00:37:40.866 [2024-12-16 12:59:06.753267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.866 [2024-12-16 12:59:06.753300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.866 qpair failed and we were unable to recover it.
00:37:40.866 [2024-12-16 12:59:06.753481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.866 [2024-12-16 12:59:06.753514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.866 qpair failed and we were unable to recover it.
00:37:40.866 [2024-12-16 12:59:06.753689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.866 [2024-12-16 12:59:06.753721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.866 qpair failed and we were unable to recover it.
00:37:40.866 [2024-12-16 12:59:06.753850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.866 [2024-12-16 12:59:06.753882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.866 qpair failed and we were unable to recover it.
00:37:40.866 [2024-12-16 12:59:06.754006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.866 [2024-12-16 12:59:06.754045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.866 qpair failed and we were unable to recover it.
00:37:40.866 [2024-12-16 12:59:06.754238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.866 [2024-12-16 12:59:06.754272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.866 qpair failed and we were unable to recover it.
00:37:40.866 [2024-12-16 12:59:06.754446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.866 [2024-12-16 12:59:06.754479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.866 qpair failed and we were unable to recover it.
00:37:40.866 [2024-12-16 12:59:06.754665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.866 [2024-12-16 12:59:06.754699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.866 qpair failed and we were unable to recover it.
00:37:40.866 [2024-12-16 12:59:06.754909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.866 [2024-12-16 12:59:06.754941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.866 qpair failed and we were unable to recover it.
00:37:40.866 [2024-12-16 12:59:06.755070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.867 [2024-12-16 12:59:06.755102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.867 qpair failed and we were unable to recover it.
00:37:40.867 [2024-12-16 12:59:06.755284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.867 [2024-12-16 12:59:06.755317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.867 qpair failed and we were unable to recover it.
00:37:40.867 [2024-12-16 12:59:06.755508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.867 [2024-12-16 12:59:06.755540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.867 qpair failed and we were unable to recover it.
00:37:40.867 [2024-12-16 12:59:06.755640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.867 [2024-12-16 12:59:06.755673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.867 qpair failed and we were unable to recover it.
00:37:40.867 [2024-12-16 12:59:06.755783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.867 [2024-12-16 12:59:06.755815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.867 qpair failed and we were unable to recover it.
00:37:40.867 [2024-12-16 12:59:06.755986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.867 [2024-12-16 12:59:06.756017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.867 qpair failed and we were unable to recover it.
00:37:40.867 [2024-12-16 12:59:06.756188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.867 [2024-12-16 12:59:06.756221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.867 qpair failed and we were unable to recover it.
00:37:40.867 [2024-12-16 12:59:06.756392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.867 [2024-12-16 12:59:06.756424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.867 qpair failed and we were unable to recover it.
00:37:40.867 [2024-12-16 12:59:06.756558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.867 [2024-12-16 12:59:06.756590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.867 qpair failed and we were unable to recover it.
00:37:40.867 [2024-12-16 12:59:06.756704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.867 [2024-12-16 12:59:06.756736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.867 qpair failed and we were unable to recover it.
00:37:40.867 [2024-12-16 12:59:06.756908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.867 [2024-12-16 12:59:06.756939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.867 qpair failed and we were unable to recover it.
00:37:40.867 [2024-12-16 12:59:06.757134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.867 [2024-12-16 12:59:06.757169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.867 qpair failed and we were unable to recover it.
00:37:40.867 [2024-12-16 12:59:06.757283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.867 [2024-12-16 12:59:06.757314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.867 qpair failed and we were unable to recover it.
00:37:40.867 [2024-12-16 12:59:06.757499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.867 [2024-12-16 12:59:06.757531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.867 qpair failed and we were unable to recover it.
00:37:40.867 [2024-12-16 12:59:06.757644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.867 [2024-12-16 12:59:06.757676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.867 qpair failed and we were unable to recover it.
00:37:40.867 [2024-12-16 12:59:06.757800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.867 [2024-12-16 12:59:06.757831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.867 qpair failed and we were unable to recover it.
00:37:40.867 [2024-12-16 12:59:06.757953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.867 [2024-12-16 12:59:06.757985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.867 qpair failed and we were unable to recover it.
00:37:40.867 [2024-12-16 12:59:06.758089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.867 [2024-12-16 12:59:06.758133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.867 qpair failed and we were unable to recover it.
00:37:40.867 [2024-12-16 12:59:06.758252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.867 [2024-12-16 12:59:06.758284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.867 qpair failed and we were unable to recover it.
00:37:40.867 [2024-12-16 12:59:06.758458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.867 [2024-12-16 12:59:06.758490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.867 qpair failed and we were unable to recover it.
00:37:40.867 [2024-12-16 12:59:06.758601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.867 [2024-12-16 12:59:06.758633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.867 qpair failed and we were unable to recover it.
00:37:40.867 [2024-12-16 12:59:06.758754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.867 [2024-12-16 12:59:06.758786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.867 qpair failed and we were unable to recover it.
00:37:40.867 [2024-12-16 12:59:06.758889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.867 [2024-12-16 12:59:06.758926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.867 qpair failed and we were unable to recover it.
00:37:40.867 [2024-12-16 12:59:06.759052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.867 [2024-12-16 12:59:06.759085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:40.867 qpair failed and we were unable to recover it.
00:37:40.867 [2024-12-16 12:59:06.759249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.867 [2024-12-16 12:59:06.759318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420
00:37:40.867 qpair failed and we were unable to recover it.
00:37:40.867 [2024-12-16 12:59:06.759503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.867 [2024-12-16 12:59:06.759538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.867 qpair failed and we were unable to recover it.
00:37:40.867 [2024-12-16 12:59:06.759721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.867 [2024-12-16 12:59:06.759753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.867 qpair failed and we were unable to recover it.
00:37:40.867 [2024-12-16 12:59:06.759872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.867 [2024-12-16 12:59:06.759902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.867 qpair failed and we were unable to recover it.
00:37:40.867 [2024-12-16 12:59:06.760007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.867 [2024-12-16 12:59:06.760038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.867 qpair failed and we were unable to recover it.
00:37:40.867 [2024-12-16 12:59:06.760218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.867 [2024-12-16 12:59:06.760251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.867 qpair failed and we were unable to recover it.
00:37:40.867 [2024-12-16 12:59:06.760374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.867 [2024-12-16 12:59:06.760406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.867 qpair failed and we were unable to recover it.
00:37:40.867 [2024-12-16 12:59:06.760590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.867 [2024-12-16 12:59:06.760620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.867 qpair failed and we were unable to recover it.
00:37:40.867 [2024-12-16 12:59:06.760788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.867 [2024-12-16 12:59:06.760820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.867 qpair failed and we were unable to recover it.
00:37:40.867 [2024-12-16 12:59:06.761013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.867 [2024-12-16 12:59:06.761045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.867 qpair failed and we were unable to recover it.
00:37:40.867 [2024-12-16 12:59:06.761155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.867 [2024-12-16 12:59:06.761188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.867 qpair failed and we were unable to recover it.
00:37:40.867 [2024-12-16 12:59:06.761357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.867 [2024-12-16 12:59:06.761388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.867 qpair failed and we were unable to recover it.
00:37:40.867 [2024-12-16 12:59:06.761567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.867 [2024-12-16 12:59:06.761599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.867 qpair failed and we were unable to recover it.
00:37:40.867 [2024-12-16 12:59:06.761714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.867 [2024-12-16 12:59:06.761746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.867 qpair failed and we were unable to recover it.
00:37:40.867 [2024-12-16 12:59:06.761847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.867 [2024-12-16 12:59:06.761877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.867 qpair failed and we were unable to recover it.
00:37:40.867 [2024-12-16 12:59:06.761988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.867 [2024-12-16 12:59:06.762020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.867 qpair failed and we were unable to recover it.
00:37:40.868 [2024-12-16 12:59:06.762197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.868 [2024-12-16 12:59:06.762230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.868 qpair failed and we were unable to recover it.
00:37:40.868 [2024-12-16 12:59:06.762402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.868 [2024-12-16 12:59:06.762432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.868 qpair failed and we were unable to recover it.
00:37:40.868 [2024-12-16 12:59:06.762529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.868 [2024-12-16 12:59:06.762561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.868 qpair failed and we were unable to recover it.
00:37:40.868 [2024-12-16 12:59:06.762742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.868 [2024-12-16 12:59:06.762774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.868 qpair failed and we were unable to recover it.
00:37:40.868 [2024-12-16 12:59:06.762946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.868 [2024-12-16 12:59:06.762976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.868 qpair failed and we were unable to recover it.
00:37:40.868 [2024-12-16 12:59:06.763074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.868 [2024-12-16 12:59:06.763105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.868 qpair failed and we were unable to recover it.
00:37:40.868 [2024-12-16 12:59:06.763301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.868 [2024-12-16 12:59:06.763334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.868 qpair failed and we were unable to recover it.
00:37:40.868 [2024-12-16 12:59:06.763453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.868 [2024-12-16 12:59:06.763484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.868 qpair failed and we were unable to recover it.
00:37:40.868 [2024-12-16 12:59:06.763678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.868 [2024-12-16 12:59:06.763708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:40.868 qpair failed and we were unable to recover it.
00:37:40.868 [2024-12-16 12:59:06.763851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.868 [2024-12-16 12:59:06.763895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420
00:37:40.868 qpair failed and we were unable to recover it.
00:37:40.868 [2024-12-16 12:59:06.764089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.868 [2024-12-16 12:59:06.764135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420
00:37:40.868 qpair failed and we were unable to recover it.
00:37:40.868 [2024-12-16 12:59:06.764257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.868 [2024-12-16 12:59:06.764291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420
00:37:40.868 qpair failed and we were unable to recover it.
00:37:40.868 [2024-12-16 12:59:06.764440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.868 [2024-12-16 12:59:06.764471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420
00:37:40.868 qpair failed and we were unable to recover it.
00:37:40.868 [2024-12-16 12:59:06.764661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.868 [2024-12-16 12:59:06.764693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420
00:37:40.868 qpair failed and we were unable to recover it.
00:37:40.868 [2024-12-16 12:59:06.764882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.868 [2024-12-16 12:59:06.764914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420
00:37:40.868 qpair failed and we were unable to recover it.
00:37:40.868 [2024-12-16 12:59:06.765084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.868 [2024-12-16 12:59:06.765131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420
00:37:40.868 qpair failed and we were unable to recover it.
00:37:40.868 [2024-12-16 12:59:06.765258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.868 [2024-12-16 12:59:06.765292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420
00:37:40.868 qpair failed and we were unable to recover it.
00:37:40.868 [2024-12-16 12:59:06.765416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.868 [2024-12-16 12:59:06.765448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420
00:37:40.868 qpair failed and we were unable to recover it.
00:37:40.868 [2024-12-16 12:59:06.765575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.868 [2024-12-16 12:59:06.765606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420
00:37:40.868 qpair failed and we were unable to recover it.
00:37:40.868 [2024-12-16 12:59:06.765783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.868 [2024-12-16 12:59:06.765815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420
00:37:40.868 qpair failed and we were unable to recover it.
00:37:40.868 [2024-12-16 12:59:06.765989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.868 [2024-12-16 12:59:06.766020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420
00:37:40.868 qpair failed and we were unable to recover it.
00:37:40.868 [2024-12-16 12:59:06.766138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.868 [2024-12-16 12:59:06.766171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420
00:37:40.868 qpair failed and we were unable to recover it.
00:37:40.868 [2024-12-16 12:59:06.766304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.868 [2024-12-16 12:59:06.766345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420
00:37:40.868 qpair failed and we were unable to recover it.
00:37:40.869 [2024-12-16 12:59:06.766481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.869 [2024-12-16 12:59:06.766512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420
00:37:40.869 qpair failed and we were unable to recover it.
00:37:40.869 [2024-12-16 12:59:06.766684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.869 [2024-12-16 12:59:06.766715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420
00:37:40.869 qpair failed and we were unable to recover it.
00:37:40.869 [2024-12-16 12:59:06.766891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.869 [2024-12-16 12:59:06.766923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420
00:37:40.869 qpair failed and we were unable to recover it.
00:37:40.869 [2024-12-16 12:59:06.767043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.869 [2024-12-16 12:59:06.767074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420
00:37:40.869 qpair failed and we were unable to recover it.
00:37:40.869 [2024-12-16 12:59:06.767224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.869 [2024-12-16 12:59:06.767256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420
00:37:40.869 qpair failed and we were unable to recover it.
00:37:40.869 [2024-12-16 12:59:06.767427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.869 [2024-12-16 12:59:06.767459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420
00:37:40.869 qpair failed and we were unable to recover it.
00:37:40.869 [2024-12-16 12:59:06.767645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.869 [2024-12-16 12:59:06.767675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420
00:37:40.869 qpair failed and we were unable to recover it.
00:37:40.869 [2024-12-16 12:59:06.767913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.869 [2024-12-16 12:59:06.767944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420
00:37:40.869 qpair failed and we were unable to recover it.
00:37:40.869 [2024-12-16 12:59:06.768209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.869 [2024-12-16 12:59:06.768243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420
00:37:40.869 qpair failed and we were unable to recover it.
00:37:40.869 [2024-12-16 12:59:06.768345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.869 [2024-12-16 12:59:06.768375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420
00:37:40.869 qpair failed and we were unable to recover it.
00:37:40.869 [2024-12-16 12:59:06.768551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.869 [2024-12-16 12:59:06.768582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420
00:37:40.869 qpair failed and we were unable to recover it.
00:37:40.869 [2024-12-16 12:59:06.768708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.869 [2024-12-16 12:59:06.768740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420
00:37:40.869 qpair failed and we were unable to recover it.
00:37:40.869 [2024-12-16 12:59:06.768918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.869 [2024-12-16 12:59:06.768950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420
00:37:40.869 qpair failed and we were unable to recover it.
00:37:40.869 [2024-12-16 12:59:06.769094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.869 [2024-12-16 12:59:06.769143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420
00:37:40.869 qpair failed and we were unable to recover it.
00:37:40.869 [2024-12-16 12:59:06.769397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.869 [2024-12-16 12:59:06.769429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420
00:37:40.869 qpair failed and we were unable to recover it.
00:37:40.869 [2024-12-16 12:59:06.769559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.869 [2024-12-16 12:59:06.769591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420
00:37:40.869 qpair failed and we were unable to recover it.
00:37:40.869 [2024-12-16 12:59:06.769851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.869 [2024-12-16 12:59:06.769883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420
00:37:40.869 qpair failed and we were unable to recover it.
00:37:40.869 [2024-12-16 12:59:06.770071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.869 [2024-12-16 12:59:06.770102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420
00:37:40.869 qpair failed and we were unable to recover it.
00:37:40.869 [2024-12-16 12:59:06.770358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.869 [2024-12-16 12:59:06.770390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.869 qpair failed and we were unable to recover it. 00:37:40.869 [2024-12-16 12:59:06.770628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.869 [2024-12-16 12:59:06.770660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.869 qpair failed and we were unable to recover it. 00:37:40.869 [2024-12-16 12:59:06.770840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.869 [2024-12-16 12:59:06.770871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.869 qpair failed and we were unable to recover it. 00:37:40.869 [2024-12-16 12:59:06.771103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.869 [2024-12-16 12:59:06.771143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.869 qpair failed and we were unable to recover it. 00:37:40.869 [2024-12-16 12:59:06.771331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.869 [2024-12-16 12:59:06.771363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.869 qpair failed and we were unable to recover it. 00:37:40.869 [2024-12-16 12:59:06.771552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.869 [2024-12-16 12:59:06.771583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.869 qpair failed and we were unable to recover it. 00:37:40.869 [2024-12-16 12:59:06.771763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.869 [2024-12-16 12:59:06.771795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.869 qpair failed and we were unable to recover it. 00:37:40.869 [2024-12-16 12:59:06.772036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.869 [2024-12-16 12:59:06.772067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.869 qpair failed and we were unable to recover it. 00:37:40.869 [2024-12-16 12:59:06.772246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.869 [2024-12-16 12:59:06.772285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.869 qpair failed and we were unable to recover it. 00:37:40.869 [2024-12-16 12:59:06.772478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.869 [2024-12-16 12:59:06.772510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.869 qpair failed and we were unable to recover it. 
00:37:40.869 [2024-12-16 12:59:06.772628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.869 [2024-12-16 12:59:06.772659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.869 qpair failed and we were unable to recover it. 00:37:40.869 [2024-12-16 12:59:06.772767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.869 [2024-12-16 12:59:06.772799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.869 qpair failed and we were unable to recover it. 00:37:40.869 [2024-12-16 12:59:06.772911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.869 [2024-12-16 12:59:06.772943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.869 qpair failed and we were unable to recover it. 00:37:40.869 [2024-12-16 12:59:06.773147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.869 [2024-12-16 12:59:06.773188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.869 qpair failed and we were unable to recover it. 00:37:40.869 [2024-12-16 12:59:06.773377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.869 [2024-12-16 12:59:06.773409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.869 qpair failed and we were unable to recover it. 00:37:40.869 [2024-12-16 12:59:06.773529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.869 [2024-12-16 12:59:06.773561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.869 qpair failed and we were unable to recover it. 00:37:40.869 [2024-12-16 12:59:06.773797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.869 [2024-12-16 12:59:06.773828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.869 qpair failed and we were unable to recover it. 00:37:40.869 [2024-12-16 12:59:06.774069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.869 [2024-12-16 12:59:06.774101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.869 qpair failed and we were unable to recover it. 00:37:40.869 [2024-12-16 12:59:06.774222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.869 [2024-12-16 12:59:06.774255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.869 qpair failed and we were unable to recover it. 00:37:40.869 [2024-12-16 12:59:06.774449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.870 [2024-12-16 12:59:06.774480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.870 qpair failed and we were unable to recover it. 
00:37:40.870 [2024-12-16 12:59:06.774587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.870 [2024-12-16 12:59:06.774619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.870 qpair failed and we were unable to recover it. 00:37:40.870 [2024-12-16 12:59:06.774727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.870 [2024-12-16 12:59:06.774759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.870 qpair failed and we were unable to recover it. 00:37:40.870 [2024-12-16 12:59:06.774953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.870 [2024-12-16 12:59:06.774985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.870 qpair failed and we were unable to recover it. 00:37:40.870 [2024-12-16 12:59:06.775104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.870 [2024-12-16 12:59:06.775148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.870 qpair failed and we were unable to recover it. 00:37:40.870 [2024-12-16 12:59:06.775456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.870 [2024-12-16 12:59:06.775488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.870 qpair failed and we were unable to recover it. 00:37:40.870 [2024-12-16 12:59:06.775594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.870 [2024-12-16 12:59:06.775625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.870 qpair failed and we were unable to recover it. 00:37:40.870 [2024-12-16 12:59:06.775817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.870 [2024-12-16 12:59:06.775848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.870 qpair failed and we were unable to recover it. 00:37:40.870 [2024-12-16 12:59:06.776026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.870 [2024-12-16 12:59:06.776058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.870 qpair failed and we were unable to recover it. 00:37:40.870 [2024-12-16 12:59:06.776180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.870 [2024-12-16 12:59:06.776213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.870 qpair failed and we were unable to recover it. 00:37:40.870 [2024-12-16 12:59:06.776337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.870 [2024-12-16 12:59:06.776368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.870 qpair failed and we were unable to recover it. 
00:37:40.870 [2024-12-16 12:59:06.776489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.870 [2024-12-16 12:59:06.776521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.870 qpair failed and we were unable to recover it. 00:37:40.870 [2024-12-16 12:59:06.776639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.870 [2024-12-16 12:59:06.776670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.870 qpair failed and we were unable to recover it. 00:37:40.870 [2024-12-16 12:59:06.776909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.870 [2024-12-16 12:59:06.776941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.870 qpair failed and we were unable to recover it. 00:37:40.870 [2024-12-16 12:59:06.777124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.870 [2024-12-16 12:59:06.777167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.870 qpair failed and we were unable to recover it. 00:37:40.870 [2024-12-16 12:59:06.777340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.870 [2024-12-16 12:59:06.777372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.870 qpair failed and we were unable to recover it. 00:37:40.870 [2024-12-16 12:59:06.777489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.870 [2024-12-16 12:59:06.777521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.870 qpair failed and we were unable to recover it. 00:37:40.870 [2024-12-16 12:59:06.777693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.870 [2024-12-16 12:59:06.777724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.870 qpair failed and we were unable to recover it. 00:37:40.870 [2024-12-16 12:59:06.777895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.870 [2024-12-16 12:59:06.777927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.870 qpair failed and we were unable to recover it. 00:37:40.870 [2024-12-16 12:59:06.778099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.870 [2024-12-16 12:59:06.778151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.870 qpair failed and we were unable to recover it. 00:37:40.870 [2024-12-16 12:59:06.778321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.870 [2024-12-16 12:59:06.778353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.870 qpair failed and we were unable to recover it. 
00:37:40.870 [2024-12-16 12:59:06.778470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.870 [2024-12-16 12:59:06.778502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.870 qpair failed and we were unable to recover it. 00:37:40.870 [2024-12-16 12:59:06.778706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.870 [2024-12-16 12:59:06.778737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.870 qpair failed and we were unable to recover it. 00:37:40.870 [2024-12-16 12:59:06.778928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.870 [2024-12-16 12:59:06.778960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.870 qpair failed and we were unable to recover it. 00:37:40.870 [2024-12-16 12:59:06.779246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.870 [2024-12-16 12:59:06.779280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.870 qpair failed and we were unable to recover it. 00:37:40.870 [2024-12-16 12:59:06.779385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.870 [2024-12-16 12:59:06.779417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.870 qpair failed and we were unable to recover it. 00:37:40.870 [2024-12-16 12:59:06.779609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.870 [2024-12-16 12:59:06.779640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.870 qpair failed and we were unable to recover it. 00:37:40.870 [2024-12-16 12:59:06.779838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.870 [2024-12-16 12:59:06.779869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.870 qpair failed and we were unable to recover it. 00:37:40.870 [2024-12-16 12:59:06.779972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.870 [2024-12-16 12:59:06.780004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.870 qpair failed and we were unable to recover it. 00:37:40.870 [2024-12-16 12:59:06.780267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.870 [2024-12-16 12:59:06.780305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.870 qpair failed and we were unable to recover it. 00:37:40.870 [2024-12-16 12:59:06.780529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.870 [2024-12-16 12:59:06.780560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.870 qpair failed and we were unable to recover it. 
00:37:40.870 [2024-12-16 12:59:06.780692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.870 [2024-12-16 12:59:06.780723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.870 qpair failed and we were unable to recover it. 00:37:40.870 [2024-12-16 12:59:06.780893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.870 [2024-12-16 12:59:06.780924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.870 qpair failed and we were unable to recover it. 00:37:40.870 [2024-12-16 12:59:06.781111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.870 [2024-12-16 12:59:06.781169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.870 qpair failed and we were unable to recover it. 00:37:40.870 [2024-12-16 12:59:06.781296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.870 [2024-12-16 12:59:06.781327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.870 qpair failed and we were unable to recover it. 00:37:40.870 [2024-12-16 12:59:06.781436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.870 [2024-12-16 12:59:06.781468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.870 qpair failed and we were unable to recover it. 00:37:40.870 [2024-12-16 12:59:06.781653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.870 [2024-12-16 12:59:06.781684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.870 qpair failed and we were unable to recover it. 00:37:40.870 [2024-12-16 12:59:06.781923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.870 [2024-12-16 12:59:06.781955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.870 qpair failed and we were unable to recover it. 00:37:40.870 [2024-12-16 12:59:06.782148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.870 [2024-12-16 12:59:06.782181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.870 qpair failed and we were unable to recover it. 00:37:40.871 [2024-12-16 12:59:06.782386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.871 [2024-12-16 12:59:06.782417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.871 qpair failed and we were unable to recover it. 00:37:40.871 [2024-12-16 12:59:06.782552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.871 [2024-12-16 12:59:06.782584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.871 qpair failed and we were unable to recover it. 
00:37:40.871 [2024-12-16 12:59:06.782871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.871 [2024-12-16 12:59:06.782903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.871 qpair failed and we were unable to recover it. 00:37:40.871 [2024-12-16 12:59:06.783016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.871 [2024-12-16 12:59:06.783048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.871 qpair failed and we were unable to recover it. 00:37:40.871 [2024-12-16 12:59:06.783329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.871 [2024-12-16 12:59:06.783362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.871 qpair failed and we were unable to recover it. 00:37:40.871 [2024-12-16 12:59:06.783486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.871 [2024-12-16 12:59:06.783517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.871 qpair failed and we were unable to recover it. 00:37:40.871 [2024-12-16 12:59:06.783695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.871 [2024-12-16 12:59:06.783727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.871 qpair failed and we were unable to recover it. 00:37:40.871 [2024-12-16 12:59:06.783940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.871 [2024-12-16 12:59:06.783971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.871 qpair failed and we were unable to recover it. 00:37:40.871 [2024-12-16 12:59:06.784141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.871 [2024-12-16 12:59:06.784174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.871 qpair failed and we were unable to recover it. 00:37:40.871 [2024-12-16 12:59:06.784366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.871 [2024-12-16 12:59:06.784398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.871 qpair failed and we were unable to recover it. 00:37:40.871 [2024-12-16 12:59:06.784648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.871 [2024-12-16 12:59:06.784679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.871 qpair failed and we were unable to recover it. 00:37:40.871 [2024-12-16 12:59:06.784860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.871 [2024-12-16 12:59:06.784892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.871 qpair failed and we were unable to recover it. 
00:37:40.871 [2024-12-16 12:59:06.785089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.871 [2024-12-16 12:59:06.785133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.871 qpair failed and we were unable to recover it. 00:37:40.871 [2024-12-16 12:59:06.785365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.871 [2024-12-16 12:59:06.785398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.871 qpair failed and we were unable to recover it. 00:37:40.871 [2024-12-16 12:59:06.785589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.871 [2024-12-16 12:59:06.785621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.871 qpair failed and we were unable to recover it. 00:37:40.871 [2024-12-16 12:59:06.785742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.871 [2024-12-16 12:59:06.785774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.871 qpair failed and we were unable to recover it. 00:37:40.871 [2024-12-16 12:59:06.785907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.871 [2024-12-16 12:59:06.785938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.871 qpair failed and we were unable to recover it. 00:37:40.871 [2024-12-16 12:59:06.786159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.871 [2024-12-16 12:59:06.786194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.871 qpair failed and we were unable to recover it. 00:37:40.871 [2024-12-16 12:59:06.786453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.871 [2024-12-16 12:59:06.786486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.871 qpair failed and we were unable to recover it. 00:37:40.871 [2024-12-16 12:59:06.786750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.871 [2024-12-16 12:59:06.786782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.871 qpair failed and we were unable to recover it. 00:37:40.871 [2024-12-16 12:59:06.786967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.871 [2024-12-16 12:59:06.786999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.871 qpair failed and we were unable to recover it. 00:37:40.871 [2024-12-16 12:59:06.787175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.871 [2024-12-16 12:59:06.787208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.871 qpair failed and we were unable to recover it. 
00:37:40.871 [2024-12-16 12:59:06.787378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.871 [2024-12-16 12:59:06.787410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.871 qpair failed and we were unable to recover it. 00:37:40.871 [2024-12-16 12:59:06.787621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.871 [2024-12-16 12:59:06.787655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.871 qpair failed and we were unable to recover it. 00:37:40.871 [2024-12-16 12:59:06.787897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.871 [2024-12-16 12:59:06.787928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.871 qpair failed and we were unable to recover it. 00:37:40.871 [2024-12-16 12:59:06.788166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.871 [2024-12-16 12:59:06.788199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.871 qpair failed and we were unable to recover it. 00:37:40.871 [2024-12-16 12:59:06.788371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.871 [2024-12-16 12:59:06.788402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.871 qpair failed and we were unable to recover it. 00:37:40.871 [2024-12-16 12:59:06.788509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.871 [2024-12-16 12:59:06.788540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.871 qpair failed and we were unable to recover it. 00:37:40.871 [2024-12-16 12:59:06.788803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.871 [2024-12-16 12:59:06.788835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.871 qpair failed and we were unable to recover it. 00:37:40.871 [2024-12-16 12:59:06.789102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.871 [2024-12-16 12:59:06.789155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.871 qpair failed and we were unable to recover it. 00:37:40.871 [2024-12-16 12:59:06.789350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.871 [2024-12-16 12:59:06.789388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.871 qpair failed and we were unable to recover it. 00:37:40.871 [2024-12-16 12:59:06.789502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.871 [2024-12-16 12:59:06.789533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.871 qpair failed and we were unable to recover it. 
00:37:40.871 [2024-12-16 12:59:06.789802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.871 [2024-12-16 12:59:06.789834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.871 qpair failed and we were unable to recover it. 00:37:40.871 [2024-12-16 12:59:06.790079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.871 [2024-12-16 12:59:06.790110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.871 qpair failed and we were unable to recover it. 00:37:40.871 [2024-12-16 12:59:06.790298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.871 [2024-12-16 12:59:06.790329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.871 qpair failed and we were unable to recover it. 00:37:40.871 [2024-12-16 12:59:06.790511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.871 [2024-12-16 12:59:06.790543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.871 qpair failed and we were unable to recover it. 00:37:40.871 [2024-12-16 12:59:06.790815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.871 [2024-12-16 12:59:06.790846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.871 qpair failed and we were unable to recover it. 00:37:40.871 [2024-12-16 12:59:06.791039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.872 [2024-12-16 12:59:06.791070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.872 qpair failed and we were unable to recover it. 00:37:40.872 [2024-12-16 12:59:06.791190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.872 [2024-12-16 12:59:06.791223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.872 qpair failed and we were unable to recover it. 00:37:40.872 [2024-12-16 12:59:06.791409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.872 [2024-12-16 12:59:06.791441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.872 qpair failed and we were unable to recover it. 00:37:40.872 [2024-12-16 12:59:06.791562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.872 [2024-12-16 12:59:06.791593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.872 qpair failed and we were unable to recover it. 00:37:40.872 [2024-12-16 12:59:06.791830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.872 [2024-12-16 12:59:06.791861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.872 qpair failed and we were unable to recover it. 
00:37:40.872 [2024-12-16 12:59:06.792051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.872 [2024-12-16 12:59:06.792083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.872 qpair failed and we were unable to recover it. 00:37:40.872 [2024-12-16 12:59:06.792331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.872 [2024-12-16 12:59:06.792364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.872 qpair failed and we were unable to recover it. 00:37:40.872 [2024-12-16 12:59:06.792558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.872 [2024-12-16 12:59:06.792590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.872 qpair failed and we were unable to recover it. 00:37:40.872 [2024-12-16 12:59:06.792696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.872 [2024-12-16 12:59:06.792728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.872 qpair failed and we were unable to recover it. 00:37:40.872 [2024-12-16 12:59:06.792903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.872 [2024-12-16 12:59:06.792935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.872 qpair failed and we were unable to recover it. 00:37:40.872 [2024-12-16 12:59:06.793127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.872 [2024-12-16 12:59:06.793169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.872 qpair failed and we were unable to recover it. 00:37:40.872 [2024-12-16 12:59:06.793341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.872 [2024-12-16 12:59:06.793373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.872 qpair failed and we were unable to recover it. 00:37:40.872 [2024-12-16 12:59:06.793504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.872 [2024-12-16 12:59:06.793536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.872 qpair failed and we were unable to recover it. 00:37:40.872 [2024-12-16 12:59:06.793705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.872 [2024-12-16 12:59:06.793736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.872 qpair failed and we were unable to recover it. 00:37:40.872 [2024-12-16 12:59:06.793927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.872 [2024-12-16 12:59:06.793959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.872 qpair failed and we were unable to recover it. 
00:37:40.872 [2024-12-16 12:59:06.794236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.872 [2024-12-16 12:59:06.794270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.872 qpair failed and we were unable to recover it. 00:37:40.872 [2024-12-16 12:59:06.794516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.872 [2024-12-16 12:59:06.794547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.872 qpair failed and we were unable to recover it. 00:37:40.872 [2024-12-16 12:59:06.794788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.872 [2024-12-16 12:59:06.794820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.872 qpair failed and we were unable to recover it. 00:37:40.872 [2024-12-16 12:59:06.795032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.872 [2024-12-16 12:59:06.795063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.872 qpair failed and we were unable to recover it. 00:37:40.872 [2024-12-16 12:59:06.795247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.872 [2024-12-16 12:59:06.795280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.872 qpair failed and we were unable to recover it. 00:37:40.872 [2024-12-16 12:59:06.795430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.872 [2024-12-16 12:59:06.795462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.872 qpair failed and we were unable to recover it. 00:37:40.872 [2024-12-16 12:59:06.795591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.872 [2024-12-16 12:59:06.795623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.872 qpair failed and we were unable to recover it. 00:37:40.872 [2024-12-16 12:59:06.795793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.872 [2024-12-16 12:59:06.795825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.872 qpair failed and we were unable to recover it. 00:37:40.872 [2024-12-16 12:59:06.796027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.872 [2024-12-16 12:59:06.796059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.872 qpair failed and we were unable to recover it. 00:37:40.872 [2024-12-16 12:59:06.796174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.872 [2024-12-16 12:59:06.796207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.872 qpair failed and we were unable to recover it. 
00:37:40.872 [2024-12-16 12:59:06.796318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.872 [2024-12-16 12:59:06.796349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.872 qpair failed and we were unable to recover it. 00:37:40.872 [2024-12-16 12:59:06.796534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.872 [2024-12-16 12:59:06.796565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.872 qpair failed and we were unable to recover it. 00:37:40.872 [2024-12-16 12:59:06.796694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.872 [2024-12-16 12:59:06.796727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.872 qpair failed and we were unable to recover it. 00:37:40.872 [2024-12-16 12:59:06.796830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.872 [2024-12-16 12:59:06.796862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.872 qpair failed and we were unable to recover it. 00:37:40.872 [2024-12-16 12:59:06.797057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.872 [2024-12-16 12:59:06.797089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.872 qpair failed and we were unable to recover it. 00:37:40.872 [2024-12-16 12:59:06.797219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.872 [2024-12-16 12:59:06.797255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.872 qpair failed and we were unable to recover it. 00:37:40.872 [2024-12-16 12:59:06.797427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.872 [2024-12-16 12:59:06.797458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.872 qpair failed and we were unable to recover it. 00:37:40.872 [2024-12-16 12:59:06.797576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.872 [2024-12-16 12:59:06.797607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.872 qpair failed and we were unable to recover it. 00:37:40.872 [2024-12-16 12:59:06.797785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.872 [2024-12-16 12:59:06.797823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.872 qpair failed and we were unable to recover it. 00:37:40.872 [2024-12-16 12:59:06.797932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.872 [2024-12-16 12:59:06.797963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.872 qpair failed and we were unable to recover it. 
00:37:40.873 [2024-12-16 12:59:06.798153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.873 [2024-12-16 12:59:06.798186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.873 qpair failed and we were unable to recover it. 00:37:40.873 [2024-12-16 12:59:06.798295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.873 [2024-12-16 12:59:06.798327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.873 qpair failed and we were unable to recover it. 00:37:40.873 [2024-12-16 12:59:06.798531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.873 [2024-12-16 12:59:06.798562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.873 qpair failed and we were unable to recover it. 00:37:40.873 [2024-12-16 12:59:06.798665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.873 [2024-12-16 12:59:06.798697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.873 qpair failed and we were unable to recover it. 00:37:40.873 [2024-12-16 12:59:06.798935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.873 [2024-12-16 12:59:06.798966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.873 qpair failed and we were unable to recover it. 00:37:40.873 [2024-12-16 12:59:06.799139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.873 [2024-12-16 12:59:06.799172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.873 qpair failed and we were unable to recover it. 00:37:40.873 [2024-12-16 12:59:06.799296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.873 [2024-12-16 12:59:06.799327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.873 qpair failed and we were unable to recover it. 00:37:40.873 [2024-12-16 12:59:06.799598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.873 [2024-12-16 12:59:06.799630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.873 qpair failed and we were unable to recover it. 00:37:40.873 [2024-12-16 12:59:06.799766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.873 [2024-12-16 12:59:06.799797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.873 qpair failed and we were unable to recover it. 00:37:40.873 [2024-12-16 12:59:06.800040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.873 [2024-12-16 12:59:06.800071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.873 qpair failed and we were unable to recover it. 
00:37:40.873 [2024-12-16 12:59:06.800262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.873 [2024-12-16 12:59:06.800294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.873 qpair failed and we were unable to recover it. 00:37:40.873 [2024-12-16 12:59:06.800485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.873 [2024-12-16 12:59:06.800516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.873 qpair failed and we were unable to recover it. 00:37:40.873 [2024-12-16 12:59:06.800650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.873 [2024-12-16 12:59:06.800682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.873 qpair failed and we were unable to recover it. 00:37:40.873 [2024-12-16 12:59:06.800964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.873 [2024-12-16 12:59:06.800996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.873 qpair failed and we were unable to recover it. 00:37:40.873 [2024-12-16 12:59:06.801173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.873 [2024-12-16 12:59:06.801209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.873 qpair failed and we were unable to recover it. 00:37:40.873 [2024-12-16 12:59:06.801422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.873 [2024-12-16 12:59:06.801454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.873 qpair failed and we were unable to recover it. 00:37:40.873 [2024-12-16 12:59:06.801692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.873 [2024-12-16 12:59:06.801724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.873 qpair failed and we were unable to recover it. 00:37:40.873 [2024-12-16 12:59:06.801977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.873 [2024-12-16 12:59:06.802009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.873 qpair failed and we were unable to recover it. 00:37:40.873 [2024-12-16 12:59:06.802151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.873 [2024-12-16 12:59:06.802186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.873 qpair failed and we were unable to recover it. 00:37:40.873 [2024-12-16 12:59:06.802396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.873 [2024-12-16 12:59:06.802428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.873 qpair failed and we were unable to recover it. 
00:37:40.873 [2024-12-16 12:59:06.802603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.873 [2024-12-16 12:59:06.802635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420
00:37:40.873 qpair failed and we were unable to recover it.
00:37:40.873 [... the same three-line error repeats continuously from 12:59:06.802603 through 12:59:06.848018: every connect() attempt fails with errno = 111 (ECONNREFUSED), first for tqpair=0x7f84ec000b90, then for tqpair=0x7f84e4000b90 from 12:59:06.805393, and back to tqpair=0x7f84ec000b90 from 12:59:06.847195, all targeting addr=10.0.0.2, port=4420; no qpair recovers ...]
00:37:40.879 [2024-12-16 12:59:06.848200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.879 [2024-12-16 12:59:06.848237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.879 qpair failed and we were unable to recover it. 00:37:40.879 [2024-12-16 12:59:06.848508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.879 [2024-12-16 12:59:06.848542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.879 qpair failed and we were unable to recover it. 00:37:40.879 [2024-12-16 12:59:06.848716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.879 [2024-12-16 12:59:06.848748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.879 qpair failed and we were unable to recover it. 00:37:40.879 [2024-12-16 12:59:06.848889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.879 [2024-12-16 12:59:06.848921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.879 qpair failed and we were unable to recover it. 00:37:40.879 [2024-12-16 12:59:06.849198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.879 [2024-12-16 12:59:06.849232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.879 qpair failed and we were unable to recover it. 00:37:40.879 [2024-12-16 12:59:06.849436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.879 [2024-12-16 12:59:06.849467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.879 qpair failed and we were unable to recover it. 00:37:40.879 [2024-12-16 12:59:06.849603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.879 [2024-12-16 12:59:06.849635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.879 qpair failed and we were unable to recover it. 00:37:40.879 [2024-12-16 12:59:06.849753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.879 [2024-12-16 12:59:06.849786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.879 qpair failed and we were unable to recover it. 00:37:40.879 [2024-12-16 12:59:06.849974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.879 [2024-12-16 12:59:06.850015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.879 qpair failed and we were unable to recover it. 00:37:40.879 [2024-12-16 12:59:06.850208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.879 [2024-12-16 12:59:06.850243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.879 qpair failed and we were unable to recover it. 
00:37:40.879 [2024-12-16 12:59:06.850421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.879 [2024-12-16 12:59:06.850454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.879 qpair failed and we were unable to recover it. 00:37:40.879 [2024-12-16 12:59:06.850697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.879 [2024-12-16 12:59:06.850730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.879 qpair failed and we were unable to recover it. 00:37:40.879 [2024-12-16 12:59:06.850989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.879 [2024-12-16 12:59:06.851021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.879 qpair failed and we were unable to recover it. 00:37:40.879 [2024-12-16 12:59:06.851213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.879 [2024-12-16 12:59:06.851248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.879 qpair failed and we were unable to recover it. 00:37:40.879 [2024-12-16 12:59:06.851450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.879 [2024-12-16 12:59:06.851482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.879 qpair failed and we were unable to recover it. 00:37:40.879 [2024-12-16 12:59:06.851675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.879 [2024-12-16 12:59:06.851707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.879 qpair failed and we were unable to recover it. 00:37:40.879 [2024-12-16 12:59:06.851973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.879 [2024-12-16 12:59:06.852005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.879 qpair failed and we were unable to recover it. 00:37:40.879 [2024-12-16 12:59:06.852137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.879 [2024-12-16 12:59:06.852179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.879 qpair failed and we were unable to recover it. 00:37:40.879 [2024-12-16 12:59:06.852372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.879 [2024-12-16 12:59:06.852404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.879 qpair failed and we were unable to recover it. 00:37:40.879 [2024-12-16 12:59:06.852589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.879 [2024-12-16 12:59:06.852620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.879 qpair failed and we were unable to recover it. 
00:37:40.879 [2024-12-16 12:59:06.852889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.879 [2024-12-16 12:59:06.852921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.879 qpair failed and we were unable to recover it. 00:37:40.879 [2024-12-16 12:59:06.853058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.880 [2024-12-16 12:59:06.853090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.880 qpair failed and we were unable to recover it. 00:37:40.880 [2024-12-16 12:59:06.853302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.880 [2024-12-16 12:59:06.853334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.880 qpair failed and we were unable to recover it. 00:37:40.880 [2024-12-16 12:59:06.853583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.880 [2024-12-16 12:59:06.853615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.880 qpair failed and we were unable to recover it. 00:37:40.880 [2024-12-16 12:59:06.853738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.880 [2024-12-16 12:59:06.853769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.880 qpair failed and we were unable to recover it. 00:37:40.880 [2024-12-16 12:59:06.853870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.880 [2024-12-16 12:59:06.853902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.880 qpair failed and we were unable to recover it. 00:37:40.880 [2024-12-16 12:59:06.854091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.880 [2024-12-16 12:59:06.854131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.880 qpair failed and we were unable to recover it. 00:37:40.880 [2024-12-16 12:59:06.854306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.880 [2024-12-16 12:59:06.854338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.880 qpair failed and we were unable to recover it. 00:37:40.880 [2024-12-16 12:59:06.854541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.880 [2024-12-16 12:59:06.854573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.880 qpair failed and we were unable to recover it. 00:37:40.880 [2024-12-16 12:59:06.854726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.880 [2024-12-16 12:59:06.854757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.880 qpair failed and we were unable to recover it. 
00:37:40.880 [2024-12-16 12:59:06.854935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.880 [2024-12-16 12:59:06.854967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.880 qpair failed and we were unable to recover it. 00:37:40.880 [2024-12-16 12:59:06.855152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.880 [2024-12-16 12:59:06.855185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.880 qpair failed and we were unable to recover it. 00:37:40.880 [2024-12-16 12:59:06.855442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.880 [2024-12-16 12:59:06.855474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.880 qpair failed and we were unable to recover it. 00:37:40.880 [2024-12-16 12:59:06.855665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.880 [2024-12-16 12:59:06.855697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.880 qpair failed and we were unable to recover it. 00:37:40.880 [2024-12-16 12:59:06.855888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.880 [2024-12-16 12:59:06.855919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.880 qpair failed and we were unable to recover it. 00:37:40.880 [2024-12-16 12:59:06.856128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.880 [2024-12-16 12:59:06.856172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.880 qpair failed and we were unable to recover it. 00:37:40.880 [2024-12-16 12:59:06.856421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.880 [2024-12-16 12:59:06.856453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.880 qpair failed and we were unable to recover it. 00:37:40.880 [2024-12-16 12:59:06.856640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.880 [2024-12-16 12:59:06.856671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.880 qpair failed and we were unable to recover it. 00:37:40.880 [2024-12-16 12:59:06.856838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.880 [2024-12-16 12:59:06.856870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.880 qpair failed and we were unable to recover it. 00:37:40.880 [2024-12-16 12:59:06.857135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.880 [2024-12-16 12:59:06.857168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.880 qpair failed and we were unable to recover it. 
00:37:40.880 [2024-12-16 12:59:06.857281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.880 [2024-12-16 12:59:06.857312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.880 qpair failed and we were unable to recover it. 00:37:40.880 [2024-12-16 12:59:06.857497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.880 [2024-12-16 12:59:06.857528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.880 qpair failed and we were unable to recover it. 00:37:40.880 [2024-12-16 12:59:06.857707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.880 [2024-12-16 12:59:06.857739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.880 qpair failed and we were unable to recover it. 00:37:40.880 [2024-12-16 12:59:06.857867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.880 [2024-12-16 12:59:06.857899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.880 qpair failed and we were unable to recover it. 00:37:40.880 [2024-12-16 12:59:06.858164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.880 [2024-12-16 12:59:06.858197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.880 qpair failed and we were unable to recover it. 00:37:40.880 [2024-12-16 12:59:06.858364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.880 [2024-12-16 12:59:06.858395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.880 qpair failed and we were unable to recover it. 00:37:40.880 [2024-12-16 12:59:06.858604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.880 [2024-12-16 12:59:06.858635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.880 qpair failed and we were unable to recover it. 00:37:40.880 [2024-12-16 12:59:06.858881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.880 [2024-12-16 12:59:06.858912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.880 qpair failed and we were unable to recover it. 00:37:40.880 [2024-12-16 12:59:06.859044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.880 [2024-12-16 12:59:06.859081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.880 qpair failed and we were unable to recover it. 00:37:40.880 [2024-12-16 12:59:06.859270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.880 [2024-12-16 12:59:06.859303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.880 qpair failed and we were unable to recover it. 
00:37:40.880 [2024-12-16 12:59:06.859422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.880 [2024-12-16 12:59:06.859453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.880 qpair failed and we were unable to recover it. 00:37:40.880 [2024-12-16 12:59:06.859634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.880 [2024-12-16 12:59:06.859665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.880 qpair failed and we were unable to recover it. 00:37:40.880 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 584931 Killed "${NVMF_APP[@]}" "$@" 00:37:40.880 [2024-12-16 12:59:06.859764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.880 [2024-12-16 12:59:06.859796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.880 qpair failed and we were unable to recover it. 00:37:40.880 [2024-12-16 12:59:06.859923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.880 [2024-12-16 12:59:06.859955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.880 qpair failed and we were unable to recover it. 00:37:40.880 [2024-12-16 12:59:06.860217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.880 [2024-12-16 12:59:06.860253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.880 qpair failed and we were unable to recover it. 00:37:40.880 12:59:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:37:40.880 [2024-12-16 12:59:06.860503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.880 [2024-12-16 12:59:06.860536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.880 qpair failed and we were unable to recover it. 00:37:40.880 [2024-12-16 12:59:06.860729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.880 [2024-12-16 12:59:06.860761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.880 qpair failed and we were unable to recover it. 00:37:40.880 12:59:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:37:40.880 [2024-12-16 12:59:06.860963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.880 [2024-12-16 12:59:06.860996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.880 qpair failed and we were unable to recover it. 
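On Linux, errno 111 is ECONNREFUSED. The block above shows why it floods the log: the test kills the target application (the shell reports 584931 Killed "${NVMF_APP[@]}"), and until disconnect_init brings a target back up, every host-side connect() to 10.0.0.2:4420 is refused, which is exactly what posix_sock_create keeps reporting. A minimal bash sketch of the same retry-until-accepted pattern, not part of target_disconnect.sh, using only the address and port taken from the log:

    #!/usr/bin/env bash
    # Minimal sketch (assumed helper, not SPDK code): retry a TCP connect
    # to the address/port from the log. While nothing listens on
    # 10.0.0.2:4420, each attempt fails with ECONNREFUSED (errno 111).
    addr=10.0.0.2 port=4420
    until bash -c "exec 3<>/dev/tcp/${addr}/${port}" 2>/dev/null; do
        echo "connect() to ${addr}:${port} refused; target not listening yet"
        sleep 0.1
    done
    echo "target is accepting connections on ${addr}:${port}"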
00:37:40.881 [2024-12-16 12:59:06.861166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.881 12:59:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:37:40.881 [2024-12-16 12:59:06.861200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.881 qpair failed and we were unable to recover it. 00:37:40.881 [2024-12-16 12:59:06.861371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.881 [2024-12-16 12:59:06.861403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.881 qpair failed and we were unable to recover it. 00:37:40.881 12:59:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:40.881 [2024-12-16 12:59:06.861595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.881 [2024-12-16 12:59:06.861628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.881 qpair failed and we were unable to recover it. 00:37:40.881 12:59:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:40.881 [2024-12-16 12:59:06.861841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.881 [2024-12-16 12:59:06.861874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.881 qpair failed and we were unable to recover it. 00:37:40.881 [2024-12-16 12:59:06.861977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.881 [2024-12-16 12:59:06.862009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.881 qpair failed and we were unable to recover it. 00:37:40.881 [2024-12-16 12:59:06.862111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.881 [2024-12-16 12:59:06.862151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.881 qpair failed and we were unable to recover it. 00:37:40.881 [2024-12-16 12:59:06.862346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.881 [2024-12-16 12:59:06.862378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.881 qpair failed and we were unable to recover it. 00:37:40.881 [2024-12-16 12:59:06.862495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.881 [2024-12-16 12:59:06.862528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.881 qpair failed and we were unable to recover it. 00:37:40.881 [2024-12-16 12:59:06.862712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.881 [2024-12-16 12:59:06.862744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.881 qpair failed and we were unable to recover it. 
00:37:40.881 [2024-12-16 12:59:06.863003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.881 [2024-12-16 12:59:06.863035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.881 qpair failed and we were unable to recover it. 00:37:40.881 [2024-12-16 12:59:06.863157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.881 [2024-12-16 12:59:06.863195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.881 qpair failed and we were unable to recover it. 00:37:40.881 [2024-12-16 12:59:06.863324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.881 [2024-12-16 12:59:06.863356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.881 qpair failed and we were unable to recover it. 00:37:40.881 [2024-12-16 12:59:06.863536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.881 [2024-12-16 12:59:06.863568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.881 qpair failed and we were unable to recover it. 00:37:40.881 [2024-12-16 12:59:06.863769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.881 [2024-12-16 12:59:06.863800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.881 qpair failed and we were unable to recover it. 00:37:40.881 [2024-12-16 12:59:06.863990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.881 [2024-12-16 12:59:06.864029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.881 qpair failed and we were unable to recover it. 00:37:40.881 [2024-12-16 12:59:06.864214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.881 [2024-12-16 12:59:06.864249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.881 qpair failed and we were unable to recover it. 00:37:40.881 [2024-12-16 12:59:06.864371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.881 [2024-12-16 12:59:06.864403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.881 qpair failed and we were unable to recover it. 00:37:40.881 [2024-12-16 12:59:06.864642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.881 [2024-12-16 12:59:06.864673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.881 qpair failed and we were unable to recover it. 00:37:40.881 [2024-12-16 12:59:06.864857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.881 [2024-12-16 12:59:06.864888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.881 qpair failed and we were unable to recover it. 
00:37:40.881 [2024-12-16 12:59:06.865066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.881 [2024-12-16 12:59:06.865097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.881 qpair failed and we were unable to recover it. 00:37:40.881 [2024-12-16 12:59:06.865279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.881 [2024-12-16 12:59:06.865311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.881 qpair failed and we were unable to recover it. 00:37:40.881 [2024-12-16 12:59:06.865552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.881 [2024-12-16 12:59:06.865584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.881 qpair failed and we were unable to recover it. 00:37:40.881 [2024-12-16 12:59:06.865762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.881 [2024-12-16 12:59:06.865793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.881 qpair failed and we were unable to recover it. 00:37:40.881 [2024-12-16 12:59:06.865978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.881 [2024-12-16 12:59:06.866010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.881 qpair failed and we were unable to recover it. 00:37:40.881 [2024-12-16 12:59:06.866246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.881 [2024-12-16 12:59:06.866279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.881 qpair failed and we were unable to recover it. 00:37:40.881 [2024-12-16 12:59:06.866465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.881 [2024-12-16 12:59:06.866497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.881 qpair failed and we were unable to recover it. 00:37:40.881 [2024-12-16 12:59:06.866616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.881 [2024-12-16 12:59:06.866647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.881 qpair failed and we were unable to recover it. 00:37:40.881 [2024-12-16 12:59:06.866907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.881 [2024-12-16 12:59:06.866938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.881 qpair failed and we were unable to recover it. 00:37:40.881 [2024-12-16 12:59:06.867140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.881 [2024-12-16 12:59:06.867182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.881 qpair failed and we were unable to recover it. 
00:37:40.881 [2024-12-16 12:59:06.867375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.881 [2024-12-16 12:59:06.867407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.881 qpair failed and we were unable to recover it. 00:37:40.881 [2024-12-16 12:59:06.867604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.881 [2024-12-16 12:59:06.867636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.881 qpair failed and we were unable to recover it. 00:37:40.881 [2024-12-16 12:59:06.867809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.881 [2024-12-16 12:59:06.867841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.881 qpair failed and we were unable to recover it. 00:37:40.881 [2024-12-16 12:59:06.868062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.881 [2024-12-16 12:59:06.868094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.881 qpair failed and we were unable to recover it. 00:37:40.881 [2024-12-16 12:59:06.868379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.881 [2024-12-16 12:59:06.868413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.881 qpair failed and we were unable to recover it. 00:37:40.881 12:59:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # nvmfpid=585631 00:37:40.881 [2024-12-16 12:59:06.868544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.881 [2024-12-16 12:59:06.868576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.881 qpair failed and we were unable to recover it. 00:37:40.881 12:59:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # waitforlisten 585631 00:37:40.881 [2024-12-16 12:59:06.868813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.881 [2024-12-16 12:59:06.868846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.881 qpair failed and we were unable to recover it. 00:37:40.881 12:59:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:37:40.882 [2024-12-16 12:59:06.868949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.882 [2024-12-16 12:59:06.868981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.882 qpair failed and we were unable to recover it. 
00:37:40.882 [2024-12-16 12:59:06.869152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.882 [2024-12-16 12:59:06.869186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.882 qpair failed and we were unable to recover it. 00:37:40.882 12:59:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 585631 ']' 00:37:40.882 [2024-12-16 12:59:06.869449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.882 [2024-12-16 12:59:06.869481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.882 qpair failed and we were unable to recover it. 00:37:40.882 12:59:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:40.882 [2024-12-16 12:59:06.869748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.882 [2024-12-16 12:59:06.869783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.882 qpair failed and we were unable to recover it. 00:37:40.882 12:59:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:40.882 [2024-12-16 12:59:06.869912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.882 [2024-12-16 12:59:06.869944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.882 qpair failed and we were unable to recover it. 00:37:40.882 12:59:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:40.882 [2024-12-16 12:59:06.870123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:40.882 [2024-12-16 12:59:06.870156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.882 qpair failed and we were unable to recover it. 00:37:40.882 [2024-12-16 12:59:06.870284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.882 [2024-12-16 12:59:06.870316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.882 qpair failed and we were unable to recover it. 00:37:40.882 12:59:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:40.882 [2024-12-16 12:59:06.870429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.882 [2024-12-16 12:59:06.870460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.882 qpair failed and we were unable to recover it. 
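The trace above restarts the target (nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 inside the cvl_0_0_ns_spdk namespace, nvmfpid=585631) and then blocks in waitforlisten until the new process is up and listening on the UNIX domain socket /var/tmp/spdk.sock. A simplified stand-in for that wait, assuming only the pid and socket path shown in the log; SPDK's real waitforlisten helper does additional validation beyond this sketch:

    # Simplified sketch (assumption: polling pid liveness plus the RPC
    # socket path is enough; the real helper also probes the socket).
    pid=585631 sock=/var/tmp/spdk.sock
    while kill -0 "$pid" 2>/dev/null && [ ! -S "$sock" ]; do
        sleep 0.1
    done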
00:37:40.882 [2024-12-16 12:59:06.870566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.882 [2024-12-16 12:59:06.870597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.882 qpair failed and we were unable to recover it. 00:37:40.882 12:59:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:40.882 [2024-12-16 12:59:06.870766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.882 [2024-12-16 12:59:06.870798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.882 qpair failed and we were unable to recover it. 00:37:40.882 [2024-12-16 12:59:06.871032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.882 [2024-12-16 12:59:06.871063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.882 qpair failed and we were unable to recover it. 00:37:40.882 [2024-12-16 12:59:06.871258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.882 [2024-12-16 12:59:06.871294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.882 qpair failed and we were unable to recover it. 00:37:40.882 [2024-12-16 12:59:06.871484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.882 [2024-12-16 12:59:06.871515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.882 qpair failed and we were unable to recover it. 00:37:40.882 [2024-12-16 12:59:06.871718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.882 [2024-12-16 12:59:06.871750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.882 qpair failed and we were unable to recover it. 00:37:40.882 [2024-12-16 12:59:06.871888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.882 [2024-12-16 12:59:06.871921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.882 qpair failed and we were unable to recover it. 00:37:40.882 [2024-12-16 12:59:06.872181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.882 [2024-12-16 12:59:06.872214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.882 qpair failed and we were unable to recover it. 00:37:40.882 [2024-12-16 12:59:06.872331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.882 [2024-12-16 12:59:06.872363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.882 qpair failed and we were unable to recover it. 
00:37:40.882 [2024-12-16 12:59:06.872575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.882 [2024-12-16 12:59:06.872608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.882 qpair failed and we were unable to recover it. 00:37:40.882 [2024-12-16 12:59:06.872797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.882 [2024-12-16 12:59:06.872829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.882 qpair failed and we were unable to recover it. 00:37:40.882 [2024-12-16 12:59:06.872950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.882 [2024-12-16 12:59:06.872982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.882 qpair failed and we were unable to recover it. 00:37:40.882 [2024-12-16 12:59:06.873168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.882 [2024-12-16 12:59:06.873202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.882 qpair failed and we were unable to recover it. 00:37:40.882 [2024-12-16 12:59:06.873402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.882 [2024-12-16 12:59:06.873434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.882 qpair failed and we were unable to recover it. 00:37:40.882 [2024-12-16 12:59:06.873672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.882 [2024-12-16 12:59:06.873704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.882 qpair failed and we were unable to recover it. 00:37:40.882 [2024-12-16 12:59:06.873883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.882 [2024-12-16 12:59:06.873915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.882 qpair failed and we were unable to recover it. 00:37:40.882 [2024-12-16 12:59:06.874169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.882 [2024-12-16 12:59:06.874202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.882 qpair failed and we were unable to recover it. 00:37:40.882 [2024-12-16 12:59:06.874424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.882 [2024-12-16 12:59:06.874455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.882 qpair failed and we were unable to recover it. 00:37:40.882 [2024-12-16 12:59:06.874639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.882 [2024-12-16 12:59:06.874671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:40.882 qpair failed and we were unable to recover it. 
00:37:40.882 [2024-12-16 12:59:06.874852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.882 [2024-12-16 12:59:06.874885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420
00:37:40.882 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." entries for tqpair=0x7f84ec000b90 repeat through 12:59:06.890663 ...]
00:37:41.178 [2024-12-16 12:59:06.890924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.178 [2024-12-16 12:59:06.890995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.178 qpair failed and we were unable to recover it.
00:37:41.178 [2024-12-16 12:59:06.891235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.178 [2024-12-16 12:59:06.891303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.178 qpair failed and we were unable to recover it.
[... identical entries for tqpair=0x7f84e4000b90 repeat through 12:59:06.906085 ...]
00:37:41.180 [2024-12-16 12:59:06.906346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.180 [2024-12-16 12:59:06.906409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420
00:37:41.180 qpair failed and we were unable to recover it.
[... identical entries repeat for tqpair=0x7f84ec000b90, 0x1588110, and 0x7f84e4000b90 ...]
00:37:41.181 [2024-12-16 12:59:06.914034] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:37:41.181 [2024-12-16 12:59:06.914075] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:37:41.186 [2024-12-16 12:59:06.957413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.186 [2024-12-16 12:59:06.957445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.186 qpair failed and we were unable to recover it. 00:37:41.186 [2024-12-16 12:59:06.957612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.186 [2024-12-16 12:59:06.957644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.187 qpair failed and we were unable to recover it. 00:37:41.187 [2024-12-16 12:59:06.957813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.187 [2024-12-16 12:59:06.957845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.187 qpair failed and we were unable to recover it. 00:37:41.187 [2024-12-16 12:59:06.958037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.187 [2024-12-16 12:59:06.958069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.187 qpair failed and we were unable to recover it. 00:37:41.187 [2024-12-16 12:59:06.958272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.187 [2024-12-16 12:59:06.958306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.187 qpair failed and we were unable to recover it. 00:37:41.187 [2024-12-16 12:59:06.958416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.187 [2024-12-16 12:59:06.958448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.187 qpair failed and we were unable to recover it. 00:37:41.187 [2024-12-16 12:59:06.958686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.187 [2024-12-16 12:59:06.958717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.187 qpair failed and we were unable to recover it. 00:37:41.187 [2024-12-16 12:59:06.958897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.187 [2024-12-16 12:59:06.958935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.187 qpair failed and we were unable to recover it. 00:37:41.187 [2024-12-16 12:59:06.959072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.187 [2024-12-16 12:59:06.959103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.187 qpair failed and we were unable to recover it. 00:37:41.187 [2024-12-16 12:59:06.959296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.187 [2024-12-16 12:59:06.959329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.187 qpair failed and we were unable to recover it. 
00:37:41.187 [2024-12-16 12:59:06.959497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.187 [2024-12-16 12:59:06.959529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.187 qpair failed and we were unable to recover it. 00:37:41.187 [2024-12-16 12:59:06.959719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.187 [2024-12-16 12:59:06.959751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.187 qpair failed and we were unable to recover it. 00:37:41.187 [2024-12-16 12:59:06.959954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.187 [2024-12-16 12:59:06.959986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.187 qpair failed and we were unable to recover it. 00:37:41.187 [2024-12-16 12:59:06.960133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.187 [2024-12-16 12:59:06.960167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.187 qpair failed and we were unable to recover it. 00:37:41.187 [2024-12-16 12:59:06.960365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.187 [2024-12-16 12:59:06.960397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.187 qpair failed and we were unable to recover it. 00:37:41.187 [2024-12-16 12:59:06.960564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.187 [2024-12-16 12:59:06.960595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.187 qpair failed and we were unable to recover it. 00:37:41.187 [2024-12-16 12:59:06.960776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.187 [2024-12-16 12:59:06.960808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.187 qpair failed and we were unable to recover it. 00:37:41.187 [2024-12-16 12:59:06.960985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.187 [2024-12-16 12:59:06.961017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.187 qpair failed and we were unable to recover it. 00:37:41.187 [2024-12-16 12:59:06.961259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.187 [2024-12-16 12:59:06.961294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.187 qpair failed and we were unable to recover it. 00:37:41.187 [2024-12-16 12:59:06.961461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.187 [2024-12-16 12:59:06.961493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.187 qpair failed and we were unable to recover it. 
00:37:41.187 [2024-12-16 12:59:06.961605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.187 [2024-12-16 12:59:06.961638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.187 qpair failed and we were unable to recover it. 00:37:41.187 [2024-12-16 12:59:06.961884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.187 [2024-12-16 12:59:06.961916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.187 qpair failed and we were unable to recover it. 00:37:41.187 [2024-12-16 12:59:06.962083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.187 [2024-12-16 12:59:06.962125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.187 qpair failed and we were unable to recover it. 00:37:41.187 [2024-12-16 12:59:06.962287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.187 [2024-12-16 12:59:06.962319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.187 qpair failed and we were unable to recover it. 00:37:41.187 [2024-12-16 12:59:06.962505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.187 [2024-12-16 12:59:06.962537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.187 qpair failed and we were unable to recover it. 00:37:41.187 [2024-12-16 12:59:06.962656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.187 [2024-12-16 12:59:06.962687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.187 qpair failed and we were unable to recover it. 00:37:41.187 [2024-12-16 12:59:06.962952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.187 [2024-12-16 12:59:06.962983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.187 qpair failed and we were unable to recover it. 00:37:41.187 [2024-12-16 12:59:06.963165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.187 [2024-12-16 12:59:06.963199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.187 qpair failed and we were unable to recover it. 00:37:41.187 [2024-12-16 12:59:06.963305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.187 [2024-12-16 12:59:06.963336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.187 qpair failed and we were unable to recover it. 00:37:41.187 [2024-12-16 12:59:06.963538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.187 [2024-12-16 12:59:06.963570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.187 qpair failed and we were unable to recover it. 
00:37:41.187 [2024-12-16 12:59:06.963780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.187 [2024-12-16 12:59:06.963812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.187 qpair failed and we were unable to recover it. 00:37:41.187 [2024-12-16 12:59:06.964001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.187 [2024-12-16 12:59:06.964032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.187 qpair failed and we were unable to recover it. 00:37:41.187 [2024-12-16 12:59:06.964202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.187 [2024-12-16 12:59:06.964235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.187 qpair failed and we were unable to recover it. 00:37:41.187 [2024-12-16 12:59:06.964432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.187 [2024-12-16 12:59:06.964463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.187 qpair failed and we were unable to recover it. 00:37:41.187 [2024-12-16 12:59:06.964716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.187 [2024-12-16 12:59:06.964748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.187 qpair failed and we were unable to recover it. 00:37:41.187 [2024-12-16 12:59:06.964917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.187 [2024-12-16 12:59:06.964948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.187 qpair failed and we were unable to recover it. 00:37:41.187 [2024-12-16 12:59:06.965084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.187 [2024-12-16 12:59:06.965126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.187 qpair failed and we were unable to recover it. 00:37:41.187 [2024-12-16 12:59:06.965252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.187 [2024-12-16 12:59:06.965283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.187 qpair failed and we were unable to recover it. 00:37:41.187 [2024-12-16 12:59:06.965536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.187 [2024-12-16 12:59:06.965567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.187 qpair failed and we were unable to recover it. 00:37:41.187 [2024-12-16 12:59:06.965736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.187 [2024-12-16 12:59:06.965767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.187 qpair failed and we were unable to recover it. 
00:37:41.187 [2024-12-16 12:59:06.966004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.188 [2024-12-16 12:59:06.966035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.188 qpair failed and we were unable to recover it. 00:37:41.188 [2024-12-16 12:59:06.966302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.188 [2024-12-16 12:59:06.966335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.188 qpair failed and we were unable to recover it. 00:37:41.188 [2024-12-16 12:59:06.966501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.188 [2024-12-16 12:59:06.966533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.188 qpair failed and we were unable to recover it. 00:37:41.188 [2024-12-16 12:59:06.966777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.188 [2024-12-16 12:59:06.966809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.188 qpair failed and we were unable to recover it. 00:37:41.188 [2024-12-16 12:59:06.966981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.188 [2024-12-16 12:59:06.967013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.188 qpair failed and we were unable to recover it. 00:37:41.188 [2024-12-16 12:59:06.967201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.188 [2024-12-16 12:59:06.967234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.188 qpair failed and we were unable to recover it. 00:37:41.188 [2024-12-16 12:59:06.967349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.188 [2024-12-16 12:59:06.967382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.188 qpair failed and we were unable to recover it. 00:37:41.188 [2024-12-16 12:59:06.967647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.188 [2024-12-16 12:59:06.967685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.188 qpair failed and we were unable to recover it. 00:37:41.188 [2024-12-16 12:59:06.967871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.188 [2024-12-16 12:59:06.967902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.188 qpair failed and we were unable to recover it. 00:37:41.188 [2024-12-16 12:59:06.968107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.188 [2024-12-16 12:59:06.968166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.188 qpair failed and we were unable to recover it. 
00:37:41.188 [2024-12-16 12:59:06.968337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.188 [2024-12-16 12:59:06.968369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.188 qpair failed and we were unable to recover it. 00:37:41.188 [2024-12-16 12:59:06.968491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.188 [2024-12-16 12:59:06.968523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.188 qpair failed and we were unable to recover it. 00:37:41.188 [2024-12-16 12:59:06.968655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.188 [2024-12-16 12:59:06.968687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.188 qpair failed and we were unable to recover it. 00:37:41.188 [2024-12-16 12:59:06.968866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.188 [2024-12-16 12:59:06.968898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.188 qpair failed and we were unable to recover it. 00:37:41.188 [2024-12-16 12:59:06.969170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.188 [2024-12-16 12:59:06.969204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.188 qpair failed and we were unable to recover it. 00:37:41.188 [2024-12-16 12:59:06.969394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.188 [2024-12-16 12:59:06.969425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.188 qpair failed and we were unable to recover it. 00:37:41.188 [2024-12-16 12:59:06.969557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.188 [2024-12-16 12:59:06.969587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.188 qpair failed and we were unable to recover it. 00:37:41.188 [2024-12-16 12:59:06.969703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.188 [2024-12-16 12:59:06.969735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.188 qpair failed and we were unable to recover it. 00:37:41.188 [2024-12-16 12:59:06.969972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.188 [2024-12-16 12:59:06.970003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.188 qpair failed and we were unable to recover it. 00:37:41.188 [2024-12-16 12:59:06.970181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.188 [2024-12-16 12:59:06.970214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.188 qpair failed and we were unable to recover it. 
00:37:41.188 [2024-12-16 12:59:06.970476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.188 [2024-12-16 12:59:06.970508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.188 qpair failed and we were unable to recover it. 00:37:41.188 [2024-12-16 12:59:06.970698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.188 [2024-12-16 12:59:06.970730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.188 qpair failed and we were unable to recover it. 00:37:41.188 [2024-12-16 12:59:06.970981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.188 [2024-12-16 12:59:06.971012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.188 qpair failed and we were unable to recover it. 00:37:41.188 [2024-12-16 12:59:06.971198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.188 [2024-12-16 12:59:06.971232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.188 qpair failed and we were unable to recover it. 00:37:41.188 [2024-12-16 12:59:06.971364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.188 [2024-12-16 12:59:06.971397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.188 qpair failed and we were unable to recover it. 00:37:41.188 [2024-12-16 12:59:06.971570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.188 [2024-12-16 12:59:06.971601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.188 qpair failed and we were unable to recover it. 00:37:41.188 [2024-12-16 12:59:06.971782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.188 [2024-12-16 12:59:06.971814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.188 qpair failed and we were unable to recover it. 00:37:41.188 [2024-12-16 12:59:06.972054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.188 [2024-12-16 12:59:06.972085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.188 qpair failed and we were unable to recover it. 00:37:41.188 [2024-12-16 12:59:06.972329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.188 [2024-12-16 12:59:06.972394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420 00:37:41.188 qpair failed and we were unable to recover it. 00:37:41.188 [2024-12-16 12:59:06.972677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.188 [2024-12-16 12:59:06.972723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.188 qpair failed and we were unable to recover it. 
00:37:41.188 [2024-12-16 12:59:06.972869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.188 [2024-12-16 12:59:06.972920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.188 qpair failed and we were unable to recover it. 00:37:41.188 [2024-12-16 12:59:06.973131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.188 [2024-12-16 12:59:06.973165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.188 qpair failed and we were unable to recover it. 00:37:41.188 [2024-12-16 12:59:06.973283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.188 [2024-12-16 12:59:06.973315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.188 qpair failed and we were unable to recover it. 00:37:41.188 [2024-12-16 12:59:06.973613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.188 [2024-12-16 12:59:06.973645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.188 qpair failed and we were unable to recover it. 00:37:41.188 [2024-12-16 12:59:06.973866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.188 [2024-12-16 12:59:06.973905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.188 qpair failed and we were unable to recover it. 00:37:41.188 [2024-12-16 12:59:06.974153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.188 [2024-12-16 12:59:06.974187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.188 qpair failed and we were unable to recover it. 00:37:41.188 [2024-12-16 12:59:06.974384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.188 [2024-12-16 12:59:06.974416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.188 qpair failed and we were unable to recover it. 00:37:41.188 [2024-12-16 12:59:06.974652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.188 [2024-12-16 12:59:06.974683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.188 qpair failed and we were unable to recover it. 00:37:41.188 [2024-12-16 12:59:06.974870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.188 [2024-12-16 12:59:06.974901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.188 qpair failed and we were unable to recover it. 00:37:41.188 [2024-12-16 12:59:06.975005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.188 [2024-12-16 12:59:06.975036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.189 qpair failed and we were unable to recover it. 
00:37:41.189 [2024-12-16 12:59:06.975216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.189 [2024-12-16 12:59:06.975252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.189 qpair failed and we were unable to recover it. 00:37:41.189 [2024-12-16 12:59:06.975421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.189 [2024-12-16 12:59:06.975454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.189 qpair failed and we were unable to recover it. 00:37:41.189 [2024-12-16 12:59:06.975673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.189 [2024-12-16 12:59:06.975704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.189 qpair failed and we were unable to recover it. 00:37:41.189 [2024-12-16 12:59:06.975822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.189 [2024-12-16 12:59:06.975854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.189 qpair failed and we were unable to recover it. 00:37:41.189 [2024-12-16 12:59:06.976050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.189 [2024-12-16 12:59:06.976082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.189 qpair failed and we were unable to recover it. 00:37:41.189 [2024-12-16 12:59:06.976271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.189 [2024-12-16 12:59:06.976305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.189 qpair failed and we were unable to recover it. 00:37:41.189 [2024-12-16 12:59:06.976412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.189 [2024-12-16 12:59:06.976444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.189 qpair failed and we were unable to recover it. 00:37:41.189 [2024-12-16 12:59:06.976547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.189 [2024-12-16 12:59:06.976587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.189 qpair failed and we were unable to recover it. 00:37:41.189 [2024-12-16 12:59:06.976721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.189 [2024-12-16 12:59:06.976752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.189 qpair failed and we were unable to recover it. 00:37:41.189 [2024-12-16 12:59:06.976928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.189 [2024-12-16 12:59:06.976959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.189 qpair failed and we were unable to recover it. 
00:37:41.189 [2024-12-16 12:59:06.977077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.189 [2024-12-16 12:59:06.977108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.189 qpair failed and we were unable to recover it. 00:37:41.189 [2024-12-16 12:59:06.977307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.189 [2024-12-16 12:59:06.977339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.189 qpair failed and we were unable to recover it. 00:37:41.189 [2024-12-16 12:59:06.977528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.189 [2024-12-16 12:59:06.977561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.189 qpair failed and we were unable to recover it. 00:37:41.189 [2024-12-16 12:59:06.977767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.189 [2024-12-16 12:59:06.977798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.189 qpair failed and we were unable to recover it. 00:37:41.189 [2024-12-16 12:59:06.978065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.189 [2024-12-16 12:59:06.978096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.189 qpair failed and we were unable to recover it. 00:37:41.189 [2024-12-16 12:59:06.978289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.189 [2024-12-16 12:59:06.978321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.189 qpair failed and we were unable to recover it. 00:37:41.189 [2024-12-16 12:59:06.978490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.189 [2024-12-16 12:59:06.978521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.189 qpair failed and we were unable to recover it. 00:37:41.189 [2024-12-16 12:59:06.978691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.189 [2024-12-16 12:59:06.978723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.189 qpair failed and we were unable to recover it. 00:37:41.189 [2024-12-16 12:59:06.978910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.189 [2024-12-16 12:59:06.978941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.189 qpair failed and we were unable to recover it. 00:37:41.189 [2024-12-16 12:59:06.979136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.189 [2024-12-16 12:59:06.979179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.189 qpair failed and we were unable to recover it. 
00:37:41.189 [2024-12-16 12:59:06.979285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.189 [2024-12-16 12:59:06.979316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.189 qpair failed and we were unable to recover it. 00:37:41.189 [2024-12-16 12:59:06.979552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.189 [2024-12-16 12:59:06.979585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.189 qpair failed and we were unable to recover it. 00:37:41.189 [2024-12-16 12:59:06.979694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.189 [2024-12-16 12:59:06.979725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.189 qpair failed and we were unable to recover it. 00:37:41.189 [2024-12-16 12:59:06.979837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.189 [2024-12-16 12:59:06.979869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.189 qpair failed and we were unable to recover it. 00:37:41.189 [2024-12-16 12:59:06.980038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.189 [2024-12-16 12:59:06.980071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.189 qpair failed and we were unable to recover it. 00:37:41.189 [2024-12-16 12:59:06.980201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.189 [2024-12-16 12:59:06.980238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.189 qpair failed and we were unable to recover it. 00:37:41.189 [2024-12-16 12:59:06.980414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.189 [2024-12-16 12:59:06.980448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.189 qpair failed and we were unable to recover it. 00:37:41.189 [2024-12-16 12:59:06.980618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.189 [2024-12-16 12:59:06.980652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.189 qpair failed and we were unable to recover it. 00:37:41.189 [2024-12-16 12:59:06.980839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.189 [2024-12-16 12:59:06.980870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.189 qpair failed and we were unable to recover it. 00:37:41.189 [2024-12-16 12:59:06.981052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.189 [2024-12-16 12:59:06.981083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.189 qpair failed and we were unable to recover it. 
00:37:41.189 [2024-12-16 12:59:06.981297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.189 [2024-12-16 12:59:06.981334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.189 qpair failed and we were unable to recover it. 00:37:41.189 [2024-12-16 12:59:06.981470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.189 [2024-12-16 12:59:06.981503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.189 qpair failed and we were unable to recover it. 00:37:41.189 [2024-12-16 12:59:06.981770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.189 [2024-12-16 12:59:06.981801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.189 qpair failed and we were unable to recover it. 00:37:41.189 [2024-12-16 12:59:06.982068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.189 [2024-12-16 12:59:06.982099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.189 qpair failed and we were unable to recover it. 00:37:41.189 [2024-12-16 12:59:06.982320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.189 [2024-12-16 12:59:06.982356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.189 qpair failed and we were unable to recover it. 00:37:41.189 [2024-12-16 12:59:06.982562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.189 [2024-12-16 12:59:06.982594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.189 qpair failed and we were unable to recover it. 00:37:41.189 [2024-12-16 12:59:06.982711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.189 [2024-12-16 12:59:06.982743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.189 qpair failed and we were unable to recover it. 00:37:41.189 [2024-12-16 12:59:06.983011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.189 [2024-12-16 12:59:06.983043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.189 qpair failed and we were unable to recover it. 00:37:41.189 [2024-12-16 12:59:06.983303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.189 [2024-12-16 12:59:06.983336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.190 qpair failed and we were unable to recover it. 00:37:41.190 [2024-12-16 12:59:06.983552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.190 [2024-12-16 12:59:06.983585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.190 qpair failed and we were unable to recover it. 
00:37:41.190 [2024-12-16 12:59:06.983722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.190 [2024-12-16 12:59:06.983754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.190 qpair failed and we were unable to recover it. 00:37:41.190 [2024-12-16 12:59:06.983934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.190 [2024-12-16 12:59:06.983966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.190 qpair failed and we were unable to recover it. 00:37:41.190 [2024-12-16 12:59:06.984259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.190 [2024-12-16 12:59:06.984295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.190 qpair failed and we were unable to recover it. 00:37:41.190 [2024-12-16 12:59:06.984416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.190 [2024-12-16 12:59:06.984447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.190 qpair failed and we were unable to recover it. 00:37:41.190 [2024-12-16 12:59:06.984706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.190 [2024-12-16 12:59:06.984739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.190 qpair failed and we were unable to recover it. 00:37:41.190 [2024-12-16 12:59:06.984928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.190 [2024-12-16 12:59:06.984960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.190 qpair failed and we were unable to recover it. 00:37:41.190 [2024-12-16 12:59:06.985201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.190 [2024-12-16 12:59:06.985234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.190 qpair failed and we were unable to recover it. 00:37:41.190 [2024-12-16 12:59:06.985406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.190 [2024-12-16 12:59:06.985445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.190 qpair failed and we were unable to recover it. 00:37:41.190 [2024-12-16 12:59:06.985726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.190 [2024-12-16 12:59:06.985758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.190 qpair failed and we were unable to recover it. 00:37:41.190 [2024-12-16 12:59:06.985901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.190 [2024-12-16 12:59:06.985932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.190 qpair failed and we were unable to recover it. 
00:37:41.190 [2024-12-16 12:59:06.986134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.190 [2024-12-16 12:59:06.986168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.190 qpair failed and we were unable to recover it. 00:37:41.190 [2024-12-16 12:59:06.986302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.190 [2024-12-16 12:59:06.986335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.190 qpair failed and we were unable to recover it. 00:37:41.190 [2024-12-16 12:59:06.986516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.190 [2024-12-16 12:59:06.986548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.190 qpair failed and we were unable to recover it. 00:37:41.190 [2024-12-16 12:59:06.986686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.190 [2024-12-16 12:59:06.986718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.190 qpair failed and we were unable to recover it. 00:37:41.190 [2024-12-16 12:59:06.986887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.190 [2024-12-16 12:59:06.986919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.190 qpair failed and we were unable to recover it. 00:37:41.190 [2024-12-16 12:59:06.987087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.190 [2024-12-16 12:59:06.987128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420 00:37:41.190 qpair failed and we were unable to recover it. 00:37:41.190 [2024-12-16 12:59:06.987446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.190 [2024-12-16 12:59:06.987485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.190 qpair failed and we were unable to recover it. 00:37:41.190 [2024-12-16 12:59:06.987625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.190 [2024-12-16 12:59:06.987657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.190 qpair failed and we were unable to recover it. 00:37:41.190 [2024-12-16 12:59:06.987765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.190 [2024-12-16 12:59:06.987797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.190 qpair failed and we were unable to recover it. 00:37:41.190 [2024-12-16 12:59:06.987967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.190 [2024-12-16 12:59:06.988000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.190 qpair failed and we were unable to recover it. 
00:37:41.190 [2024-12-16 12:59:06.988123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.190 [2024-12-16 12:59:06.988156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.190 qpair failed and we were unable to recover it. 00:37:41.190 [2024-12-16 12:59:06.988274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.190 [2024-12-16 12:59:06.988307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.190 qpair failed and we were unable to recover it. 00:37:41.190 [2024-12-16 12:59:06.988403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.190 [2024-12-16 12:59:06.988436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.190 qpair failed and we were unable to recover it. 00:37:41.190 [2024-12-16 12:59:06.988697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.190 [2024-12-16 12:59:06.988729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.190 qpair failed and we were unable to recover it. 00:37:41.190 [2024-12-16 12:59:06.988915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.190 [2024-12-16 12:59:06.988947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.190 qpair failed and we were unable to recover it. 00:37:41.190 [2024-12-16 12:59:06.989161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.190 [2024-12-16 12:59:06.989195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.190 qpair failed and we were unable to recover it. 00:37:41.190 [2024-12-16 12:59:06.989322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.190 [2024-12-16 12:59:06.989353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.190 qpair failed and we were unable to recover it. 00:37:41.190 [2024-12-16 12:59:06.989537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.190 [2024-12-16 12:59:06.989569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.190 qpair failed and we were unable to recover it. 00:37:41.190 [2024-12-16 12:59:06.989693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.190 [2024-12-16 12:59:06.989726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.190 qpair failed and we were unable to recover it. 
00:37:41.190 [2024-12-16 12:59:06.989978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.190 [2024-12-16 12:59:06.990007] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:37:41.190 [2024-12-16 12:59:06.990010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.190 qpair failed and we were unable to recover it.
[... the same connect() failed, errno = 111 / sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. sequence repeats continuously from 12:59:06.990277 through 12:59:07.028020 ...]
00:37:41.195 [2024-12-16 12:59:07.028191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.196 [2024-12-16 12:59:07.028226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.196 qpair failed and we were unable to recover it. 00:37:41.196 [2024-12-16 12:59:07.028400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.196 [2024-12-16 12:59:07.028433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.196 qpair failed and we were unable to recover it. 00:37:41.196 [2024-12-16 12:59:07.028621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.196 [2024-12-16 12:59:07.028654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.196 qpair failed and we were unable to recover it. 00:37:41.196 [2024-12-16 12:59:07.028804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.196 [2024-12-16 12:59:07.028837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.196 qpair failed and we were unable to recover it. 00:37:41.196 [2024-12-16 12:59:07.029017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.196 [2024-12-16 12:59:07.029050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.196 qpair failed and we were unable to recover it. 00:37:41.196 [2024-12-16 12:59:07.029316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.196 [2024-12-16 12:59:07.029350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.196 qpair failed and we were unable to recover it. 00:37:41.196 [2024-12-16 12:59:07.029484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.196 [2024-12-16 12:59:07.029516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.196 qpair failed and we were unable to recover it. 00:37:41.196 [2024-12-16 12:59:07.029631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.196 [2024-12-16 12:59:07.029664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.196 qpair failed and we were unable to recover it. 00:37:41.196 [2024-12-16 12:59:07.029726] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:41.196 [2024-12-16 12:59:07.029759] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:41.196 [2024-12-16 12:59:07.029766] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:41.196 [2024-12-16 12:59:07.029773] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:41.196 [2024-12-16 12:59:07.029778] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
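For anyone triaging this run: errno = 111 on Linux is ECONNREFUSED, which typically means the initiator's connect() to 10.0.0.2:4420 is being actively refused because nothing is listening on the NVMe/TCP port yet, so nvme_tcp_qpair_connect_sock fails and retries in a tight loop. A minimal standalone C sketch of the same failure follows; it is not SPDK code, the address and port are copied from the log, and reachability of 10.0.0.2 is assumed.

/*
 * Minimal sketch, not SPDK code: reproduces the errno = 111 pattern above.
 * On Linux, 111 is ECONNREFUSED: connect() to a reachable host with no
 * listener on the port is actively refused. Address and port are copied
 * from the log; whether 10.0.0.2 answers at all is an assumption here.
 */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    struct sockaddr_in addr = { 0 };
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    if (fd < 0) {
        perror("socket");
        return 1;
    }
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                    /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target address from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on the port this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}

If a sketch like this hangs or reports errno = 113 (EHOSTUNREACH) instead, the address rather than the missing listener is the problem, which is the usual way to tell a target app that has not started from a broken test network.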
00:37:41.196 [2024-12-16 12:59:07.029890] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5
00:37:41.196 [2024-12-16 12:59:07.029924] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6
00:37:41.196 [2024-12-16 12:59:07.029961] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4
00:37:41.196 [2024-12-16 12:59:07.029962] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7
00:37:41.196 [... connect()/qpair-failure triples continue for tqpair=0x7f84e4000b90 through 2024-12-16 12:59:07.035062 ...]
00:37:41.197 [2024-12-16 12:59:07.035304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.197 [2024-12-16 12:59:07.035359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420
00:37:41.197 qpair failed and we were unable to recover it.
00:37:41.197 [... triple repeats for tqpair=0x7f84ec000b90 through 2024-12-16 12:59:07.038092 ...]
00:37:41.197 [2024-12-16 12:59:07.038168] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15960b0 (9): Bad file descriptor
00:37:41.197 [2024-12-16 12:59:07.038328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.197 [2024-12-16 12:59:07.038364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.197 qpair failed and we were unable to recover it.
00:37:41.197 [... triple repeats for tqpair=0x7f84e4000b90 through 2024-12-16 12:59:07.045916 ...]
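The flush error above carries a different errno: 9 is EBADF, an I/O attempt on a file descriptor that has already been closed or invalidated, consistent with the qpair's socket being torn down before a pending flush ran. A minimal sketch of the same class of failure (again not SPDK code, just the errno mechanics):

/*
 * Minimal sketch, not SPDK code: errno = 9 is EBADF, i.e. an I/O attempt
 * on a descriptor that is no longer valid -- the same class of failure as
 * the "Failed to flush tqpair ... (9): Bad file descriptor" line above.
 */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>

int main(void)
{
    int fds[2];

    if (pipe(fds) < 0) {
        perror("pipe");
        return 1;
    }
    close(fds[1]);                    /* descriptor torn down first... */
    if (write(fds[1], "x", 1) < 0) {
        /* ...so the late write prints:
         * write() failed, errno = 9 (Bad file descriptor) */
        printf("write() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fds[0]);
    return 0;
}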
00:37:41.198 [2024-12-16 12:59:07.046128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.198 [2024-12-16 12:59:07.046182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.198 qpair failed and we were unable to recover it.
00:37:41.199 [... triple repeats for tqpair=0x1588110 through 2024-12-16 12:59:07.051852 ...]
00:37:41.199 [2024-12-16 12:59:07.052032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.199 [2024-12-16 12:59:07.052070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.199 qpair failed and we were unable to recover it. 00:37:41.199 [2024-12-16 12:59:07.052191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.199 [2024-12-16 12:59:07.052225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.199 qpair failed and we were unable to recover it. 00:37:41.199 [2024-12-16 12:59:07.052412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.199 [2024-12-16 12:59:07.052444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.199 qpair failed and we were unable to recover it. 00:37:41.199 [2024-12-16 12:59:07.052550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.199 [2024-12-16 12:59:07.052582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.199 qpair failed and we were unable to recover it. 00:37:41.199 [2024-12-16 12:59:07.052760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.199 [2024-12-16 12:59:07.052792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.199 qpair failed and we were unable to recover it. 00:37:41.199 [2024-12-16 12:59:07.052990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.199 [2024-12-16 12:59:07.053022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.199 qpair failed and we were unable to recover it. 00:37:41.199 [2024-12-16 12:59:07.053130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.199 [2024-12-16 12:59:07.053164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.199 qpair failed and we were unable to recover it. 00:37:41.199 [2024-12-16 12:59:07.053280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.199 [2024-12-16 12:59:07.053312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.199 qpair failed and we were unable to recover it. 00:37:41.199 [2024-12-16 12:59:07.053412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.199 [2024-12-16 12:59:07.053444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.199 qpair failed and we were unable to recover it. 00:37:41.199 [2024-12-16 12:59:07.053655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.199 [2024-12-16 12:59:07.053687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.199 qpair failed and we were unable to recover it. 
00:37:41.199 [2024-12-16 12:59:07.053878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.199 [2024-12-16 12:59:07.053910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.199 qpair failed and we were unable to recover it. 00:37:41.199 [2024-12-16 12:59:07.054175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.199 [2024-12-16 12:59:07.054208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.199 qpair failed and we were unable to recover it. 00:37:41.199 [2024-12-16 12:59:07.054383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.199 [2024-12-16 12:59:07.054415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.199 qpair failed and we were unable to recover it. 00:37:41.199 [2024-12-16 12:59:07.054590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.199 [2024-12-16 12:59:07.054629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.199 qpair failed and we were unable to recover it. 00:37:41.199 [2024-12-16 12:59:07.054890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.199 [2024-12-16 12:59:07.054922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.199 qpair failed and we were unable to recover it. 00:37:41.199 [2024-12-16 12:59:07.055123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.199 [2024-12-16 12:59:07.055157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.199 qpair failed and we were unable to recover it. 00:37:41.199 [2024-12-16 12:59:07.055347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.199 [2024-12-16 12:59:07.055380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.199 qpair failed and we were unable to recover it. 00:37:41.199 [2024-12-16 12:59:07.055487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.199 [2024-12-16 12:59:07.055519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.199 qpair failed and we were unable to recover it. 00:37:41.199 [2024-12-16 12:59:07.055690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.199 [2024-12-16 12:59:07.055722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.199 qpair failed and we were unable to recover it. 00:37:41.199 [2024-12-16 12:59:07.055957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.199 [2024-12-16 12:59:07.055990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.199 qpair failed and we were unable to recover it. 
00:37:41.199 [2024-12-16 12:59:07.056184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.199 [2024-12-16 12:59:07.056218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.199 qpair failed and we were unable to recover it. 00:37:41.199 [2024-12-16 12:59:07.056395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.199 [2024-12-16 12:59:07.056427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.199 qpair failed and we were unable to recover it. 00:37:41.199 [2024-12-16 12:59:07.056621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.199 [2024-12-16 12:59:07.056654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.199 qpair failed and we were unable to recover it. 00:37:41.199 [2024-12-16 12:59:07.056833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.199 [2024-12-16 12:59:07.056865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.199 qpair failed and we were unable to recover it. 00:37:41.199 [2024-12-16 12:59:07.056986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.199 [2024-12-16 12:59:07.057018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.199 qpair failed and we were unable to recover it. 00:37:41.199 [2024-12-16 12:59:07.057194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.199 [2024-12-16 12:59:07.057228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.200 qpair failed and we were unable to recover it. 00:37:41.200 [2024-12-16 12:59:07.057349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.200 [2024-12-16 12:59:07.057382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.200 qpair failed and we were unable to recover it. 00:37:41.200 [2024-12-16 12:59:07.057498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.200 [2024-12-16 12:59:07.057531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.200 qpair failed and we were unable to recover it. 00:37:41.200 [2024-12-16 12:59:07.057641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.200 [2024-12-16 12:59:07.057674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.200 qpair failed and we were unable to recover it. 00:37:41.200 [2024-12-16 12:59:07.057808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.200 [2024-12-16 12:59:07.057842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.200 qpair failed and we were unable to recover it. 
00:37:41.200 [2024-12-16 12:59:07.057957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.200 [2024-12-16 12:59:07.057990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.200 qpair failed and we were unable to recover it. 00:37:41.200 [2024-12-16 12:59:07.058165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.200 [2024-12-16 12:59:07.058200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.200 qpair failed and we were unable to recover it. 00:37:41.200 [2024-12-16 12:59:07.058304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.200 [2024-12-16 12:59:07.058336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.200 qpair failed and we were unable to recover it. 00:37:41.200 [2024-12-16 12:59:07.058526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.200 [2024-12-16 12:59:07.058560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.200 qpair failed and we were unable to recover it. 00:37:41.200 [2024-12-16 12:59:07.058677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.200 [2024-12-16 12:59:07.058711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.200 qpair failed and we were unable to recover it. 00:37:41.200 [2024-12-16 12:59:07.058814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.200 [2024-12-16 12:59:07.058847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.200 qpair failed and we were unable to recover it. 00:37:41.200 [2024-12-16 12:59:07.059022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.200 [2024-12-16 12:59:07.059055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.200 qpair failed and we were unable to recover it. 00:37:41.200 [2024-12-16 12:59:07.059180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.200 [2024-12-16 12:59:07.059215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.200 qpair failed and we were unable to recover it. 00:37:41.200 [2024-12-16 12:59:07.059321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.200 [2024-12-16 12:59:07.059354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.200 qpair failed and we were unable to recover it. 00:37:41.200 [2024-12-16 12:59:07.059548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.200 [2024-12-16 12:59:07.059582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.200 qpair failed and we were unable to recover it. 
00:37:41.200 [2024-12-16 12:59:07.059848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.200 [2024-12-16 12:59:07.059889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.200 qpair failed and we were unable to recover it.
00:37:41.200 [2024-12-16 12:59:07.060059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.200 [2024-12-16 12:59:07.060093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.200 qpair failed and we were unable to recover it.
00:37:41.200 [2024-12-16 12:59:07.060381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.200 [2024-12-16 12:59:07.060416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.200 qpair failed and we were unable to recover it.
00:37:41.200 [2024-12-16 12:59:07.060679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.200 [2024-12-16 12:59:07.060712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.200 qpair failed and we were unable to recover it.
00:37:41.200 [2024-12-16 12:59:07.061021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.200 [2024-12-16 12:59:07.061057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.200 qpair failed and we were unable to recover it.
00:37:41.200 [2024-12-16 12:59:07.061204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.200 [2024-12-16 12:59:07.061240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.200 qpair failed and we were unable to recover it.
00:37:41.200 [2024-12-16 12:59:07.061418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.200 [2024-12-16 12:59:07.061451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.200 qpair failed and we were unable to recover it.
00:37:41.200 [2024-12-16 12:59:07.061637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.200 [2024-12-16 12:59:07.061670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.200 qpair failed and we were unable to recover it.
00:37:41.200 [2024-12-16 12:59:07.061843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.200 [2024-12-16 12:59:07.061877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.200 qpair failed and we were unable to recover it.
00:37:41.200 [2024-12-16 12:59:07.061979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.200 [2024-12-16 12:59:07.062012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.200 qpair failed and we were unable to recover it.
00:37:41.200 [2024-12-16 12:59:07.062224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.200 [2024-12-16 12:59:07.062259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.200 qpair failed and we were unable to recover it.
00:37:41.200 [2024-12-16 12:59:07.062499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.200 [2024-12-16 12:59:07.062533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.200 qpair failed and we were unable to recover it.
00:37:41.200 [2024-12-16 12:59:07.062657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.200 [2024-12-16 12:59:07.062690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.200 qpair failed and we were unable to recover it.
00:37:41.200 [2024-12-16 12:59:07.062878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.200 [2024-12-16 12:59:07.062912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.200 qpair failed and we were unable to recover it.
00:37:41.200 [2024-12-16 12:59:07.063102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.200 [2024-12-16 12:59:07.063149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.200 qpair failed and we were unable to recover it.
00:37:41.200 [2024-12-16 12:59:07.063267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.200 [2024-12-16 12:59:07.063302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.200 qpair failed and we were unable to recover it.
00:37:41.200 [2024-12-16 12:59:07.063593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.200 [2024-12-16 12:59:07.063627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.200 qpair failed and we were unable to recover it.
00:37:41.200 [2024-12-16 12:59:07.063740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.200 [2024-12-16 12:59:07.063773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.200 qpair failed and we were unable to recover it.
00:37:41.200 [2024-12-16 12:59:07.063960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.200 [2024-12-16 12:59:07.063994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.200 qpair failed and we were unable to recover it.
00:37:41.200 [2024-12-16 12:59:07.064170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.200 [2024-12-16 12:59:07.064204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.200 qpair failed and we were unable to recover it.
00:37:41.200 [2024-12-16 12:59:07.064400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.200 [2024-12-16 12:59:07.064435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.200 qpair failed and we were unable to recover it.
00:37:41.200 [2024-12-16 12:59:07.064626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.200 [2024-12-16 12:59:07.064658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.200 qpair failed and we were unable to recover it.
00:37:41.200 [2024-12-16 12:59:07.064861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.201 [2024-12-16 12:59:07.064895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.201 qpair failed and we were unable to recover it.
00:37:41.201 [2024-12-16 12:59:07.065070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.201 [2024-12-16 12:59:07.065103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.201 qpair failed and we were unable to recover it.
00:37:41.201 [2024-12-16 12:59:07.065290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.201 [2024-12-16 12:59:07.065323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.201 qpair failed and we were unable to recover it.
00:37:41.201 [2024-12-16 12:59:07.065569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.201 [2024-12-16 12:59:07.065602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.201 qpair failed and we were unable to recover it.
00:37:41.201 [2024-12-16 12:59:07.065867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.201 [2024-12-16 12:59:07.065899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.201 qpair failed and we were unable to recover it.
00:37:41.201 [2024-12-16 12:59:07.066174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.201 [2024-12-16 12:59:07.066208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.201 qpair failed and we were unable to recover it.
00:37:41.201 [2024-12-16 12:59:07.066396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.201 [2024-12-16 12:59:07.066428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.201 qpair failed and we were unable to recover it.
00:37:41.201 [2024-12-16 12:59:07.066713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.201 [2024-12-16 12:59:07.066745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.201 qpair failed and we were unable to recover it.
00:37:41.201 [2024-12-16 12:59:07.066915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.201 [2024-12-16 12:59:07.066947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.201 qpair failed and we were unable to recover it.
00:37:41.201 [2024-12-16 12:59:07.067130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.201 [2024-12-16 12:59:07.067163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.201 qpair failed and we were unable to recover it.
00:37:41.201 [2024-12-16 12:59:07.067370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.201 [2024-12-16 12:59:07.067402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.201 qpair failed and we were unable to recover it.
00:37:41.201 [2024-12-16 12:59:07.067515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.201 [2024-12-16 12:59:07.067547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.201 qpair failed and we were unable to recover it.
00:37:41.201 [2024-12-16 12:59:07.067669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.201 [2024-12-16 12:59:07.067701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.201 qpair failed and we were unable to recover it.
00:37:41.201 [2024-12-16 12:59:07.067881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.201 [2024-12-16 12:59:07.067912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.201 qpair failed and we were unable to recover it.
00:37:41.201 [2024-12-16 12:59:07.068177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.201 [2024-12-16 12:59:07.068210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.201 qpair failed and we were unable to recover it.
00:37:41.201 [2024-12-16 12:59:07.068317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.201 [2024-12-16 12:59:07.068350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.201 qpair failed and we were unable to recover it.
00:37:41.201 [2024-12-16 12:59:07.068616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.201 [2024-12-16 12:59:07.068648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.201 qpair failed and we were unable to recover it.
00:37:41.201 [2024-12-16 12:59:07.068893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.201 [2024-12-16 12:59:07.068926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.201 qpair failed and we were unable to recover it.
00:37:41.201 [2024-12-16 12:59:07.069100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.201 [2024-12-16 12:59:07.069164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.201 qpair failed and we were unable to recover it.
00:37:41.201 [2024-12-16 12:59:07.069295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.201 [2024-12-16 12:59:07.069328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.201 qpair failed and we were unable to recover it.
00:37:41.201 [2024-12-16 12:59:07.069517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.201 [2024-12-16 12:59:07.069549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.201 qpair failed and we were unable to recover it.
00:37:41.201 [2024-12-16 12:59:07.069762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.201 [2024-12-16 12:59:07.069795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.201 qpair failed and we were unable to recover it.
00:37:41.201 [2024-12-16 12:59:07.069929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.201 [2024-12-16 12:59:07.069961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.201 qpair failed and we were unable to recover it.
00:37:41.201 [2024-12-16 12:59:07.070151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.201 [2024-12-16 12:59:07.070187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.201 qpair failed and we were unable to recover it.
00:37:41.201 [2024-12-16 12:59:07.070443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.201 [2024-12-16 12:59:07.070475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.201 qpair failed and we were unable to recover it.
00:37:41.201 [2024-12-16 12:59:07.070666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.201 [2024-12-16 12:59:07.070698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.201 qpair failed and we were unable to recover it.
00:37:41.201 [2024-12-16 12:59:07.070952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.201 [2024-12-16 12:59:07.070984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.201 qpair failed and we were unable to recover it.
00:37:41.201 [2024-12-16 12:59:07.071190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.201 [2024-12-16 12:59:07.071224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.201 qpair failed and we were unable to recover it.
00:37:41.201 [2024-12-16 12:59:07.071416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.201 [2024-12-16 12:59:07.071450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.201 qpair failed and we were unable to recover it.
00:37:41.201 [2024-12-16 12:59:07.071708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.201 [2024-12-16 12:59:07.071741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.201 qpair failed and we were unable to recover it.
00:37:41.201 [2024-12-16 12:59:07.071925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.201 [2024-12-16 12:59:07.071957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.201 qpair failed and we were unable to recover it.
00:37:41.201 [2024-12-16 12:59:07.072222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.201 [2024-12-16 12:59:07.072256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.201 qpair failed and we were unable to recover it.
00:37:41.201 [2024-12-16 12:59:07.072463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.201 [2024-12-16 12:59:07.072495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.201 qpair failed and we were unable to recover it.
00:37:41.201 [2024-12-16 12:59:07.072675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.201 [2024-12-16 12:59:07.072707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.201 qpair failed and we were unable to recover it.
00:37:41.201 [2024-12-16 12:59:07.072915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.201 [2024-12-16 12:59:07.072947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.201 qpair failed and we were unable to recover it.
00:37:41.201 [2024-12-16 12:59:07.073068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.201 [2024-12-16 12:59:07.073100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.201 qpair failed and we were unable to recover it.
00:37:41.201 [2024-12-16 12:59:07.073316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.201 [2024-12-16 12:59:07.073349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.201 qpair failed and we were unable to recover it.
00:37:41.202 [2024-12-16 12:59:07.073612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.202 [2024-12-16 12:59:07.073644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.202 qpair failed and we were unable to recover it.
00:37:41.202 [2024-12-16 12:59:07.073858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.202 [2024-12-16 12:59:07.073890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.202 qpair failed and we were unable to recover it.
00:37:41.202 [2024-12-16 12:59:07.074014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.202 [2024-12-16 12:59:07.074047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.202 qpair failed and we were unable to recover it.
00:37:41.202 [2024-12-16 12:59:07.074248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.202 [2024-12-16 12:59:07.074283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.202 qpair failed and we were unable to recover it.
00:37:41.202 [2024-12-16 12:59:07.074466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.202 [2024-12-16 12:59:07.074498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.202 qpair failed and we were unable to recover it.
00:37:41.202 [2024-12-16 12:59:07.074690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.202 [2024-12-16 12:59:07.074723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.202 qpair failed and we were unable to recover it.
00:37:41.202 [2024-12-16 12:59:07.074910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.202 [2024-12-16 12:59:07.074942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.202 qpair failed and we were unable to recover it.
00:37:41.202 [2024-12-16 12:59:07.075146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.202 [2024-12-16 12:59:07.075180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.202 qpair failed and we were unable to recover it.
00:37:41.202 [2024-12-16 12:59:07.075430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.202 [2024-12-16 12:59:07.075463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.202 qpair failed and we were unable to recover it.
00:37:41.202 [2024-12-16 12:59:07.075564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.202 [2024-12-16 12:59:07.075597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.202 qpair failed and we were unable to recover it.
00:37:41.202 [2024-12-16 12:59:07.075804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.202 [2024-12-16 12:59:07.075836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.202 qpair failed and we were unable to recover it.
00:37:41.202 [2024-12-16 12:59:07.076128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.202 [2024-12-16 12:59:07.076161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.202 qpair failed and we were unable to recover it.
00:37:41.202 [2024-12-16 12:59:07.076287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.202 [2024-12-16 12:59:07.076320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.202 qpair failed and we were unable to recover it.
00:37:41.202 [2024-12-16 12:59:07.076584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.202 [2024-12-16 12:59:07.076616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.202 qpair failed and we were unable to recover it.
00:37:41.202 [2024-12-16 12:59:07.076808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.202 [2024-12-16 12:59:07.076840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.202 qpair failed and we were unable to recover it.
00:37:41.202 [2024-12-16 12:59:07.077027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.202 [2024-12-16 12:59:07.077058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.202 qpair failed and we were unable to recover it.
00:37:41.202 [2024-12-16 12:59:07.077316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.202 [2024-12-16 12:59:07.077350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.202 qpair failed and we were unable to recover it.
00:37:41.202 [2024-12-16 12:59:07.077542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.202 [2024-12-16 12:59:07.077575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.202 qpair failed and we were unable to recover it.
00:37:41.202 [2024-12-16 12:59:07.077825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.202 [2024-12-16 12:59:07.077857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.202 qpair failed and we were unable to recover it.
00:37:41.202 [2024-12-16 12:59:07.078052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.202 [2024-12-16 12:59:07.078085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.202 qpair failed and we were unable to recover it.
00:37:41.202 [2024-12-16 12:59:07.078363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.202 [2024-12-16 12:59:07.078440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420
00:37:41.202 qpair failed and we were unable to recover it.
00:37:41.202 [2024-12-16 12:59:07.078600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.202 [2024-12-16 12:59:07.078656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.202 qpair failed and we were unable to recover it.
00:37:41.202 [2024-12-16 12:59:07.078788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.202 [2024-12-16 12:59:07.078821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.202 qpair failed and we were unable to recover it.
00:37:41.202 [2024-12-16 12:59:07.079091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.202 [2024-12-16 12:59:07.079136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.202 qpair failed and we were unable to recover it.
00:37:41.202 [2024-12-16 12:59:07.079348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.202 [2024-12-16 12:59:07.079380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.202 qpair failed and we were unable to recover it.
00:37:41.202 [2024-12-16 12:59:07.079652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.202 [2024-12-16 12:59:07.079685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.202 qpair failed and we were unable to recover it.
00:37:41.202 [2024-12-16 12:59:07.079900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.202 [2024-12-16 12:59:07.079932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.202 qpair failed and we were unable to recover it.
00:37:41.202 [2024-12-16 12:59:07.080130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.202 [2024-12-16 12:59:07.080163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.202 qpair failed and we were unable to recover it.
00:37:41.202 [2024-12-16 12:59:07.080445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.202 [2024-12-16 12:59:07.080476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.202 qpair failed and we were unable to recover it.
00:37:41.202 [2024-12-16 12:59:07.080656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.202 [2024-12-16 12:59:07.080689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.202 qpair failed and we were unable to recover it.
00:37:41.202 [2024-12-16 12:59:07.080790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.202 [2024-12-16 12:59:07.080821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.202 qpair failed and we were unable to recover it.
00:37:41.202 [2024-12-16 12:59:07.081006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.202 [2024-12-16 12:59:07.081038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.202 qpair failed and we were unable to recover it.
00:37:41.202 [2024-12-16 12:59:07.081305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.202 [2024-12-16 12:59:07.081340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.202 qpair failed and we were unable to recover it.
00:37:41.202 [2024-12-16 12:59:07.081633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.202 [2024-12-16 12:59:07.081665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.202 qpair failed and we were unable to recover it.
00:37:41.202 [2024-12-16 12:59:07.081860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.202 [2024-12-16 12:59:07.081893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.202 qpair failed and we were unable to recover it.
00:37:41.202 [2024-12-16 12:59:07.082099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.202 [2024-12-16 12:59:07.082143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.202 qpair failed and we were unable to recover it.
00:37:41.202 [2024-12-16 12:59:07.082317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.202 [2024-12-16 12:59:07.082349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.202 qpair failed and we were unable to recover it.
00:37:41.202 [2024-12-16 12:59:07.082543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.202 [2024-12-16 12:59:07.082576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.202 qpair failed and we were unable to recover it.
00:37:41.202 [2024-12-16 12:59:07.082773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.202 [2024-12-16 12:59:07.082805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.203 qpair failed and we were unable to recover it.
00:37:41.203 [2024-12-16 12:59:07.082935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.203 [2024-12-16 12:59:07.082967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.203 qpair failed and we were unable to recover it.
00:37:41.203 [2024-12-16 12:59:07.083212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.203 [2024-12-16 12:59:07.083246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.203 qpair failed and we were unable to recover it.
00:37:41.203 [2024-12-16 12:59:07.083384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.203 [2024-12-16 12:59:07.083416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.203 qpair failed and we were unable to recover it.
00:37:41.203 [2024-12-16 12:59:07.083631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.203 [2024-12-16 12:59:07.083663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.203 qpair failed and we were unable to recover it.
00:37:41.203 [2024-12-16 12:59:07.083922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.203 [2024-12-16 12:59:07.083954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.203 qpair failed and we were unable to recover it.
00:37:41.203 [2024-12-16 12:59:07.084169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.203 [2024-12-16 12:59:07.084202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.203 qpair failed and we were unable to recover it.
00:37:41.203 [2024-12-16 12:59:07.084380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.203 [2024-12-16 12:59:07.084412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.203 qpair failed and we were unable to recover it.
00:37:41.203 [2024-12-16 12:59:07.084743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.203 [2024-12-16 12:59:07.084776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.203 qpair failed and we were unable to recover it.
00:37:41.203 [2024-12-16 12:59:07.085035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.203 [2024-12-16 12:59:07.085066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.203 qpair failed and we were unable to recover it.
00:37:41.203 [2024-12-16 12:59:07.085266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.203 [2024-12-16 12:59:07.085307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.203 qpair failed and we were unable to recover it.
00:37:41.203 [2024-12-16 12:59:07.085522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.203 [2024-12-16 12:59:07.085555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.203 qpair failed and we were unable to recover it.
00:37:41.203 [2024-12-16 12:59:07.085841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.203 [2024-12-16 12:59:07.085873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.203 qpair failed and we were unable to recover it.
00:37:41.203 [2024-12-16 12:59:07.086046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.203 [2024-12-16 12:59:07.086078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.203 qpair failed and we were unable to recover it.
00:37:41.203 [2024-12-16 12:59:07.086361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.203 [2024-12-16 12:59:07.086395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.203 qpair failed and we were unable to recover it.
00:37:41.203 [2024-12-16 12:59:07.086652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.203 [2024-12-16 12:59:07.086684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.203 qpair failed and we were unable to recover it.
00:37:41.203 [2024-12-16 12:59:07.086816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.203 [2024-12-16 12:59:07.086848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.203 qpair failed and we were unable to recover it.
00:37:41.203 [2024-12-16 12:59:07.087026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.203 [2024-12-16 12:59:07.087058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.203 qpair failed and we were unable to recover it.
00:37:41.203 [2024-12-16 12:59:07.087252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.203 [2024-12-16 12:59:07.087286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.203 qpair failed and we were unable to recover it.
00:37:41.203 [2024-12-16 12:59:07.087397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.203 [2024-12-16 12:59:07.087430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.203 qpair failed and we were unable to recover it.
00:37:41.203 [2024-12-16 12:59:07.087693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.203 [2024-12-16 12:59:07.087726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.203 qpair failed and we were unable to recover it.
00:37:41.203 [2024-12-16 12:59:07.087895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.203 [2024-12-16 12:59:07.087927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.203 qpair failed and we were unable to recover it.
00:37:41.203 [2024-12-16 12:59:07.088101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.203 [2024-12-16 12:59:07.088143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.203 qpair failed and we were unable to recover it.
00:37:41.203 [2024-12-16 12:59:07.088356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.203 [2024-12-16 12:59:07.088389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.203 qpair failed and we were unable to recover it.
00:37:41.203 [2024-12-16 12:59:07.088564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.203 [2024-12-16 12:59:07.088596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.203 qpair failed and we were unable to recover it.
00:37:41.203 [2024-12-16 12:59:07.088715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.203 [2024-12-16 12:59:07.088747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.203 qpair failed and we were unable to recover it.
00:37:41.203 [2024-12-16 12:59:07.089052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.203 [2024-12-16 12:59:07.089085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.203 qpair failed and we were unable to recover it.
00:37:41.203 [2024-12-16 12:59:07.089288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.203 [2024-12-16 12:59:07.089323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.203 qpair failed and we were unable to recover it.
00:37:41.203 [2024-12-16 12:59:07.089596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.203 [2024-12-16 12:59:07.089628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.203 qpair failed and we were unable to recover it.
00:37:41.203 [2024-12-16 12:59:07.089841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.203 [2024-12-16 12:59:07.089873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.203 qpair failed and we were unable to recover it.
00:37:41.203 [2024-12-16 12:59:07.089989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.203 [2024-12-16 12:59:07.090022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.203 qpair failed and we were unable to recover it.
00:37:41.203 [2024-12-16 12:59:07.090314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.203 [2024-12-16 12:59:07.090348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.203 qpair failed and we were unable to recover it.
00:37:41.203 [2024-12-16 12:59:07.090487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.203 [2024-12-16 12:59:07.090519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.203 qpair failed and we were unable to recover it.
00:37:41.203 [2024-12-16 12:59:07.090728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.203 [2024-12-16 12:59:07.090760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.203 qpair failed and we were unable to recover it.
00:37:41.203 [2024-12-16 12:59:07.090967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.203 [2024-12-16 12:59:07.090999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.203 qpair failed and we were unable to recover it.
00:37:41.203 [2024-12-16 12:59:07.091211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.203 [2024-12-16 12:59:07.091248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.203 qpair failed and we were unable to recover it.
00:37:41.203 [2024-12-16 12:59:07.091437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.203 [2024-12-16 12:59:07.091469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.204 qpair failed and we were unable to recover it.
00:37:41.204 [2024-12-16 12:59:07.091651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.204 [2024-12-16 12:59:07.091691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.204 qpair failed and we were unable to recover it.
00:37:41.204 [2024-12-16 12:59:07.091821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.204 [2024-12-16 12:59:07.091853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.204 qpair failed and we were unable to recover it.
00:37:41.204 [2024-12-16 12:59:07.092018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.204 [2024-12-16 12:59:07.092050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.204 qpair failed and we were unable to recover it.
00:37:41.204 [2024-12-16 12:59:07.092243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.204 [2024-12-16 12:59:07.092276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.204 qpair failed and we were unable to recover it.
00:37:41.204 [2024-12-16 12:59:07.092411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.204 [2024-12-16 12:59:07.092442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.204 qpair failed and we were unable to recover it.
00:37:41.204 [2024-12-16 12:59:07.092615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.204 [2024-12-16 12:59:07.092648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.204 qpair failed and we were unable to recover it.
00:37:41.204 [2024-12-16 12:59:07.092836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.204 [2024-12-16 12:59:07.092868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.204 qpair failed and we were unable to recover it.
00:37:41.204 [2024-12-16 12:59:07.093049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.204 [2024-12-16 12:59:07.093082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.204 qpair failed and we were unable to recover it.
00:37:41.204 [2024-12-16 12:59:07.093306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.204 [2024-12-16 12:59:07.093343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.204 qpair failed and we were unable to recover it.
00:37:41.204 [2024-12-16 12:59:07.093461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.204 [2024-12-16 12:59:07.093494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.204 qpair failed and we were unable to recover it.
00:37:41.204 [2024-12-16 12:59:07.093630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.204 [2024-12-16 12:59:07.093661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.204 qpair failed and we were unable to recover it.
00:37:41.204 [2024-12-16 12:59:07.093765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.204 [2024-12-16 12:59:07.093797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.204 qpair failed and we were unable to recover it.
00:37:41.204 [2024-12-16 12:59:07.093986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.204 [2024-12-16 12:59:07.094018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.204 qpair failed and we were unable to recover it.
00:37:41.204 [2024-12-16 12:59:07.094131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.204 [2024-12-16 12:59:07.094164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.204 qpair failed and we were unable to recover it.
00:37:41.204 [2024-12-16 12:59:07.094295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.204 [2024-12-16 12:59:07.094327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.204 qpair failed and we were unable to recover it.
00:37:41.204 [2024-12-16 12:59:07.094586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.204 [2024-12-16 12:59:07.094618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.204 qpair failed and we were unable to recover it.
00:37:41.204 [2024-12-16 12:59:07.094719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.204 [2024-12-16 12:59:07.094751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.204 qpair failed and we were unable to recover it.
00:37:41.204 [2024-12-16 12:59:07.094868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.204 [2024-12-16 12:59:07.094899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.204 qpair failed and we were unable to recover it.
00:37:41.204 [2024-12-16 12:59:07.095072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.204 [2024-12-16 12:59:07.095105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.204 qpair failed and we were unable to recover it.
00:37:41.204 [2024-12-16 12:59:07.095306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.204 [2024-12-16 12:59:07.095338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.204 qpair failed and we were unable to recover it.
00:37:41.204 [2024-12-16 12:59:07.095507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.204 [2024-12-16 12:59:07.095539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.204 qpair failed and we were unable to recover it.
00:37:41.204 [2024-12-16 12:59:07.095731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.204 [2024-12-16 12:59:07.095763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.204 qpair failed and we were unable to recover it.
00:37:41.204 [2024-12-16 12:59:07.096024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.204 [2024-12-16 12:59:07.096056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.204 qpair failed and we were unable to recover it.
00:37:41.204 [2024-12-16 12:59:07.096256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.204 [2024-12-16 12:59:07.096289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.204 qpair failed and we were unable to recover it.
00:37:41.204 [2024-12-16 12:59:07.096406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.204 [2024-12-16 12:59:07.096437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.204 qpair failed and we were unable to recover it.
00:37:41.204 [2024-12-16 12:59:07.096547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.204 [2024-12-16 12:59:07.096578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.204 qpair failed and we were unable to recover it.
00:37:41.204 [2024-12-16 12:59:07.096712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.204 [2024-12-16 12:59:07.096744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.204 qpair failed and we were unable to recover it.
00:37:41.204 [2024-12-16 12:59:07.097027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.204 [2024-12-16 12:59:07.097064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.204 qpair failed and we were unable to recover it.
00:37:41.204 [2024-12-16 12:59:07.097363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.204 [2024-12-16 12:59:07.097397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.204 qpair failed and we were unable to recover it.
00:37:41.204 [2024-12-16 12:59:07.097569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.204 [2024-12-16 12:59:07.097602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.204 qpair failed and we were unable to recover it.
00:37:41.204 [2024-12-16 12:59:07.097804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.204 [2024-12-16 12:59:07.097836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.204 qpair failed and we were unable to recover it.
00:37:41.204 [2024-12-16 12:59:07.098005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.204 [2024-12-16 12:59:07.098037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.204 qpair failed and we were unable to recover it.
00:37:41.204 [2024-12-16 12:59:07.098163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.204 [2024-12-16 12:59:07.098198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.204 qpair failed and we were unable to recover it.
00:37:41.204 [2024-12-16 12:59:07.098319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.204 [2024-12-16 12:59:07.098351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.204 qpair failed and we were unable to recover it.
00:37:41.205 [2024-12-16 12:59:07.098476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.205 [2024-12-16 12:59:07.098508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.205 qpair failed and we were unable to recover it.
00:37:41.205 [2024-12-16 12:59:07.098643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.205 [2024-12-16 12:59:07.098675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.205 qpair failed and we were unable to recover it.
00:37:41.205 [2024-12-16 12:59:07.098930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.205 [2024-12-16 12:59:07.098962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.205 qpair failed and we were unable to recover it. 00:37:41.205 [2024-12-16 12:59:07.099075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.205 [2024-12-16 12:59:07.099106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.205 qpair failed and we were unable to recover it. 00:37:41.205 [2024-12-16 12:59:07.099298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.205 [2024-12-16 12:59:07.099330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.205 qpair failed and we were unable to recover it. 00:37:41.205 [2024-12-16 12:59:07.099545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.205 [2024-12-16 12:59:07.099577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.205 qpair failed and we were unable to recover it. 00:37:41.205 [2024-12-16 12:59:07.099763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.205 [2024-12-16 12:59:07.099795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.205 qpair failed and we were unable to recover it. 00:37:41.205 [2024-12-16 12:59:07.099975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.205 [2024-12-16 12:59:07.100009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.205 qpair failed and we were unable to recover it. 00:37:41.205 [2024-12-16 12:59:07.100213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.205 [2024-12-16 12:59:07.100246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.205 qpair failed and we were unable to recover it. 00:37:41.205 [2024-12-16 12:59:07.100533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.205 [2024-12-16 12:59:07.100566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.205 qpair failed and we were unable to recover it. 00:37:41.205 [2024-12-16 12:59:07.100702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.205 [2024-12-16 12:59:07.100734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.205 qpair failed and we were unable to recover it. 00:37:41.205 [2024-12-16 12:59:07.100837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.205 [2024-12-16 12:59:07.100869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.205 qpair failed and we were unable to recover it. 
00:37:41.205 [2024-12-16 12:59:07.100985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.205 [2024-12-16 12:59:07.101017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.205 qpair failed and we were unable to recover it. 00:37:41.205 [2024-12-16 12:59:07.101287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.205 [2024-12-16 12:59:07.101321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.205 qpair failed and we were unable to recover it. 00:37:41.205 [2024-12-16 12:59:07.101445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.205 [2024-12-16 12:59:07.101477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.205 qpair failed and we were unable to recover it. 00:37:41.205 [2024-12-16 12:59:07.101653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.205 [2024-12-16 12:59:07.101685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.205 qpair failed and we were unable to recover it. 00:37:41.205 [2024-12-16 12:59:07.101863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.205 [2024-12-16 12:59:07.101895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.205 qpair failed and we were unable to recover it. 00:37:41.205 [2024-12-16 12:59:07.102013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.205 [2024-12-16 12:59:07.102045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.205 qpair failed and we were unable to recover it. 00:37:41.205 [2024-12-16 12:59:07.102312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.205 [2024-12-16 12:59:07.102346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.205 qpair failed and we were unable to recover it. 00:37:41.205 [2024-12-16 12:59:07.102528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.205 [2024-12-16 12:59:07.102561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.205 qpair failed and we were unable to recover it. 00:37:41.205 [2024-12-16 12:59:07.102743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.205 [2024-12-16 12:59:07.102781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.205 qpair failed and we were unable to recover it. 00:37:41.205 [2024-12-16 12:59:07.103063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.205 [2024-12-16 12:59:07.103095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.205 qpair failed and we were unable to recover it. 
00:37:41.205 [2024-12-16 12:59:07.103226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.205 [2024-12-16 12:59:07.103260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.205 qpair failed and we were unable to recover it. 00:37:41.205 [2024-12-16 12:59:07.103447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.205 [2024-12-16 12:59:07.103478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.205 qpair failed and we were unable to recover it. 00:37:41.205 [2024-12-16 12:59:07.103668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.205 [2024-12-16 12:59:07.103700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.205 qpair failed and we were unable to recover it. 00:37:41.205 [2024-12-16 12:59:07.103893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.205 [2024-12-16 12:59:07.103925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.205 qpair failed and we were unable to recover it. 00:37:41.205 [2024-12-16 12:59:07.104110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.205 [2024-12-16 12:59:07.104149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.205 qpair failed and we were unable to recover it. 00:37:41.205 [2024-12-16 12:59:07.104331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.205 [2024-12-16 12:59:07.104363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.205 qpair failed and we were unable to recover it. 00:37:41.205 [2024-12-16 12:59:07.104537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.205 [2024-12-16 12:59:07.104569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.205 qpair failed and we were unable to recover it. 00:37:41.205 [2024-12-16 12:59:07.104680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.205 [2024-12-16 12:59:07.104712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.205 qpair failed and we were unable to recover it. 00:37:41.205 [2024-12-16 12:59:07.104951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.205 [2024-12-16 12:59:07.104984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.205 qpair failed and we were unable to recover it. 00:37:41.205 [2024-12-16 12:59:07.105105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.205 [2024-12-16 12:59:07.105160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.205 qpair failed and we were unable to recover it. 
00:37:41.205 [2024-12-16 12:59:07.105267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.205 [2024-12-16 12:59:07.105299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.205 qpair failed and we were unable to recover it. 00:37:41.205 [2024-12-16 12:59:07.105567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.206 [2024-12-16 12:59:07.105598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.206 qpair failed and we were unable to recover it. 00:37:41.206 [2024-12-16 12:59:07.105869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.206 [2024-12-16 12:59:07.105902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.206 qpair failed and we were unable to recover it. 00:37:41.206 [2024-12-16 12:59:07.106110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.206 [2024-12-16 12:59:07.106154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.206 qpair failed and we were unable to recover it. 00:37:41.206 [2024-12-16 12:59:07.106277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.206 [2024-12-16 12:59:07.106309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.206 qpair failed and we were unable to recover it. 00:37:41.206 [2024-12-16 12:59:07.106567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.206 [2024-12-16 12:59:07.106599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.206 qpair failed and we were unable to recover it. 00:37:41.206 [2024-12-16 12:59:07.106787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.206 [2024-12-16 12:59:07.106819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.206 qpair failed and we were unable to recover it. 00:37:41.206 [2024-12-16 12:59:07.106944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.206 [2024-12-16 12:59:07.106976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.206 qpair failed and we were unable to recover it. 00:37:41.206 [2024-12-16 12:59:07.107089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.206 [2024-12-16 12:59:07.107131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.206 qpair failed and we were unable to recover it. 00:37:41.206 [2024-12-16 12:59:07.107390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.206 [2024-12-16 12:59:07.107423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.206 qpair failed and we were unable to recover it. 
00:37:41.206 [2024-12-16 12:59:07.107663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.206 [2024-12-16 12:59:07.107695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.206 qpair failed and we were unable to recover it. 00:37:41.206 [2024-12-16 12:59:07.107885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.206 [2024-12-16 12:59:07.107917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.206 qpair failed and we were unable to recover it. 00:37:41.206 [2024-12-16 12:59:07.108023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.206 [2024-12-16 12:59:07.108055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.206 qpair failed and we were unable to recover it. 00:37:41.206 [2024-12-16 12:59:07.108187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.206 [2024-12-16 12:59:07.108220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.206 qpair failed and we were unable to recover it. 00:37:41.206 [2024-12-16 12:59:07.108335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.206 [2024-12-16 12:59:07.108367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.206 qpair failed and we were unable to recover it. 00:37:41.206 [2024-12-16 12:59:07.108495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.206 [2024-12-16 12:59:07.108527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.206 qpair failed and we were unable to recover it. 00:37:41.206 [2024-12-16 12:59:07.108724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.206 [2024-12-16 12:59:07.108755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.206 qpair failed and we were unable to recover it. 00:37:41.206 [2024-12-16 12:59:07.108870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.206 [2024-12-16 12:59:07.108902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.206 qpair failed and we were unable to recover it. 00:37:41.206 [2024-12-16 12:59:07.109091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.206 [2024-12-16 12:59:07.109135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.206 qpair failed and we were unable to recover it. 00:37:41.206 [2024-12-16 12:59:07.109312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.206 [2024-12-16 12:59:07.109345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.206 qpair failed and we were unable to recover it. 
00:37:41.206 [2024-12-16 12:59:07.109457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.206 [2024-12-16 12:59:07.109489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.206 qpair failed and we were unable to recover it. 00:37:41.206 [2024-12-16 12:59:07.109682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.206 [2024-12-16 12:59:07.109713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.206 qpair failed and we were unable to recover it. 00:37:41.206 [2024-12-16 12:59:07.109920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.206 [2024-12-16 12:59:07.109952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.206 qpair failed and we were unable to recover it. 00:37:41.206 [2024-12-16 12:59:07.110096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.206 [2024-12-16 12:59:07.110139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.206 qpair failed and we were unable to recover it. 00:37:41.206 [2024-12-16 12:59:07.110325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.206 [2024-12-16 12:59:07.110357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.206 qpair failed and we were unable to recover it. 00:37:41.206 [2024-12-16 12:59:07.110633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.206 [2024-12-16 12:59:07.110666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.206 qpair failed and we were unable to recover it. 00:37:41.206 [2024-12-16 12:59:07.110860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.206 [2024-12-16 12:59:07.110892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.206 qpair failed and we were unable to recover it. 00:37:41.206 [2024-12-16 12:59:07.111071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.206 [2024-12-16 12:59:07.111103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.206 qpair failed and we were unable to recover it. 00:37:41.206 [2024-12-16 12:59:07.111253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.206 [2024-12-16 12:59:07.111286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420 00:37:41.206 qpair failed and we were unable to recover it. 00:37:41.206 [2024-12-16 12:59:07.111475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.206 [2024-12-16 12:59:07.111518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.206 qpair failed and we were unable to recover it. 
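A note for readers triaging this run: errno = 111 on Linux is ECONNREFUSED, i.e. the TCP connection attempt to 10.0.0.2:4420 (4420 is the default NVMe/TCP port) was actively refused because nothing was listening, which is consistent with the target side being down during the nvmf_target_disconnect_tc2 case traced below. The tqpair value changing from 0x1588110 to 0x7f84e4000b90 only means later attempts go through a different qpair object; the failure itself is identical. Below is a minimal standalone sketch, not SPDK code, that reproduces the same errno (the address and port are copied from the log; any endpoint with no listener behaves the same):

/* connect_refused.c -- hedged sketch, NOT SPDK code: reproduce the
 * "connect() failed, errno = 111" seen in this log. On Linux, errno 111
 * is ECONNREFUSED: the peer host is reachable but no listener is bound
 * to the destination port, so the TCP SYN is answered with a RST. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(4420),          /* default NVMe/TCP port, as in the log */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);   /* target address from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}

The specific errno is itself diagnostic: a refused connection (111) means the host answered and only the listener was gone, whereas an unreachable or firewalled host would surface as EHOSTUNREACH or ETIMEDOUT instead.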
[... the same triplet for tqpair=0x7f84e4000b90 repeats with advancing timestamps from 12:59:07.111722 through 12:59:07.125698 ...]
00:37:41.208 [2024-12-16 12:59:07.125965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.208 [2024-12-16 12:59:07.125996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.208 qpair failed and we were unable to recover it.
[... two more identical triplets (12:59:07.126250 through 12:59:07.126482) ...]
00:37:41.208 12:59:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:37:41.208 12:59:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0
00:37:41.208 12:59:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt
00:37:41.208 12:59:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:37:41.209 12:59:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... connect()/qpair failure triplets for tqpair=0x7f84e4000b90, originally interleaved with the trace lines above, continue with advancing timestamps from 12:59:07.126613 through 12:59:07.138090 ...]
00:37:41.210 [2024-12-16 12:59:07.138304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.210 [2024-12-16 12:59:07.138336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.210 qpair failed and we were unable to recover it. 00:37:41.210 [2024-12-16 12:59:07.138594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.210 [2024-12-16 12:59:07.138627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.210 qpair failed and we were unable to recover it. 00:37:41.210 [2024-12-16 12:59:07.138741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.210 [2024-12-16 12:59:07.138774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.210 qpair failed and we were unable to recover it. 00:37:41.210 [2024-12-16 12:59:07.138899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.210 [2024-12-16 12:59:07.138936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.210 qpair failed and we were unable to recover it. 00:37:41.210 [2024-12-16 12:59:07.139111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.210 [2024-12-16 12:59:07.139153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.210 qpair failed and we were unable to recover it. 00:37:41.210 [2024-12-16 12:59:07.139329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.210 [2024-12-16 12:59:07.139360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.210 qpair failed and we were unable to recover it. 00:37:41.210 [2024-12-16 12:59:07.139553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.210 [2024-12-16 12:59:07.139585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.210 qpair failed and we were unable to recover it. 00:37:41.210 [2024-12-16 12:59:07.139780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.210 [2024-12-16 12:59:07.139812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.210 qpair failed and we were unable to recover it. 00:37:41.210 [2024-12-16 12:59:07.139922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.210 [2024-12-16 12:59:07.139954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.210 qpair failed and we were unable to recover it. 00:37:41.210 [2024-12-16 12:59:07.140136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.210 [2024-12-16 12:59:07.140168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.210 qpair failed and we were unable to recover it. 
00:37:41.210 [2024-12-16 12:59:07.140337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.210 [2024-12-16 12:59:07.140369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.210 qpair failed and we were unable to recover it. 00:37:41.210 [2024-12-16 12:59:07.140610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.210 [2024-12-16 12:59:07.140643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.210 qpair failed and we were unable to recover it. 00:37:41.210 [2024-12-16 12:59:07.140812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.210 [2024-12-16 12:59:07.140845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.210 qpair failed and we were unable to recover it. 00:37:41.210 [2024-12-16 12:59:07.141032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.210 [2024-12-16 12:59:07.141063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.210 qpair failed and we were unable to recover it. 00:37:41.210 [2024-12-16 12:59:07.141343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.210 [2024-12-16 12:59:07.141376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.210 qpair failed and we were unable to recover it. 00:37:41.210 [2024-12-16 12:59:07.141615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.210 [2024-12-16 12:59:07.141647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.210 qpair failed and we were unable to recover it. 00:37:41.210 [2024-12-16 12:59:07.141848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.210 [2024-12-16 12:59:07.141880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.210 qpair failed and we were unable to recover it. 00:37:41.210 [2024-12-16 12:59:07.142080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.210 [2024-12-16 12:59:07.142112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.210 qpair failed and we were unable to recover it. 00:37:41.210 [2024-12-16 12:59:07.142226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.210 [2024-12-16 12:59:07.142259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.211 qpair failed and we were unable to recover it. 00:37:41.211 [2024-12-16 12:59:07.142442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.211 [2024-12-16 12:59:07.142473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.211 qpair failed and we were unable to recover it. 
00:37:41.211 [2024-12-16 12:59:07.142587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.211 [2024-12-16 12:59:07.142618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.211 qpair failed and we were unable to recover it. 00:37:41.211 [2024-12-16 12:59:07.142719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.211 [2024-12-16 12:59:07.142752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.211 qpair failed and we were unable to recover it. 00:37:41.211 [2024-12-16 12:59:07.142851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.211 [2024-12-16 12:59:07.142883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.211 qpair failed and we were unable to recover it. 00:37:41.211 [2024-12-16 12:59:07.143008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.211 [2024-12-16 12:59:07.143040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.211 qpair failed and we were unable to recover it. 00:37:41.211 [2024-12-16 12:59:07.143214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.211 [2024-12-16 12:59:07.143247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.211 qpair failed and we were unable to recover it. 00:37:41.211 [2024-12-16 12:59:07.143420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.211 [2024-12-16 12:59:07.143453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.211 qpair failed and we were unable to recover it. 00:37:41.211 [2024-12-16 12:59:07.143559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.211 [2024-12-16 12:59:07.143591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.211 qpair failed and we were unable to recover it. 00:37:41.211 [2024-12-16 12:59:07.143762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.211 [2024-12-16 12:59:07.143793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.211 qpair failed and we were unable to recover it. 00:37:41.211 [2024-12-16 12:59:07.143973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.211 [2024-12-16 12:59:07.144005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.211 qpair failed and we were unable to recover it. 00:37:41.211 [2024-12-16 12:59:07.144217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.211 [2024-12-16 12:59:07.144251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420 00:37:41.211 qpair failed and we were unable to recover it. 
00:37:41.211 [2024-12-16 12:59:07.144485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.211 [2024-12-16 12:59:07.144537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420
00:37:41.211 qpair failed and we were unable to recover it.
00:37:41.211 [... the same triplet repeats against tqpair=0x7f84ec000b90 from 12:59:07.144 through 12:59:07.151 ...]
00:37:41.212 [2024-12-16 12:59:07.152131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.212 [2024-12-16 12:59:07.152175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.212 qpair failed and we were unable to recover it.
00:37:41.212 [... the same triplet repeats against tqpair=0x1588110 from 12:59:07.152 through 12:59:07.162 ...]
00:37:41.213 [2024-12-16 12:59:07.162602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.213 [2024-12-16 12:59:07.162634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.214 qpair failed and we were unable to recover it.
00:37:41.214 [... the triplet keeps repeating against tqpair=0x1588110 through 12:59:07.164, interleaved with the following xtrace output from the test harness ...]
00:37:41.214 12:59:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:37:41.214 12:59:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:37:41.214 12:59:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:41.214 12:59:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
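The xtrace lines record the test script's next steps, issued while the qpair retries continue: the harness installs a cleanup trap (dump the app's shared memory, then run nvmftestfini) for SIGINT/SIGTERM/EXIT, then calls rpc_cmd bdev_malloc_create 64 512 -b Malloc0. As I read SPDK's bdev_malloc_create RPC, that asks the target for a RAM-backed bdev named Malloc0, 64 MB in size with 512-byte blocks, for the target to export over NVMe/TCP; rpc_cmd in the test scripts is a thin wrapper around scripts/rpc.py, so the equivalent manual invocation would be scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0. xtrace_disable / set +x merely silence shell tracing inside the helper.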
00:37:41.214 [2024-12-16 12:59:07.168718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.214 [2024-12-16 12:59:07.168758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84ec000b90 with addr=10.0.0.2, port=4420
00:37:41.214 qpair failed and we were unable to recover it.
00:37:41.216 Malloc0
00:37:41.217 12:59:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:41.217 12:59:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:37:41.217 12:59:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:41.217 12:59:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:37:41.217 [2024-12-16 12:59:07.188403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.217 [2024-12-16 12:59:07.188459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420
00:37:41.217 qpair failed and we were unable to recover it.
00:37:41.217 [2024-12-16 12:59:07.189358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.218 [2024-12-16 12:59:07.189386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e0000b90 with addr=10.0.0.2, port=4420
[2024-12-16 12:59:07.189383] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:37:41.218 qpair failed and we were unable to recover it.
00:37:41.218 [2024-12-16 12:59:07.191312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.218 [2024-12-16 12:59:07.191368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84e4000b90 with addr=10.0.0.2, port=4420
00:37:41.218 qpair failed and we were unable to recover it.
00:37:41.219 [... connect()/qpair-failure triplets continue for tqpair=0x7f84ec000b90, 12:59:07.197222 through 12:59:07.198066 ...]
00:37:41.219 12:59:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:41.219 [... triplets at 12:59:07.198198 and 12:59:07.198352 ...]
00:37:41.219 12:59:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:37:41.219 [... triplet at 12:59:07.198650 ...]
00:37:41.219 12:59:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:41.219 [... triplets at 12:59:07.198854 and 12:59:07.199060 ...]
00:37:41.219 12:59:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:37:41.219 [2024-12-16 12:59:07.199305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.219 [2024-12-16 12:59:07.199341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1588110 with addr=10.0.0.2, port=4420
00:37:41.219 qpair failed and we were unable to recover it.
00:37:41.219 [... from here the failing qpair is tqpair=0x1588110; identical triplets continue, 12:59:07.199532 through 12:59:07.200603 ...]
00:37:41.219 [... connect()/qpair-failure triplets for tqpair=0x1588110 continue, 12:59:07.200714 through 12:59:07.204725 ...]
00:37:41.220 [... connect()/qpair-failure triplets for tqpair=0x1588110 continue, 12:59:07.204908 through 12:59:07.206062 ...]
00:37:41.220 12:59:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:41.220 [... triplet at 12:59:07.206282 ...]
00:37:41.220 12:59:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:37:41.220 [... triplets at 12:59:07.206564 and 12:59:07.206710 ...]
00:37:41.220 12:59:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:41.220 [... triplet at 12:59:07.206928; a further triplet beginning at 12:59:07.207085 has the next trace line interleaved between its records ...]
00:37:41.220 12:59:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:37:41.220 [... triplets continue through 12:59:07.208790 ...]
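The rpc_cmd lines in the trace are the autotest wrapper around SPDK's scripts/rpc.py. The equivalent standalone invocations for the two steps above would look roughly like this (a sketch: it assumes a running nvmf target app and a Malloc0 bdev created beforehand, e.g. with bdev_malloc_create):

  $ ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host, -s: serial number
  $ ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # expose bdev Malloc0 as a namespace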
00:37:41.220 [... connect()/qpair-failure triplets for tqpair=0x1588110 continue, 12:59:07.208981 through 12:59:07.212932 ...]
00:37:41.221 [... connect()/qpair-failure triplets for tqpair=0x1588110 continue, 12:59:07.213129 through 12:59:07.213953 ...]
00:37:41.221 12:59:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:41.221 [... triplet at 12:59:07.214226 ...]
00:37:41.221 12:59:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:37:41.221 [... triplets at 12:59:07.214550 and 12:59:07.214697 ...]
00:37:41.221 12:59:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:41.221 [... triplet at 12:59:07.214908; a further triplet beginning at 12:59:07.215087 has the next trace line interleaved between its records ...]
00:37:41.221 12:59:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:37:41.221 [... triplets continue, 12:59:07.215320 through 12:59:07.216709 ...]
00:37:41.221 [... final connect()/qpair-failure triplets for tqpair=0x1588110, 12:59:07.216929 through 12:59:07.217441 ...]
00:37:41.482 [2024-12-16 12:59:07.217614] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
[2024-12-16 12:59:07.220006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
[2024-12-16 12:59:07.220124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
[2024-12-16 12:59:07.220167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
[2024-12-16 12:59:07.220189] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
[2024-12-16 12:59:07.220209] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
[2024-12-16 12:59:07.220257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
qpair failed and we were unable to recover it.
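Decoding the CONNECT failure: sct 1 is the command-specific status type, and for a Fabrics CONNECT command sc 130 (0x82) is "Connect Invalid Parameters". That lines up with the target-side "Unknown controller ID 0x1": the host is re-issuing CONNECT for I/O qpair id 3 with a controller ID the just re-created subsystem no longer recognizes. A comparable host-side connect (a sketch, assuming nvme-cli is installed) would be:

  $ nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1   # -t transport, -a traddr, -s trsvcid, -n subsystem NQN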
00:37:41.482 12:59:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
12:59:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
12:59:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
12:59:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[2024-12-16 12:59:07.229988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
[2024-12-16 12:59:07.230078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
[2024-12-16 12:59:07.230122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
[2024-12-16 12:59:07.230143] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
[2024-12-16 12:59:07.230162] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
[2024-12-16 12:59:07.230202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
qpair failed and we were unable to recover it.
00:37:41.482 12:59:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
12:59:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 584953
[2024-12-16 12:59:07.239967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
[2024-12-16 12:59:07.240049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
[2024-12-16 12:59:07.240073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
[2024-12-16 12:59:07.240085] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
[2024-12-16 12:59:07.240097] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
[2024-12-16 12:59:07.240127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
qpair failed and we were unable to recover it.
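With the discovery listener added at host/target_disconnect.sh@26, the same portal also answers discovery requests, and "wait 584953" then blocks on a background process (PID 584953, presumably the host-side reconnect loop started earlier in the script) while the CONNECT retries below keep failing. Querying the portal from a host would look roughly like this (a sketch, assuming nvme-cli):

  $ nvme discover -t tcp -a 10.0.0.2 -s 4420   # should list nqn.2016-06.io.spdk:cnode1 behind the discovery service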
00:37:41.482 [... the same "Unknown controller ID 0x1" / Fabric CONNECT-failure block (Connect command failed rc -5, sct 1, sc 130; CQ transport error -6 on qpair id 3; tqpair=0x1588110) repeats at roughly 10 ms intervals, 12:59:07.249976 through 12:59:07.480557, each repetition ending "qpair failed and we were unable to recover it." ...]
00:37:41.484 [2024-12-16 12:59:07.490571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:41.484 [2024-12-16 12:59:07.490626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:41.484 [2024-12-16 12:59:07.490639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:41.484 [2024-12-16 12:59:07.490645] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:41.484 [2024-12-16 12:59:07.490651] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:41.484 [2024-12-16 12:59:07.490665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:41.484 qpair failed and we were unable to recover it. 00:37:41.484 [2024-12-16 12:59:07.500594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:41.484 [2024-12-16 12:59:07.500647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:41.484 [2024-12-16 12:59:07.500664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:41.484 [2024-12-16 12:59:07.500671] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:41.484 [2024-12-16 12:59:07.500677] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:41.484 [2024-12-16 12:59:07.500692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:41.484 qpair failed and we were unable to recover it. 00:37:41.484 [2024-12-16 12:59:07.510621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:41.484 [2024-12-16 12:59:07.510671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:41.484 [2024-12-16 12:59:07.510684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:41.484 [2024-12-16 12:59:07.510690] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:41.484 [2024-12-16 12:59:07.510697] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:41.484 [2024-12-16 12:59:07.510711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:41.484 qpair failed and we were unable to recover it. 
00:37:41.484 [2024-12-16 12:59:07.520715] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:41.484 [2024-12-16 12:59:07.520768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:41.484 [2024-12-16 12:59:07.520781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:41.484 [2024-12-16 12:59:07.520787] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:41.484 [2024-12-16 12:59:07.520793] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:41.484 [2024-12-16 12:59:07.520807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:41.484 qpair failed and we were unable to recover it. 00:37:41.484 [2024-12-16 12:59:07.530670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:41.484 [2024-12-16 12:59:07.530727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:41.484 [2024-12-16 12:59:07.530740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:41.484 [2024-12-16 12:59:07.530747] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:41.484 [2024-12-16 12:59:07.530753] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:41.484 [2024-12-16 12:59:07.530766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:41.484 qpair failed and we were unable to recover it. 00:37:41.484 [2024-12-16 12:59:07.540698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:41.484 [2024-12-16 12:59:07.540756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:41.484 [2024-12-16 12:59:07.540769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:41.484 [2024-12-16 12:59:07.540776] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:41.484 [2024-12-16 12:59:07.540781] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:41.484 [2024-12-16 12:59:07.540795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:41.484 qpair failed and we were unable to recover it. 
00:37:41.746 [2024-12-16 12:59:07.550747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:41.746 [2024-12-16 12:59:07.550821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:41.746 [2024-12-16 12:59:07.550835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:41.746 [2024-12-16 12:59:07.550841] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:41.746 [2024-12-16 12:59:07.550847] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:41.746 [2024-12-16 12:59:07.550861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:41.746 qpair failed and we were unable to recover it. 00:37:41.746 [2024-12-16 12:59:07.560716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:41.746 [2024-12-16 12:59:07.560776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:41.746 [2024-12-16 12:59:07.560789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:41.746 [2024-12-16 12:59:07.560795] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:41.746 [2024-12-16 12:59:07.560804] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:41.746 [2024-12-16 12:59:07.560818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:41.746 qpair failed and we were unable to recover it. 00:37:41.746 [2024-12-16 12:59:07.570780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:41.746 [2024-12-16 12:59:07.570834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:41.746 [2024-12-16 12:59:07.570847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:41.746 [2024-12-16 12:59:07.570853] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:41.746 [2024-12-16 12:59:07.570859] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:41.746 [2024-12-16 12:59:07.570872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:41.746 qpair failed and we were unable to recover it. 
00:37:41.746 [2024-12-16 12:59:07.580804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:41.746 [2024-12-16 12:59:07.580858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:41.746 [2024-12-16 12:59:07.580871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:41.746 [2024-12-16 12:59:07.580877] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:41.746 [2024-12-16 12:59:07.580883] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:41.746 [2024-12-16 12:59:07.580896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:41.746 qpair failed and we were unable to recover it. 00:37:41.746 [2024-12-16 12:59:07.590839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:41.746 [2024-12-16 12:59:07.590894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:41.746 [2024-12-16 12:59:07.590907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:41.746 [2024-12-16 12:59:07.590913] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:41.746 [2024-12-16 12:59:07.590919] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:41.746 [2024-12-16 12:59:07.590932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:41.746 qpair failed and we were unable to recover it. 00:37:41.746 [2024-12-16 12:59:07.600861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:41.746 [2024-12-16 12:59:07.600921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:41.746 [2024-12-16 12:59:07.600935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:41.746 [2024-12-16 12:59:07.600941] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:41.746 [2024-12-16 12:59:07.600947] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:41.746 [2024-12-16 12:59:07.600961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:41.746 qpair failed and we were unable to recover it. 
00:37:41.746 [2024-12-16 12:59:07.610904] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:41.746 [2024-12-16 12:59:07.610972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:41.746 [2024-12-16 12:59:07.610986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:41.746 [2024-12-16 12:59:07.610992] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:41.746 [2024-12-16 12:59:07.610998] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:41.746 [2024-12-16 12:59:07.611011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:41.746 qpair failed and we were unable to recover it. 00:37:41.746 [2024-12-16 12:59:07.620929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:41.746 [2024-12-16 12:59:07.620984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:41.746 [2024-12-16 12:59:07.620998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:41.746 [2024-12-16 12:59:07.621004] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:41.746 [2024-12-16 12:59:07.621011] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:41.746 [2024-12-16 12:59:07.621024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:41.746 qpair failed and we were unable to recover it. 00:37:41.746 [2024-12-16 12:59:07.630949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:41.746 [2024-12-16 12:59:07.630996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:41.746 [2024-12-16 12:59:07.631009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:41.746 [2024-12-16 12:59:07.631015] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:41.746 [2024-12-16 12:59:07.631020] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:41.746 [2024-12-16 12:59:07.631034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:41.746 qpair failed and we were unable to recover it. 
00:37:41.746 [2024-12-16 12:59:07.640989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:41.746 [2024-12-16 12:59:07.641042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:41.746 [2024-12-16 12:59:07.641056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:41.746 [2024-12-16 12:59:07.641062] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:41.746 [2024-12-16 12:59:07.641069] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:41.746 [2024-12-16 12:59:07.641082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:41.746 qpair failed and we were unable to recover it. 00:37:41.746 [2024-12-16 12:59:07.651019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:41.746 [2024-12-16 12:59:07.651081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:41.746 [2024-12-16 12:59:07.651094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:41.746 [2024-12-16 12:59:07.651104] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:41.746 [2024-12-16 12:59:07.651110] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:41.746 [2024-12-16 12:59:07.651128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:41.746 qpair failed and we were unable to recover it. 00:37:41.746 [2024-12-16 12:59:07.661038] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:41.747 [2024-12-16 12:59:07.661092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:41.747 [2024-12-16 12:59:07.661105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:41.747 [2024-12-16 12:59:07.661111] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:41.747 [2024-12-16 12:59:07.661121] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:41.747 [2024-12-16 12:59:07.661135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:41.747 qpair failed and we were unable to recover it. 
00:37:41.747 [2024-12-16 12:59:07.671067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:41.747 [2024-12-16 12:59:07.671121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:41.747 [2024-12-16 12:59:07.671134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:41.747 [2024-12-16 12:59:07.671141] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:41.747 [2024-12-16 12:59:07.671147] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:41.747 [2024-12-16 12:59:07.671160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:41.747 qpair failed and we were unable to recover it. 00:37:41.747 [2024-12-16 12:59:07.681083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:41.747 [2024-12-16 12:59:07.681134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:41.747 [2024-12-16 12:59:07.681147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:41.747 [2024-12-16 12:59:07.681153] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:41.747 [2024-12-16 12:59:07.681159] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:41.747 [2024-12-16 12:59:07.681172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:41.747 qpair failed and we were unable to recover it. 00:37:41.747 [2024-12-16 12:59:07.691147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:41.747 [2024-12-16 12:59:07.691203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:41.747 [2024-12-16 12:59:07.691215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:41.747 [2024-12-16 12:59:07.691222] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:41.747 [2024-12-16 12:59:07.691228] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:41.747 [2024-12-16 12:59:07.691242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:41.747 qpair failed and we were unable to recover it. 
00:37:41.747 [2024-12-16 12:59:07.701193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:41.747 [2024-12-16 12:59:07.701299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:41.747 [2024-12-16 12:59:07.701313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:41.747 [2024-12-16 12:59:07.701319] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:41.747 [2024-12-16 12:59:07.701325] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:41.747 [2024-12-16 12:59:07.701339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:41.747 qpair failed and we were unable to recover it. 00:37:41.747 [2024-12-16 12:59:07.711173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:41.747 [2024-12-16 12:59:07.711243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:41.747 [2024-12-16 12:59:07.711256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:41.747 [2024-12-16 12:59:07.711262] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:41.747 [2024-12-16 12:59:07.711268] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:41.747 [2024-12-16 12:59:07.711282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:41.747 qpair failed and we were unable to recover it. 00:37:41.747 [2024-12-16 12:59:07.721219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:41.747 [2024-12-16 12:59:07.721269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:41.747 [2024-12-16 12:59:07.721283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:41.747 [2024-12-16 12:59:07.721289] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:41.747 [2024-12-16 12:59:07.721295] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:41.747 [2024-12-16 12:59:07.721310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:41.747 qpair failed and we were unable to recover it. 
00:37:41.747 [2024-12-16 12:59:07.731240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:41.747 [2024-12-16 12:59:07.731296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:41.747 [2024-12-16 12:59:07.731309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:41.747 [2024-12-16 12:59:07.731315] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:41.747 [2024-12-16 12:59:07.731321] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:41.747 [2024-12-16 12:59:07.731335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:41.747 qpair failed and we were unable to recover it. 00:37:41.747 [2024-12-16 12:59:07.741277] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:41.747 [2024-12-16 12:59:07.741329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:41.747 [2024-12-16 12:59:07.741342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:41.747 [2024-12-16 12:59:07.741351] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:41.747 [2024-12-16 12:59:07.741357] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:41.747 [2024-12-16 12:59:07.741371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:41.747 qpair failed and we were unable to recover it. 00:37:41.747 [2024-12-16 12:59:07.751306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:41.747 [2024-12-16 12:59:07.751359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:41.747 [2024-12-16 12:59:07.751372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:41.747 [2024-12-16 12:59:07.751378] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:41.747 [2024-12-16 12:59:07.751384] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:41.747 [2024-12-16 12:59:07.751398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:41.747 qpair failed and we were unable to recover it. 
00:37:41.747 [2024-12-16 12:59:07.761341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:41.747 [2024-12-16 12:59:07.761392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:41.747 [2024-12-16 12:59:07.761405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:41.747 [2024-12-16 12:59:07.761412] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:41.747 [2024-12-16 12:59:07.761418] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:41.747 [2024-12-16 12:59:07.761431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:41.747 qpair failed and we were unable to recover it. 00:37:41.747 [2024-12-16 12:59:07.771362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:41.747 [2024-12-16 12:59:07.771419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:41.747 [2024-12-16 12:59:07.771432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:41.747 [2024-12-16 12:59:07.771438] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:41.747 [2024-12-16 12:59:07.771444] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:41.747 [2024-12-16 12:59:07.771458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:41.747 qpair failed and we were unable to recover it. 00:37:41.747 [2024-12-16 12:59:07.781381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:41.747 [2024-12-16 12:59:07.781433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:41.747 [2024-12-16 12:59:07.781446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:41.747 [2024-12-16 12:59:07.781452] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:41.747 [2024-12-16 12:59:07.781458] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:41.747 [2024-12-16 12:59:07.781472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:41.747 qpair failed and we were unable to recover it. 
00:37:41.747 [2024-12-16 12:59:07.791418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:41.748 [2024-12-16 12:59:07.791474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:41.748 [2024-12-16 12:59:07.791488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:41.748 [2024-12-16 12:59:07.791494] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:41.748 [2024-12-16 12:59:07.791500] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:41.748 [2024-12-16 12:59:07.791513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:41.748 qpair failed and we were unable to recover it. 00:37:41.748 [2024-12-16 12:59:07.801379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:41.748 [2024-12-16 12:59:07.801428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:41.748 [2024-12-16 12:59:07.801442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:41.748 [2024-12-16 12:59:07.801450] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:41.748 [2024-12-16 12:59:07.801458] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:41.748 [2024-12-16 12:59:07.801472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:41.748 qpair failed and we were unable to recover it. 00:37:42.009 [2024-12-16 12:59:07.811473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:42.009 [2024-12-16 12:59:07.811529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:42.009 [2024-12-16 12:59:07.811543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:42.009 [2024-12-16 12:59:07.811550] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:42.009 [2024-12-16 12:59:07.811556] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:42.009 [2024-12-16 12:59:07.811570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:42.009 qpair failed and we were unable to recover it. 
00:37:42.009 [2024-12-16 12:59:07.821459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:42.009 [2024-12-16 12:59:07.821516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:42.009 [2024-12-16 12:59:07.821530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:42.009 [2024-12-16 12:59:07.821537] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:42.009 [2024-12-16 12:59:07.821542] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:42.009 [2024-12-16 12:59:07.821556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:42.009 qpair failed and we were unable to recover it. 00:37:42.009 [2024-12-16 12:59:07.831516] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:42.009 [2024-12-16 12:59:07.831570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:42.009 [2024-12-16 12:59:07.831583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:42.009 [2024-12-16 12:59:07.831593] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:42.009 [2024-12-16 12:59:07.831599] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:42.009 [2024-12-16 12:59:07.831613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:42.009 qpair failed and we were unable to recover it. 00:37:42.009 [2024-12-16 12:59:07.841482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:42.009 [2024-12-16 12:59:07.841541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:42.009 [2024-12-16 12:59:07.841554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:42.009 [2024-12-16 12:59:07.841560] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:42.009 [2024-12-16 12:59:07.841566] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:42.009 [2024-12-16 12:59:07.841580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:42.009 qpair failed and we were unable to recover it. 
00:37:42.009 [2024-12-16 12:59:07.851578] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:42.009 [2024-12-16 12:59:07.851636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:42.009 [2024-12-16 12:59:07.851650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:42.009 [2024-12-16 12:59:07.851656] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:42.009 [2024-12-16 12:59:07.851662] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:42.009 [2024-12-16 12:59:07.851676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:42.009 qpair failed and we were unable to recover it. 00:37:42.009 [2024-12-16 12:59:07.861625] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:42.009 [2024-12-16 12:59:07.861681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:42.009 [2024-12-16 12:59:07.861695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:42.009 [2024-12-16 12:59:07.861702] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:42.009 [2024-12-16 12:59:07.861708] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:42.009 [2024-12-16 12:59:07.861722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:42.009 qpair failed and we were unable to recover it. 00:37:42.009 [2024-12-16 12:59:07.871597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:42.009 [2024-12-16 12:59:07.871648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:42.009 [2024-12-16 12:59:07.871661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:42.009 [2024-12-16 12:59:07.871668] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:42.009 [2024-12-16 12:59:07.871674] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:42.009 [2024-12-16 12:59:07.871688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:42.009 qpair failed and we were unable to recover it. 
00:37:42.009 [2024-12-16 12:59:07.881689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:42.009 [2024-12-16 12:59:07.881739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:42.009 [2024-12-16 12:59:07.881752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:42.009 [2024-12-16 12:59:07.881758] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:42.009 [2024-12-16 12:59:07.881764] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:42.009 [2024-12-16 12:59:07.881778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:42.009 qpair failed and we were unable to recover it. 00:37:42.009 [2024-12-16 12:59:07.891657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:42.009 [2024-12-16 12:59:07.891758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:42.009 [2024-12-16 12:59:07.891771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:42.009 [2024-12-16 12:59:07.891777] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:42.009 [2024-12-16 12:59:07.891783] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:42.009 [2024-12-16 12:59:07.891796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:42.009 qpair failed and we were unable to recover it. 00:37:42.009 [2024-12-16 12:59:07.901771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:42.009 [2024-12-16 12:59:07.901824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:42.009 [2024-12-16 12:59:07.901837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:42.009 [2024-12-16 12:59:07.901844] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:42.009 [2024-12-16 12:59:07.901850] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:42.009 [2024-12-16 12:59:07.901863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:42.009 qpair failed and we were unable to recover it. 
00:37:42.009 [2024-12-16 12:59:07.911782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:42.009 [2024-12-16 12:59:07.911872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:42.009 [2024-12-16 12:59:07.911885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:42.009 [2024-12-16 12:59:07.911892] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:42.009 [2024-12-16 12:59:07.911897] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:42.009 [2024-12-16 12:59:07.911911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:42.009 qpair failed and we were unable to recover it. 00:37:42.010 [2024-12-16 12:59:07.921719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:42.010 [2024-12-16 12:59:07.921773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:42.010 [2024-12-16 12:59:07.921785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:42.010 [2024-12-16 12:59:07.921795] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:42.010 [2024-12-16 12:59:07.921801] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:42.010 [2024-12-16 12:59:07.921814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:42.010 qpair failed and we were unable to recover it. 00:37:42.010 [2024-12-16 12:59:07.931760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:42.010 [2024-12-16 12:59:07.931831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:42.010 [2024-12-16 12:59:07.931846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:42.010 [2024-12-16 12:59:07.931852] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:42.010 [2024-12-16 12:59:07.931859] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:42.010 [2024-12-16 12:59:07.931873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:42.010 qpair failed and we were unable to recover it. 
00:37:42.010 [2024-12-16 12:59:07.941881] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.010 [2024-12-16 12:59:07.941938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.010 [2024-12-16 12:59:07.941952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.010 [2024-12-16 12:59:07.941958] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.010 [2024-12-16 12:59:07.941964] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.010 [2024-12-16 12:59:07.941979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.010 qpair failed and we were unable to recover it.
00:37:42.010 [2024-12-16 12:59:07.951899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.010 [2024-12-16 12:59:07.951950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.010 [2024-12-16 12:59:07.951963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.010 [2024-12-16 12:59:07.951969] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.010 [2024-12-16 12:59:07.951975] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.010 [2024-12-16 12:59:07.951989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.010 qpair failed and we were unable to recover it.
00:37:42.010 [2024-12-16 12:59:07.961902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.010 [2024-12-16 12:59:07.961952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.010 [2024-12-16 12:59:07.961965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.010 [2024-12-16 12:59:07.961971] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.010 [2024-12-16 12:59:07.961977] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.010 [2024-12-16 12:59:07.961992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.010 qpair failed and we were unable to recover it.
00:37:42.010 [2024-12-16 12:59:07.971924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.010 [2024-12-16 12:59:07.971980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.010 [2024-12-16 12:59:07.971993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.010 [2024-12-16 12:59:07.971999] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.010 [2024-12-16 12:59:07.972005] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.010 [2024-12-16 12:59:07.972018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.010 qpair failed and we were unable to recover it.
00:37:42.010 [2024-12-16 12:59:07.981993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.010 [2024-12-16 12:59:07.982055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.010 [2024-12-16 12:59:07.982068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.010 [2024-12-16 12:59:07.982075] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.010 [2024-12-16 12:59:07.982081] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.010 [2024-12-16 12:59:07.982094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.010 qpair failed and we were unable to recover it.
00:37:42.010 [2024-12-16 12:59:07.991990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.010 [2024-12-16 12:59:07.992046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.010 [2024-12-16 12:59:07.992059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.010 [2024-12-16 12:59:07.992065] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.010 [2024-12-16 12:59:07.992071] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.010 [2024-12-16 12:59:07.992085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.010 qpair failed and we were unable to recover it.
00:37:42.010 [2024-12-16 12:59:08.002015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.010 [2024-12-16 12:59:08.002069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.010 [2024-12-16 12:59:08.002083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.010 [2024-12-16 12:59:08.002089] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.010 [2024-12-16 12:59:08.002095] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.010 [2024-12-16 12:59:08.002109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.010 qpair failed and we were unable to recover it.
00:37:42.010 [2024-12-16 12:59:08.012039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.010 [2024-12-16 12:59:08.012096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.010 [2024-12-16 12:59:08.012109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.010 [2024-12-16 12:59:08.012131] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.010 [2024-12-16 12:59:08.012137] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.010 [2024-12-16 12:59:08.012151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.010 qpair failed and we were unable to recover it.
00:37:42.010 [2024-12-16 12:59:08.022033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.010 [2024-12-16 12:59:08.022125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.010 [2024-12-16 12:59:08.022139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.010 [2024-12-16 12:59:08.022145] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.010 [2024-12-16 12:59:08.022151] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.010 [2024-12-16 12:59:08.022164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.010 qpair failed and we were unable to recover it.
00:37:42.010 [2024-12-16 12:59:08.032085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.010 [2024-12-16 12:59:08.032189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.010 [2024-12-16 12:59:08.032202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.010 [2024-12-16 12:59:08.032208] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.010 [2024-12-16 12:59:08.032214] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.010 [2024-12-16 12:59:08.032228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.010 qpair failed and we were unable to recover it.
00:37:42.010 [2024-12-16 12:59:08.042202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.010 [2024-12-16 12:59:08.042260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.010 [2024-12-16 12:59:08.042273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.010 [2024-12-16 12:59:08.042279] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.010 [2024-12-16 12:59:08.042285] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.010 [2024-12-16 12:59:08.042299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.010 qpair failed and we were unable to recover it.
00:37:42.010 [2024-12-16 12:59:08.052238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.010 [2024-12-16 12:59:08.052294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.010 [2024-12-16 12:59:08.052307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.011 [2024-12-16 12:59:08.052313] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.011 [2024-12-16 12:59:08.052319] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.011 [2024-12-16 12:59:08.052333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.011 qpair failed and we were unable to recover it.
00:37:42.011 [2024-12-16 12:59:08.062222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.011 [2024-12-16 12:59:08.062276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.011 [2024-12-16 12:59:08.062289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.011 [2024-12-16 12:59:08.062295] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.011 [2024-12-16 12:59:08.062301] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.011 [2024-12-16 12:59:08.062315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.011 qpair failed and we were unable to recover it.
00:37:42.011 [2024-12-16 12:59:08.072242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.011 [2024-12-16 12:59:08.072299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.011 [2024-12-16 12:59:08.072312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.011 [2024-12-16 12:59:08.072318] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.011 [2024-12-16 12:59:08.072324] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.011 [2024-12-16 12:59:08.072337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.011 qpair failed and we were unable to recover it.
00:37:42.271 [2024-12-16 12:59:08.082152] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.271 [2024-12-16 12:59:08.082203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.271 [2024-12-16 12:59:08.082216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.272 [2024-12-16 12:59:08.082223] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.272 [2024-12-16 12:59:08.082229] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.272 [2024-12-16 12:59:08.082243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.272 qpair failed and we were unable to recover it.
00:37:42.272 [2024-12-16 12:59:08.092282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.272 [2024-12-16 12:59:08.092384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.272 [2024-12-16 12:59:08.092397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.272 [2024-12-16 12:59:08.092403] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.272 [2024-12-16 12:59:08.092409] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.272 [2024-12-16 12:59:08.092422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.272 qpair failed and we were unable to recover it.
00:37:42.272 [2024-12-16 12:59:08.102294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.272 [2024-12-16 12:59:08.102349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.272 [2024-12-16 12:59:08.102370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.272 [2024-12-16 12:59:08.102377] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.272 [2024-12-16 12:59:08.102383] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.272 [2024-12-16 12:59:08.102397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.272 qpair failed and we were unable to recover it.
00:37:42.272 [2024-12-16 12:59:08.112295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.272 [2024-12-16 12:59:08.112350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.272 [2024-12-16 12:59:08.112364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.272 [2024-12-16 12:59:08.112370] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.272 [2024-12-16 12:59:08.112376] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.272 [2024-12-16 12:59:08.112389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.272 qpair failed and we were unable to recover it.
00:37:42.272 [2024-12-16 12:59:08.122344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.272 [2024-12-16 12:59:08.122395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.272 [2024-12-16 12:59:08.122408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.272 [2024-12-16 12:59:08.122415] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.272 [2024-12-16 12:59:08.122421] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.272 [2024-12-16 12:59:08.122434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.272 qpair failed and we were unable to recover it.
00:37:42.272 [2024-12-16 12:59:08.132351] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.272 [2024-12-16 12:59:08.132408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.272 [2024-12-16 12:59:08.132423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.272 [2024-12-16 12:59:08.132430] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.272 [2024-12-16 12:59:08.132436] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.272 [2024-12-16 12:59:08.132450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.272 qpair failed and we were unable to recover it.
00:37:42.272 [2024-12-16 12:59:08.142400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.272 [2024-12-16 12:59:08.142459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.272 [2024-12-16 12:59:08.142472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.272 [2024-12-16 12:59:08.142478] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.272 [2024-12-16 12:59:08.142484] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.272 [2024-12-16 12:59:08.142498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.272 qpair failed and we were unable to recover it.
00:37:42.272 [2024-12-16 12:59:08.152430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.272 [2024-12-16 12:59:08.152480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.272 [2024-12-16 12:59:08.152493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.272 [2024-12-16 12:59:08.152500] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.272 [2024-12-16 12:59:08.152506] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.272 [2024-12-16 12:59:08.152520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.272 qpair failed and we were unable to recover it.
00:37:42.272 [2024-12-16 12:59:08.162442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.272 [2024-12-16 12:59:08.162495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.272 [2024-12-16 12:59:08.162508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.272 [2024-12-16 12:59:08.162514] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.272 [2024-12-16 12:59:08.162520] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.272 [2024-12-16 12:59:08.162534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.272 qpair failed and we were unable to recover it.
00:37:42.272 [2024-12-16 12:59:08.172505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.272 [2024-12-16 12:59:08.172564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.272 [2024-12-16 12:59:08.172577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.272 [2024-12-16 12:59:08.172584] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.272 [2024-12-16 12:59:08.172590] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.272 [2024-12-16 12:59:08.172603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.272 qpair failed and we were unable to recover it.
00:37:42.272 [2024-12-16 12:59:08.182506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.272 [2024-12-16 12:59:08.182562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.272 [2024-12-16 12:59:08.182575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.272 [2024-12-16 12:59:08.182581] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.272 [2024-12-16 12:59:08.182587] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.272 [2024-12-16 12:59:08.182602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.272 qpair failed and we were unable to recover it.
00:37:42.272 [2024-12-16 12:59:08.192526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.272 [2024-12-16 12:59:08.192593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.272 [2024-12-16 12:59:08.192609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.272 [2024-12-16 12:59:08.192615] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.272 [2024-12-16 12:59:08.192621] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.272 [2024-12-16 12:59:08.192634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.272 qpair failed and we were unable to recover it.
00:37:42.272 [2024-12-16 12:59:08.202489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.272 [2024-12-16 12:59:08.202540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.272 [2024-12-16 12:59:08.202554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.272 [2024-12-16 12:59:08.202560] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.272 [2024-12-16 12:59:08.202566] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.272 [2024-12-16 12:59:08.202579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.272 qpair failed and we were unable to recover it.
00:37:42.272 [2024-12-16 12:59:08.212568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.272 [2024-12-16 12:59:08.212626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.272 [2024-12-16 12:59:08.212639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.272 [2024-12-16 12:59:08.212645] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.272 [2024-12-16 12:59:08.212651] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.273 [2024-12-16 12:59:08.212665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.273 qpair failed and we were unable to recover it.
00:37:42.273 [2024-12-16 12:59:08.222557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.273 [2024-12-16 12:59:08.222611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.273 [2024-12-16 12:59:08.222623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.273 [2024-12-16 12:59:08.222630] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.273 [2024-12-16 12:59:08.222635] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.273 [2024-12-16 12:59:08.222649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.273 qpair failed and we were unable to recover it.
00:37:42.273 [2024-12-16 12:59:08.232675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.273 [2024-12-16 12:59:08.232731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.273 [2024-12-16 12:59:08.232744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.273 [2024-12-16 12:59:08.232750] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.273 [2024-12-16 12:59:08.232756] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.273 [2024-12-16 12:59:08.232773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.273 qpair failed and we were unable to recover it.
00:37:42.273 [2024-12-16 12:59:08.242683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.273 [2024-12-16 12:59:08.242753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.273 [2024-12-16 12:59:08.242767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.273 [2024-12-16 12:59:08.242773] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.273 [2024-12-16 12:59:08.242779] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.273 [2024-12-16 12:59:08.242793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.273 qpair failed and we were unable to recover it.
00:37:42.273 [2024-12-16 12:59:08.252706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.273 [2024-12-16 12:59:08.252777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.273 [2024-12-16 12:59:08.252790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.273 [2024-12-16 12:59:08.252796] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.273 [2024-12-16 12:59:08.252802] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.273 [2024-12-16 12:59:08.252815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.273 qpair failed and we were unable to recover it.
00:37:42.273 [2024-12-16 12:59:08.262739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.273 [2024-12-16 12:59:08.262794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.273 [2024-12-16 12:59:08.262808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.273 [2024-12-16 12:59:08.262814] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.273 [2024-12-16 12:59:08.262820] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.273 [2024-12-16 12:59:08.262833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.273 qpair failed and we were unable to recover it.
00:37:42.273 [2024-12-16 12:59:08.272791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.273 [2024-12-16 12:59:08.272841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.273 [2024-12-16 12:59:08.272854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.273 [2024-12-16 12:59:08.272860] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.273 [2024-12-16 12:59:08.272866] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.273 [2024-12-16 12:59:08.272880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.273 qpair failed and we were unable to recover it.
00:37:42.273 [2024-12-16 12:59:08.282781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.273 [2024-12-16 12:59:08.282856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.273 [2024-12-16 12:59:08.282872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.273 [2024-12-16 12:59:08.282878] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.273 [2024-12-16 12:59:08.282884] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.273 [2024-12-16 12:59:08.282897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.273 qpair failed and we were unable to recover it.
00:37:42.273 [2024-12-16 12:59:08.292815] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.273 [2024-12-16 12:59:08.292873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.273 [2024-12-16 12:59:08.292885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.273 [2024-12-16 12:59:08.292891] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.273 [2024-12-16 12:59:08.292897] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.273 [2024-12-16 12:59:08.292910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.273 qpair failed and we were unable to recover it.
00:37:42.273 [2024-12-16 12:59:08.302830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.273 [2024-12-16 12:59:08.302882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.273 [2024-12-16 12:59:08.302895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.273 [2024-12-16 12:59:08.302902] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.273 [2024-12-16 12:59:08.302908] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.273 [2024-12-16 12:59:08.302922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.273 qpair failed and we were unable to recover it.
00:37:42.273 [2024-12-16 12:59:08.312887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.273 [2024-12-16 12:59:08.312955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.273 [2024-12-16 12:59:08.312969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.273 [2024-12-16 12:59:08.312975] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.273 [2024-12-16 12:59:08.312981] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.273 [2024-12-16 12:59:08.312994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.273 qpair failed and we were unable to recover it.
00:37:42.273 [2024-12-16 12:59:08.322889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.273 [2024-12-16 12:59:08.322946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.273 [2024-12-16 12:59:08.322959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.273 [2024-12-16 12:59:08.322966] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.273 [2024-12-16 12:59:08.322971] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.273 [2024-12-16 12:59:08.322988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.273 qpair failed and we were unable to recover it.
00:37:42.273 [2024-12-16 12:59:08.332955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.273 [2024-12-16 12:59:08.333022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.273 [2024-12-16 12:59:08.333035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.273 [2024-12-16 12:59:08.333041] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.273 [2024-12-16 12:59:08.333047] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.273 [2024-12-16 12:59:08.333061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.273 qpair failed and we were unable to recover it.
00:37:42.534 [2024-12-16 12:59:08.342993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.534 [2024-12-16 12:59:08.343057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.534 [2024-12-16 12:59:08.343071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.534 [2024-12-16 12:59:08.343078] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.534 [2024-12-16 12:59:08.343083] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.534 [2024-12-16 12:59:08.343097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.534 qpair failed and we were unable to recover it.
00:37:42.534 [2024-12-16 12:59:08.353003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.534 [2024-12-16 12:59:08.353067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.534 [2024-12-16 12:59:08.353081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.534 [2024-12-16 12:59:08.353087] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.534 [2024-12-16 12:59:08.353093] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.534 [2024-12-16 12:59:08.353106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.534 qpair failed and we were unable to recover it.
00:37:42.534 [2024-12-16 12:59:08.362933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.534 [2024-12-16 12:59:08.363029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.534 [2024-12-16 12:59:08.363042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.534 [2024-12-16 12:59:08.363048] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.534 [2024-12-16 12:59:08.363053] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.534 [2024-12-16 12:59:08.363067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.534 qpair failed and we were unable to recover it.
00:37:42.534 [2024-12-16 12:59:08.373035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.534 [2024-12-16 12:59:08.373089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.535 [2024-12-16 12:59:08.373104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.535 [2024-12-16 12:59:08.373110] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.535 [2024-12-16 12:59:08.373119] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.535 [2024-12-16 12:59:08.373134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.535 qpair failed and we were unable to recover it.
00:37:42.535 [2024-12-16 12:59:08.383063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.535 [2024-12-16 12:59:08.383123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.535 [2024-12-16 12:59:08.383137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.535 [2024-12-16 12:59:08.383143] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.535 [2024-12-16 12:59:08.383149] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.535 [2024-12-16 12:59:08.383163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.535 qpair failed and we were unable to recover it.
00:37:42.535 [2024-12-16 12:59:08.393091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.535 [2024-12-16 12:59:08.393147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.535 [2024-12-16 12:59:08.393160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.535 [2024-12-16 12:59:08.393167] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.535 [2024-12-16 12:59:08.393173] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.535 [2024-12-16 12:59:08.393187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.535 qpair failed and we were unable to recover it.
00:37:42.535 [2024-12-16 12:59:08.403105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.535 [2024-12-16 12:59:08.403190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.535 [2024-12-16 12:59:08.403203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.535 [2024-12-16 12:59:08.403209] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.535 [2024-12-16 12:59:08.403215] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.535 [2024-12-16 12:59:08.403229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.535 qpair failed and we were unable to recover it.
00:37:42.535 [2024-12-16 12:59:08.413194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.535 [2024-12-16 12:59:08.413256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.535 [2024-12-16 12:59:08.413270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.535 [2024-12-16 12:59:08.413276] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.535 [2024-12-16 12:59:08.413282] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.535 [2024-12-16 12:59:08.413299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.535 qpair failed and we were unable to recover it.
00:37:42.535 [2024-12-16 12:59:08.423184] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.535 [2024-12-16 12:59:08.423240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.535 [2024-12-16 12:59:08.423253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.535 [2024-12-16 12:59:08.423260] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.535 [2024-12-16 12:59:08.423267] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.535 [2024-12-16 12:59:08.423280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.535 qpair failed and we were unable to recover it.
00:37:42.535 [2024-12-16 12:59:08.433195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.535 [2024-12-16 12:59:08.433274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.535 [2024-12-16 12:59:08.433287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.535 [2024-12-16 12:59:08.433294] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.535 [2024-12-16 12:59:08.433300] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.535 [2024-12-16 12:59:08.433313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.535 qpair failed and we were unable to recover it.
00:37:42.535 [2024-12-16 12:59:08.443265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.535 [2024-12-16 12:59:08.443321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.535 [2024-12-16 12:59:08.443335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.535 [2024-12-16 12:59:08.443341] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.535 [2024-12-16 12:59:08.443348] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.535 [2024-12-16 12:59:08.443362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.535 qpair failed and we were unable to recover it.
00:37:42.535 [2024-12-16 12:59:08.453273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.535 [2024-12-16 12:59:08.453336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.535 [2024-12-16 12:59:08.453349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.535 [2024-12-16 12:59:08.453355] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.535 [2024-12-16 12:59:08.453361] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.535 [2024-12-16 12:59:08.453374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.535 qpair failed and we were unable to recover it.
00:37:42.535 [2024-12-16 12:59:08.463321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.535 [2024-12-16 12:59:08.463384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.535 [2024-12-16 12:59:08.463400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.535 [2024-12-16 12:59:08.463407] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.535 [2024-12-16 12:59:08.463413] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.535 [2024-12-16 12:59:08.463427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.535 qpair failed and we were unable to recover it.
00:37:42.535 [2024-12-16 12:59:08.473375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.535 [2024-12-16 12:59:08.473433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.535 [2024-12-16 12:59:08.473445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.535 [2024-12-16 12:59:08.473451] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.535 [2024-12-16 12:59:08.473458] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.535 [2024-12-16 12:59:08.473471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.535 qpair failed and we were unable to recover it.
00:37:42.535 [2024-12-16 12:59:08.483356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.535 [2024-12-16 12:59:08.483405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.535 [2024-12-16 12:59:08.483418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.535 [2024-12-16 12:59:08.483424] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.535 [2024-12-16 12:59:08.483430] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.535 [2024-12-16 12:59:08.483443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.535 qpair failed and we were unable to recover it.
00:37:42.535 [2024-12-16 12:59:08.493429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.535 [2024-12-16 12:59:08.493484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.535 [2024-12-16 12:59:08.493497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.535 [2024-12-16 12:59:08.493502] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.535 [2024-12-16 12:59:08.493508] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.535 [2024-12-16 12:59:08.493521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.535 qpair failed and we were unable to recover it.
00:37:42.535 [2024-12-16 12:59:08.503468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.535 [2024-12-16 12:59:08.503572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.535 [2024-12-16 12:59:08.503588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.535 [2024-12-16 12:59:08.503595] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.535 [2024-12-16 12:59:08.503604] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.535 [2024-12-16 12:59:08.503619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.535 qpair failed and we were unable to recover it.
00:37:42.536 [2024-12-16 12:59:08.513498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.536 [2024-12-16 12:59:08.513548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.536 [2024-12-16 12:59:08.513562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.536 [2024-12-16 12:59:08.513568] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.536 [2024-12-16 12:59:08.513574] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.536 [2024-12-16 12:59:08.513587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.536 qpair failed and we were unable to recover it.
00:37:42.536 [2024-12-16 12:59:08.523490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.536 [2024-12-16 12:59:08.523542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.536 [2024-12-16 12:59:08.523556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.536 [2024-12-16 12:59:08.523562] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.536 [2024-12-16 12:59:08.523568] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.536 [2024-12-16 12:59:08.523582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.536 qpair failed and we were unable to recover it.
00:37:42.536 [2024-12-16 12:59:08.533561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.536 [2024-12-16 12:59:08.533660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.536 [2024-12-16 12:59:08.533673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.536 [2024-12-16 12:59:08.533679] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.536 [2024-12-16 12:59:08.533685] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.536 [2024-12-16 12:59:08.533698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.536 qpair failed and we were unable to recover it.
00:37:42.536 [2024-12-16 12:59:08.543576] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.536 [2024-12-16 12:59:08.543632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.536 [2024-12-16 12:59:08.543645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.536 [2024-12-16 12:59:08.543651] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.536 [2024-12-16 12:59:08.543657] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.536 [2024-12-16 12:59:08.543671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.536 qpair failed and we were unable to recover it.
00:37:42.536 [2024-12-16 12:59:08.553582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.536 [2024-12-16 12:59:08.553632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.536 [2024-12-16 12:59:08.553649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.536 [2024-12-16 12:59:08.553655] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.536 [2024-12-16 12:59:08.553661] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.536 [2024-12-16 12:59:08.553675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.536 qpair failed and we were unable to recover it.
00:37:42.536 [2024-12-16 12:59:08.563531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.536 [2024-12-16 12:59:08.563584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.536 [2024-12-16 12:59:08.563598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.536 [2024-12-16 12:59:08.563604] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.536 [2024-12-16 12:59:08.563610] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.536 [2024-12-16 12:59:08.563624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.536 qpair failed and we were unable to recover it.
00:37:42.536 [2024-12-16 12:59:08.573641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.536 [2024-12-16 12:59:08.573695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.536 [2024-12-16 12:59:08.573708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.536 [2024-12-16 12:59:08.573715] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.536 [2024-12-16 12:59:08.573721] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.536 [2024-12-16 12:59:08.573734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.536 qpair failed and we were unable to recover it.
00:37:42.536 [2024-12-16 12:59:08.583647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.536 [2024-12-16 12:59:08.583699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.536 [2024-12-16 12:59:08.583712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.536 [2024-12-16 12:59:08.583719] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.536 [2024-12-16 12:59:08.583724] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.536 [2024-12-16 12:59:08.583738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.536 qpair failed and we were unable to recover it.
00:37:42.536 [2024-12-16 12:59:08.593618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.536 [2024-12-16 12:59:08.593674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.536 [2024-12-16 12:59:08.593687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.536 [2024-12-16 12:59:08.593693] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.536 [2024-12-16 12:59:08.593702] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.536 [2024-12-16 12:59:08.593715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.536 qpair failed and we were unable to recover it.
00:37:42.797 [2024-12-16 12:59:08.603736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.797 [2024-12-16 12:59:08.603798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.797 [2024-12-16 12:59:08.603811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.797 [2024-12-16 12:59:08.603817] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.797 [2024-12-16 12:59:08.603823] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.797 [2024-12-16 12:59:08.603837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.797 qpair failed and we were unable to recover it.
00:37:42.797 [2024-12-16 12:59:08.613753] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.797 [2024-12-16 12:59:08.613807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.797 [2024-12-16 12:59:08.613820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.797 [2024-12-16 12:59:08.613826] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.797 [2024-12-16 12:59:08.613832] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.797 [2024-12-16 12:59:08.613846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.797 qpair failed and we were unable to recover it.
00:37:42.797 [2024-12-16 12:59:08.623785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:42.797 [2024-12-16 12:59:08.623836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:42.797 [2024-12-16 12:59:08.623849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:42.797 [2024-12-16 12:59:08.623855] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:42.797 [2024-12-16 12:59:08.623861] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:42.797 [2024-12-16 12:59:08.623874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.797 qpair failed and we were unable to recover it.
00:37:42.797 [2024-12-16 12:59:08.633802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:42.797 [2024-12-16 12:59:08.633892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:42.797 [2024-12-16 12:59:08.633905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:42.797 [2024-12-16 12:59:08.633912] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:42.797 [2024-12-16 12:59:08.633917] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:42.797 [2024-12-16 12:59:08.633931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:42.797 qpair failed and we were unable to recover it. 00:37:42.797 [2024-12-16 12:59:08.643834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:42.797 [2024-12-16 12:59:08.643892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:42.797 [2024-12-16 12:59:08.643905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:42.797 [2024-12-16 12:59:08.643911] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:42.797 [2024-12-16 12:59:08.643917] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:42.797 [2024-12-16 12:59:08.643930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:42.797 qpair failed and we were unable to recover it. 00:37:42.797 [2024-12-16 12:59:08.653914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:42.797 [2024-12-16 12:59:08.653972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:42.797 [2024-12-16 12:59:08.653986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:42.797 [2024-12-16 12:59:08.653992] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:42.797 [2024-12-16 12:59:08.653998] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:42.797 [2024-12-16 12:59:08.654012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:42.797 qpair failed and we were unable to recover it. 
00:37:42.797 [2024-12-16 12:59:08.663889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:42.797 [2024-12-16 12:59:08.663986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:42.797 [2024-12-16 12:59:08.663999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:42.797 [2024-12-16 12:59:08.664005] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:42.797 [2024-12-16 12:59:08.664011] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:42.797 [2024-12-16 12:59:08.664025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:42.797 qpair failed and we were unable to recover it. 00:37:42.797 [2024-12-16 12:59:08.673917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:42.797 [2024-12-16 12:59:08.673967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:42.797 [2024-12-16 12:59:08.673980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:42.797 [2024-12-16 12:59:08.673987] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:42.797 [2024-12-16 12:59:08.673993] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:42.797 [2024-12-16 12:59:08.674006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:42.797 qpair failed and we were unable to recover it. 00:37:42.797 [2024-12-16 12:59:08.683941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:42.797 [2024-12-16 12:59:08.684016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:42.797 [2024-12-16 12:59:08.684029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:42.797 [2024-12-16 12:59:08.684036] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:42.797 [2024-12-16 12:59:08.684045] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:42.797 [2024-12-16 12:59:08.684059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:42.797 qpair failed and we were unable to recover it. 
00:37:42.797 [2024-12-16 12:59:08.693981] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:42.797 [2024-12-16 12:59:08.694038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:42.797 [2024-12-16 12:59:08.694051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:42.797 [2024-12-16 12:59:08.694057] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:42.797 [2024-12-16 12:59:08.694063] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:42.797 [2024-12-16 12:59:08.694077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:42.797 qpair failed and we were unable to recover it. 00:37:42.797 [2024-12-16 12:59:08.704024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:42.797 [2024-12-16 12:59:08.704084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:42.797 [2024-12-16 12:59:08.704098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:42.797 [2024-12-16 12:59:08.704104] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:42.797 [2024-12-16 12:59:08.704110] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:42.797 [2024-12-16 12:59:08.704130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:42.797 qpair failed and we were unable to recover it. 00:37:42.797 [2024-12-16 12:59:08.714030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:42.797 [2024-12-16 12:59:08.714085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:42.797 [2024-12-16 12:59:08.714098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:42.798 [2024-12-16 12:59:08.714104] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:42.798 [2024-12-16 12:59:08.714111] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:42.798 [2024-12-16 12:59:08.714129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:42.798 qpair failed and we were unable to recover it. 
00:37:42.798 [2024-12-16 12:59:08.724078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:42.798 [2024-12-16 12:59:08.724132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:42.798 [2024-12-16 12:59:08.724145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:42.798 [2024-12-16 12:59:08.724152] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:42.798 [2024-12-16 12:59:08.724158] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:42.798 [2024-12-16 12:59:08.724172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:42.798 qpair failed and we were unable to recover it. 00:37:42.798 [2024-12-16 12:59:08.734085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:42.798 [2024-12-16 12:59:08.734149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:42.798 [2024-12-16 12:59:08.734162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:42.798 [2024-12-16 12:59:08.734169] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:42.798 [2024-12-16 12:59:08.734174] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:42.798 [2024-12-16 12:59:08.734187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:42.798 qpair failed and we were unable to recover it. 00:37:42.798 [2024-12-16 12:59:08.744207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:42.798 [2024-12-16 12:59:08.744268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:42.798 [2024-12-16 12:59:08.744283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:42.798 [2024-12-16 12:59:08.744289] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:42.798 [2024-12-16 12:59:08.744295] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:42.798 [2024-12-16 12:59:08.744309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:42.798 qpair failed and we were unable to recover it. 
00:37:42.798 [2024-12-16 12:59:08.754192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:42.798 [2024-12-16 12:59:08.754242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:42.798 [2024-12-16 12:59:08.754255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:42.798 [2024-12-16 12:59:08.754261] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:42.798 [2024-12-16 12:59:08.754267] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:42.798 [2024-12-16 12:59:08.754280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:42.798 qpair failed and we were unable to recover it. 00:37:42.798 [2024-12-16 12:59:08.764109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:42.798 [2024-12-16 12:59:08.764167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:42.798 [2024-12-16 12:59:08.764180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:42.798 [2024-12-16 12:59:08.764187] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:42.798 [2024-12-16 12:59:08.764193] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:42.798 [2024-12-16 12:59:08.764207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:42.798 qpair failed and we were unable to recover it. 00:37:42.798 [2024-12-16 12:59:08.774207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:42.798 [2024-12-16 12:59:08.774261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:42.798 [2024-12-16 12:59:08.774273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:42.798 [2024-12-16 12:59:08.774279] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:42.798 [2024-12-16 12:59:08.774288] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:42.798 [2024-12-16 12:59:08.774302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:42.798 qpair failed and we were unable to recover it. 
00:37:42.798 [2024-12-16 12:59:08.784162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:42.798 [2024-12-16 12:59:08.784214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:42.798 [2024-12-16 12:59:08.784226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:42.798 [2024-12-16 12:59:08.784232] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:42.798 [2024-12-16 12:59:08.784238] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:42.798 [2024-12-16 12:59:08.784252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:42.798 qpair failed and we were unable to recover it. 00:37:42.798 [2024-12-16 12:59:08.794279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:42.798 [2024-12-16 12:59:08.794337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:42.798 [2024-12-16 12:59:08.794350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:42.798 [2024-12-16 12:59:08.794356] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:42.798 [2024-12-16 12:59:08.794362] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:42.798 [2024-12-16 12:59:08.794376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:42.798 qpair failed and we were unable to recover it. 00:37:42.798 [2024-12-16 12:59:08.804288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:42.798 [2024-12-16 12:59:08.804341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:42.798 [2024-12-16 12:59:08.804354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:42.798 [2024-12-16 12:59:08.804360] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:42.798 [2024-12-16 12:59:08.804366] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:42.798 [2024-12-16 12:59:08.804380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:42.798 qpair failed and we were unable to recover it. 
00:37:42.798 [2024-12-16 12:59:08.814323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:42.798 [2024-12-16 12:59:08.814379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:42.798 [2024-12-16 12:59:08.814392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:42.798 [2024-12-16 12:59:08.814399] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:42.798 [2024-12-16 12:59:08.814404] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:42.798 [2024-12-16 12:59:08.814418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:42.798 qpair failed and we were unable to recover it. 00:37:42.798 [2024-12-16 12:59:08.824344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:42.798 [2024-12-16 12:59:08.824405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:42.798 [2024-12-16 12:59:08.824418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:42.798 [2024-12-16 12:59:08.824424] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:42.798 [2024-12-16 12:59:08.824431] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:42.798 [2024-12-16 12:59:08.824444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:42.798 qpair failed and we were unable to recover it. 00:37:42.798 [2024-12-16 12:59:08.834419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:42.798 [2024-12-16 12:59:08.834475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:42.798 [2024-12-16 12:59:08.834488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:42.798 [2024-12-16 12:59:08.834494] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:42.798 [2024-12-16 12:59:08.834500] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:42.798 [2024-12-16 12:59:08.834513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:42.798 qpair failed and we were unable to recover it. 
00:37:42.798 [2024-12-16 12:59:08.844410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:42.798 [2024-12-16 12:59:08.844463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:42.798 [2024-12-16 12:59:08.844476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:42.798 [2024-12-16 12:59:08.844482] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:42.798 [2024-12-16 12:59:08.844488] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:42.798 [2024-12-16 12:59:08.844501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:42.798 qpair failed and we were unable to recover it. 00:37:42.798 [2024-12-16 12:59:08.854428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:42.798 [2024-12-16 12:59:08.854482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:42.799 [2024-12-16 12:59:08.854495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:42.799 [2024-12-16 12:59:08.854501] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:42.799 [2024-12-16 12:59:08.854507] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:42.799 [2024-12-16 12:59:08.854521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:42.799 qpair failed and we were unable to recover it. 00:37:43.059 [2024-12-16 12:59:08.864472] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.059 [2024-12-16 12:59:08.864527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.059 [2024-12-16 12:59:08.864540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.059 [2024-12-16 12:59:08.864550] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.059 [2024-12-16 12:59:08.864556] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:43.059 [2024-12-16 12:59:08.864570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:43.059 qpair failed and we were unable to recover it. 
00:37:43.059 [2024-12-16 12:59:08.874496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.059 [2024-12-16 12:59:08.874550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.059 [2024-12-16 12:59:08.874563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.059 [2024-12-16 12:59:08.874569] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.059 [2024-12-16 12:59:08.874575] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:43.059 [2024-12-16 12:59:08.874588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:43.059 qpair failed and we were unable to recover it. 00:37:43.059 [2024-12-16 12:59:08.884518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.059 [2024-12-16 12:59:08.884573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.059 [2024-12-16 12:59:08.884586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.059 [2024-12-16 12:59:08.884592] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.059 [2024-12-16 12:59:08.884598] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:43.059 [2024-12-16 12:59:08.884612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:43.059 qpair failed and we were unable to recover it. 00:37:43.059 [2024-12-16 12:59:08.894552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.059 [2024-12-16 12:59:08.894605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.059 [2024-12-16 12:59:08.894617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.059 [2024-12-16 12:59:08.894623] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.059 [2024-12-16 12:59:08.894629] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:43.059 [2024-12-16 12:59:08.894642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:43.059 qpair failed and we were unable to recover it. 
00:37:43.059 [2024-12-16 12:59:08.904581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.059 [2024-12-16 12:59:08.904634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.059 [2024-12-16 12:59:08.904647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.059 [2024-12-16 12:59:08.904653] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.059 [2024-12-16 12:59:08.904659] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:43.059 [2024-12-16 12:59:08.904673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:43.059 qpair failed and we were unable to recover it. 00:37:43.059 [2024-12-16 12:59:08.914594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.059 [2024-12-16 12:59:08.914667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.059 [2024-12-16 12:59:08.914680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.059 [2024-12-16 12:59:08.914686] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.059 [2024-12-16 12:59:08.914692] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:43.059 [2024-12-16 12:59:08.914706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:43.059 qpair failed and we were unable to recover it. 00:37:43.059 [2024-12-16 12:59:08.924540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.059 [2024-12-16 12:59:08.924617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.059 [2024-12-16 12:59:08.924630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.059 [2024-12-16 12:59:08.924636] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.059 [2024-12-16 12:59:08.924642] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:43.059 [2024-12-16 12:59:08.924655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:43.059 qpair failed and we were unable to recover it. 
00:37:43.059 [2024-12-16 12:59:08.934655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.059 [2024-12-16 12:59:08.934710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.059 [2024-12-16 12:59:08.934723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.059 [2024-12-16 12:59:08.934729] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.059 [2024-12-16 12:59:08.934735] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:43.059 [2024-12-16 12:59:08.934749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:43.059 qpair failed and we were unable to recover it. 00:37:43.059 [2024-12-16 12:59:08.944709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.060 [2024-12-16 12:59:08.944778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.060 [2024-12-16 12:59:08.944791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.060 [2024-12-16 12:59:08.944797] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.060 [2024-12-16 12:59:08.944803] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:43.060 [2024-12-16 12:59:08.944816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:43.060 qpair failed and we were unable to recover it. 00:37:43.060 [2024-12-16 12:59:08.954703] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.060 [2024-12-16 12:59:08.954754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.060 [2024-12-16 12:59:08.954767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.060 [2024-12-16 12:59:08.954776] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.060 [2024-12-16 12:59:08.954782] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:43.060 [2024-12-16 12:59:08.954795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:43.060 qpair failed and we were unable to recover it. 
00:37:43.060 [2024-12-16 12:59:08.964731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.060 [2024-12-16 12:59:08.964779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.060 [2024-12-16 12:59:08.964793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.060 [2024-12-16 12:59:08.964799] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.060 [2024-12-16 12:59:08.964805] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:43.060 [2024-12-16 12:59:08.964819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:43.060 qpair failed and we were unable to recover it. 00:37:43.060 [2024-12-16 12:59:08.974767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.060 [2024-12-16 12:59:08.974822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.060 [2024-12-16 12:59:08.974835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.060 [2024-12-16 12:59:08.974841] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.060 [2024-12-16 12:59:08.974847] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:43.060 [2024-12-16 12:59:08.974860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:43.060 qpair failed and we were unable to recover it. 00:37:43.060 [2024-12-16 12:59:08.984784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.060 [2024-12-16 12:59:08.984835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.060 [2024-12-16 12:59:08.984848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.060 [2024-12-16 12:59:08.984854] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.060 [2024-12-16 12:59:08.984861] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:43.060 [2024-12-16 12:59:08.984874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:43.060 qpair failed and we were unable to recover it. 
00:37:43.060 [2024-12-16 12:59:08.994820] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.060 [2024-12-16 12:59:08.994877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.060 [2024-12-16 12:59:08.994890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.060 [2024-12-16 12:59:08.994896] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.060 [2024-12-16 12:59:08.994902] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:43.060 [2024-12-16 12:59:08.994915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:43.060 qpair failed and we were unable to recover it. 00:37:43.060 [2024-12-16 12:59:09.004848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.060 [2024-12-16 12:59:09.004901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.060 [2024-12-16 12:59:09.004914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.060 [2024-12-16 12:59:09.004920] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.060 [2024-12-16 12:59:09.004926] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:43.060 [2024-12-16 12:59:09.004940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:43.060 qpair failed and we were unable to recover it. 00:37:43.060 [2024-12-16 12:59:09.014875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.060 [2024-12-16 12:59:09.014928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.060 [2024-12-16 12:59:09.014942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.060 [2024-12-16 12:59:09.014948] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.060 [2024-12-16 12:59:09.014954] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:43.060 [2024-12-16 12:59:09.014968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:43.060 qpair failed and we were unable to recover it. 
00:37:43.060 [2024-12-16 12:59:09.024902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.060 [2024-12-16 12:59:09.024956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.060 [2024-12-16 12:59:09.024970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.060 [2024-12-16 12:59:09.024976] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.060 [2024-12-16 12:59:09.024982] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:43.060 [2024-12-16 12:59:09.024997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:43.060 qpair failed and we were unable to recover it. 00:37:43.060 [2024-12-16 12:59:09.034928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.060 [2024-12-16 12:59:09.034982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.060 [2024-12-16 12:59:09.034995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.060 [2024-12-16 12:59:09.035001] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.060 [2024-12-16 12:59:09.035007] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:43.060 [2024-12-16 12:59:09.035021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:43.060 qpair failed and we were unable to recover it. 00:37:43.060 [2024-12-16 12:59:09.044932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.060 [2024-12-16 12:59:09.044988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.060 [2024-12-16 12:59:09.045001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.060 [2024-12-16 12:59:09.045010] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.060 [2024-12-16 12:59:09.045016] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:43.060 [2024-12-16 12:59:09.045029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:43.060 qpair failed and we were unable to recover it. 
00:37:43.060 [2024-12-16 12:59:09.054980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.060 [2024-12-16 12:59:09.055040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.060 [2024-12-16 12:59:09.055053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.060 [2024-12-16 12:59:09.055059] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.060 [2024-12-16 12:59:09.055065] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:43.060 [2024-12-16 12:59:09.055079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:43.060 qpair failed and we were unable to recover it. 00:37:43.060 [2024-12-16 12:59:09.064971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.060 [2024-12-16 12:59:09.065044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.060 [2024-12-16 12:59:09.065057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.060 [2024-12-16 12:59:09.065064] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.060 [2024-12-16 12:59:09.065069] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:43.060 [2024-12-16 12:59:09.065084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:43.060 qpair failed and we were unable to recover it. 00:37:43.060 [2024-12-16 12:59:09.075048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.061 [2024-12-16 12:59:09.075133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.061 [2024-12-16 12:59:09.075148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.061 [2024-12-16 12:59:09.075154] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.061 [2024-12-16 12:59:09.075160] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:43.061 [2024-12-16 12:59:09.075175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:43.061 qpair failed and we were unable to recover it. 
00:37:43.061 [2024-12-16 12:59:09.085060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.061 [2024-12-16 12:59:09.085120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.061 [2024-12-16 12:59:09.085134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.061 [2024-12-16 12:59:09.085140] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.061 [2024-12-16 12:59:09.085146] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:43.061 [2024-12-16 12:59:09.085159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:43.061 qpair failed and we were unable to recover it. 00:37:43.061 [2024-12-16 12:59:09.095110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.061 [2024-12-16 12:59:09.095185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.061 [2024-12-16 12:59:09.095198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.061 [2024-12-16 12:59:09.095205] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.061 [2024-12-16 12:59:09.095211] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:43.061 [2024-12-16 12:59:09.095225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:43.061 qpair failed and we were unable to recover it. 00:37:43.061 [2024-12-16 12:59:09.105111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.061 [2024-12-16 12:59:09.105170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.061 [2024-12-16 12:59:09.105184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.061 [2024-12-16 12:59:09.105190] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.061 [2024-12-16 12:59:09.105196] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:43.061 [2024-12-16 12:59:09.105210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:43.061 qpair failed and we were unable to recover it. 
00:37:43.061 [2024-12-16 12:59:09.115146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.061 [2024-12-16 12:59:09.115195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.061 [2024-12-16 12:59:09.115208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.061 [2024-12-16 12:59:09.115214] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.061 [2024-12-16 12:59:09.115220] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:43.061 [2024-12-16 12:59:09.115234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:43.061 qpair failed and we were unable to recover it. 00:37:43.321 [2024-12-16 12:59:09.125174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.321 [2024-12-16 12:59:09.125227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.321 [2024-12-16 12:59:09.125240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.321 [2024-12-16 12:59:09.125246] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.321 [2024-12-16 12:59:09.125252] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:43.321 [2024-12-16 12:59:09.125266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:43.321 qpair failed and we were unable to recover it. 00:37:43.321 [2024-12-16 12:59:09.135205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.321 [2024-12-16 12:59:09.135270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.321 [2024-12-16 12:59:09.135283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.321 [2024-12-16 12:59:09.135292] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.321 [2024-12-16 12:59:09.135298] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:43.321 [2024-12-16 12:59:09.135311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:43.321 qpair failed and we were unable to recover it. 
00:37:43.321 [2024-12-16 12:59:09.145247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.321 [2024-12-16 12:59:09.145298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.321 [2024-12-16 12:59:09.145311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.321 [2024-12-16 12:59:09.145318] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.321 [2024-12-16 12:59:09.145324] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.321 [2024-12-16 12:59:09.145337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.321 qpair failed and we were unable to recover it.
00:37:43.321 [2024-12-16 12:59:09.155310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.321 [2024-12-16 12:59:09.155363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.321 [2024-12-16 12:59:09.155375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.321 [2024-12-16 12:59:09.155381] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.321 [2024-12-16 12:59:09.155387] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.321 [2024-12-16 12:59:09.155401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.321 qpair failed and we were unable to recover it.
00:37:43.321 [2024-12-16 12:59:09.165273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.321 [2024-12-16 12:59:09.165326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.321 [2024-12-16 12:59:09.165339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.321 [2024-12-16 12:59:09.165345] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.321 [2024-12-16 12:59:09.165351] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.321 [2024-12-16 12:59:09.165365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.321 qpair failed and we were unable to recover it.
00:37:43.321 [2024-12-16 12:59:09.175266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.321 [2024-12-16 12:59:09.175320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.321 [2024-12-16 12:59:09.175334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.321 [2024-12-16 12:59:09.175340] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.321 [2024-12-16 12:59:09.175346] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.321 [2024-12-16 12:59:09.175360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.321 qpair failed and we were unable to recover it.
00:37:43.321 [2024-12-16 12:59:09.185355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.321 [2024-12-16 12:59:09.185410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.321 [2024-12-16 12:59:09.185423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.321 [2024-12-16 12:59:09.185429] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.321 [2024-12-16 12:59:09.185435] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.321 [2024-12-16 12:59:09.185449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.321 qpair failed and we were unable to recover it.
00:37:43.321 [2024-12-16 12:59:09.195408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.322 [2024-12-16 12:59:09.195480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.322 [2024-12-16 12:59:09.195493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.322 [2024-12-16 12:59:09.195500] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.322 [2024-12-16 12:59:09.195505] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.322 [2024-12-16 12:59:09.195519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.322 qpair failed and we were unable to recover it.
00:37:43.322 [2024-12-16 12:59:09.205435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.322 [2024-12-16 12:59:09.205492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.322 [2024-12-16 12:59:09.205506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.322 [2024-12-16 12:59:09.205512] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.322 [2024-12-16 12:59:09.205518] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.322 [2024-12-16 12:59:09.205531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.322 qpair failed and we were unable to recover it.
00:37:43.322 [2024-12-16 12:59:09.215436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.322 [2024-12-16 12:59:09.215488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.322 [2024-12-16 12:59:09.215502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.322 [2024-12-16 12:59:09.215508] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.322 [2024-12-16 12:59:09.215514] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.322 [2024-12-16 12:59:09.215527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.322 qpair failed and we were unable to recover it.
00:37:43.322 [2024-12-16 12:59:09.225461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.322 [2024-12-16 12:59:09.225514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.322 [2024-12-16 12:59:09.225528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.322 [2024-12-16 12:59:09.225537] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.322 [2024-12-16 12:59:09.225543] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.322 [2024-12-16 12:59:09.225557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.322 qpair failed and we were unable to recover it.
00:37:43.322 [2024-12-16 12:59:09.235481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.322 [2024-12-16 12:59:09.235538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.322 [2024-12-16 12:59:09.235551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.322 [2024-12-16 12:59:09.235557] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.322 [2024-12-16 12:59:09.235563] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.322 [2024-12-16 12:59:09.235576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.322 qpair failed and we were unable to recover it.
00:37:43.322 [2024-12-16 12:59:09.245518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.322 [2024-12-16 12:59:09.245575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.322 [2024-12-16 12:59:09.245588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.322 [2024-12-16 12:59:09.245594] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.322 [2024-12-16 12:59:09.245601] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.322 [2024-12-16 12:59:09.245614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.322 qpair failed and we were unable to recover it.
00:37:43.322 [2024-12-16 12:59:09.255468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.322 [2024-12-16 12:59:09.255522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.322 [2024-12-16 12:59:09.255535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.322 [2024-12-16 12:59:09.255541] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.322 [2024-12-16 12:59:09.255547] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.322 [2024-12-16 12:59:09.255561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.322 qpair failed and we were unable to recover it.
00:37:43.322 [2024-12-16 12:59:09.265569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.322 [2024-12-16 12:59:09.265648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.322 [2024-12-16 12:59:09.265661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.322 [2024-12-16 12:59:09.265667] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.322 [2024-12-16 12:59:09.265673] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.322 [2024-12-16 12:59:09.265686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.322 qpair failed and we were unable to recover it.
00:37:43.322 [2024-12-16 12:59:09.275536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.322 [2024-12-16 12:59:09.275587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.322 [2024-12-16 12:59:09.275601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.322 [2024-12-16 12:59:09.275607] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.322 [2024-12-16 12:59:09.275613] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.322 [2024-12-16 12:59:09.275627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.322 qpair failed and we were unable to recover it.
00:37:43.322 [2024-12-16 12:59:09.285656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.322 [2024-12-16 12:59:09.285743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.322 [2024-12-16 12:59:09.285755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.322 [2024-12-16 12:59:09.285762] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.322 [2024-12-16 12:59:09.285768] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.322 [2024-12-16 12:59:09.285782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.322 qpair failed and we were unable to recover it.
00:37:43.322 [2024-12-16 12:59:09.295696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.322 [2024-12-16 12:59:09.295767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.322 [2024-12-16 12:59:09.295780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.322 [2024-12-16 12:59:09.295786] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.322 [2024-12-16 12:59:09.295792] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.322 [2024-12-16 12:59:09.295805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.322 qpair failed and we were unable to recover it.
00:37:43.322 [2024-12-16 12:59:09.305619] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.322 [2024-12-16 12:59:09.305678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.322 [2024-12-16 12:59:09.305691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.322 [2024-12-16 12:59:09.305697] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.322 [2024-12-16 12:59:09.305703] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.322 [2024-12-16 12:59:09.305718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.322 qpair failed and we were unable to recover it.
00:37:43.322 [2024-12-16 12:59:09.315642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.322 [2024-12-16 12:59:09.315698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.322 [2024-12-16 12:59:09.315713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.322 [2024-12-16 12:59:09.315720] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.322 [2024-12-16 12:59:09.315726] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.322 [2024-12-16 12:59:09.315740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.322 qpair failed and we were unable to recover it.
00:37:43.323 [2024-12-16 12:59:09.325695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.323 [2024-12-16 12:59:09.325747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.323 [2024-12-16 12:59:09.325760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.323 [2024-12-16 12:59:09.325766] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.323 [2024-12-16 12:59:09.325772] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.323 [2024-12-16 12:59:09.325786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.323 qpair failed and we were unable to recover it.
00:37:43.323 [2024-12-16 12:59:09.335781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.323 [2024-12-16 12:59:09.335835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.323 [2024-12-16 12:59:09.335848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.323 [2024-12-16 12:59:09.335854] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.323 [2024-12-16 12:59:09.335860] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.323 [2024-12-16 12:59:09.335873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.323 qpair failed and we were unable to recover it.
00:37:43.323 [2024-12-16 12:59:09.345731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.323 [2024-12-16 12:59:09.345787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.323 [2024-12-16 12:59:09.345800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.323 [2024-12-16 12:59:09.345806] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.323 [2024-12-16 12:59:09.345812] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.323 [2024-12-16 12:59:09.345825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.323 qpair failed and we were unable to recover it.
00:37:43.323 [2024-12-16 12:59:09.355795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.323 [2024-12-16 12:59:09.355863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.323 [2024-12-16 12:59:09.355876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.323 [2024-12-16 12:59:09.355882] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.323 [2024-12-16 12:59:09.355888] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.323 [2024-12-16 12:59:09.355901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.323 qpair failed and we were unable to recover it.
00:37:43.323 [2024-12-16 12:59:09.365849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.323 [2024-12-16 12:59:09.365929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.323 [2024-12-16 12:59:09.365942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.323 [2024-12-16 12:59:09.365948] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.323 [2024-12-16 12:59:09.365954] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.323 [2024-12-16 12:59:09.365967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.323 qpair failed and we were unable to recover it.
00:37:43.323 [2024-12-16 12:59:09.375819] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.323 [2024-12-16 12:59:09.375876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.323 [2024-12-16 12:59:09.375888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.323 [2024-12-16 12:59:09.375894] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.323 [2024-12-16 12:59:09.375900] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.323 [2024-12-16 12:59:09.375913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.323 qpair failed and we were unable to recover it.
00:37:43.583 [2024-12-16 12:59:09.385885] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.583 [2024-12-16 12:59:09.385938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.583 [2024-12-16 12:59:09.385952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.583 [2024-12-16 12:59:09.385959] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.583 [2024-12-16 12:59:09.385965] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.583 [2024-12-16 12:59:09.385979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.583 qpair failed and we were unable to recover it.
00:37:43.583 [2024-12-16 12:59:09.395951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.583 [2024-12-16 12:59:09.396010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.583 [2024-12-16 12:59:09.396024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.583 [2024-12-16 12:59:09.396031] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.583 [2024-12-16 12:59:09.396036] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.583 [2024-12-16 12:59:09.396050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.583 qpair failed and we were unable to recover it.
00:37:43.583 [2024-12-16 12:59:09.406007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.583 [2024-12-16 12:59:09.406082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.583 [2024-12-16 12:59:09.406101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.583 [2024-12-16 12:59:09.406108] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.583 [2024-12-16 12:59:09.406118] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.583 [2024-12-16 12:59:09.406135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.583 qpair failed and we were unable to recover it.
00:37:43.583 [2024-12-16 12:59:09.415986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.583 [2024-12-16 12:59:09.416044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.583 [2024-12-16 12:59:09.416058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.583 [2024-12-16 12:59:09.416064] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.583 [2024-12-16 12:59:09.416070] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.583 [2024-12-16 12:59:09.416084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.583 qpair failed and we were unable to recover it.
00:37:43.583 [2024-12-16 12:59:09.426025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.584 [2024-12-16 12:59:09.426078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.584 [2024-12-16 12:59:09.426092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.584 [2024-12-16 12:59:09.426098] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.584 [2024-12-16 12:59:09.426104] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.584 [2024-12-16 12:59:09.426123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.584 qpair failed and we were unable to recover it.
00:37:43.584 [2024-12-16 12:59:09.435979] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.584 [2024-12-16 12:59:09.436035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.584 [2024-12-16 12:59:09.436048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.584 [2024-12-16 12:59:09.436055] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.584 [2024-12-16 12:59:09.436061] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.584 [2024-12-16 12:59:09.436074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.584 qpair failed and we were unable to recover it.
00:37:43.584 [2024-12-16 12:59:09.446088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.584 [2024-12-16 12:59:09.446160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.584 [2024-12-16 12:59:09.446173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.584 [2024-12-16 12:59:09.446180] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.584 [2024-12-16 12:59:09.446185] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.584 [2024-12-16 12:59:09.446204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.584 qpair failed and we were unable to recover it.
00:37:43.584 [2024-12-16 12:59:09.456078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.584 [2024-12-16 12:59:09.456133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.584 [2024-12-16 12:59:09.456146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.584 [2024-12-16 12:59:09.456152] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.584 [2024-12-16 12:59:09.456158] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.584 [2024-12-16 12:59:09.456172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.584 qpair failed and we were unable to recover it.
00:37:43.584 [2024-12-16 12:59:09.466134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.584 [2024-12-16 12:59:09.466188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.584 [2024-12-16 12:59:09.466201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.584 [2024-12-16 12:59:09.466207] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.584 [2024-12-16 12:59:09.466213] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.584 [2024-12-16 12:59:09.466227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.584 qpair failed and we were unable to recover it.
00:37:43.584 [2024-12-16 12:59:09.476176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.584 [2024-12-16 12:59:09.476227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.584 [2024-12-16 12:59:09.476241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.584 [2024-12-16 12:59:09.476247] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.584 [2024-12-16 12:59:09.476253] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.584 [2024-12-16 12:59:09.476267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.584 qpair failed and we were unable to recover it.
00:37:43.584 [2024-12-16 12:59:09.486121] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.584 [2024-12-16 12:59:09.486173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.584 [2024-12-16 12:59:09.486186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.584 [2024-12-16 12:59:09.486192] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.584 [2024-12-16 12:59:09.486198] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.584 [2024-12-16 12:59:09.486212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.584 qpair failed and we were unable to recover it.
00:37:43.584 [2024-12-16 12:59:09.496140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.584 [2024-12-16 12:59:09.496193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.584 [2024-12-16 12:59:09.496210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.584 [2024-12-16 12:59:09.496216] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.584 [2024-12-16 12:59:09.496222] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.584 [2024-12-16 12:59:09.496236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.584 qpair failed and we were unable to recover it.
00:37:43.584 [2024-12-16 12:59:09.506169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.584 [2024-12-16 12:59:09.506224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.584 [2024-12-16 12:59:09.506239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.584 [2024-12-16 12:59:09.506245] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.584 [2024-12-16 12:59:09.506251] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.584 [2024-12-16 12:59:09.506265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.584 qpair failed and we were unable to recover it.
00:37:43.584 [2024-12-16 12:59:09.516322] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.584 [2024-12-16 12:59:09.516384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.584 [2024-12-16 12:59:09.516398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.584 [2024-12-16 12:59:09.516404] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.584 [2024-12-16 12:59:09.516410] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.584 [2024-12-16 12:59:09.516424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.584 qpair failed and we were unable to recover it.
00:37:43.584 [2024-12-16 12:59:09.526237] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.584 [2024-12-16 12:59:09.526290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.584 [2024-12-16 12:59:09.526303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.584 [2024-12-16 12:59:09.526310] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.584 [2024-12-16 12:59:09.526316] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.584 [2024-12-16 12:59:09.526329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.584 qpair failed and we were unable to recover it.
00:37:43.584 [2024-12-16 12:59:09.536341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.584 [2024-12-16 12:59:09.536396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.584 [2024-12-16 12:59:09.536409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.584 [2024-12-16 12:59:09.536415] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.584 [2024-12-16 12:59:09.536421] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.584 [2024-12-16 12:59:09.536438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.584 qpair failed and we were unable to recover it.
00:37:43.584 [2024-12-16 12:59:09.546346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.585 [2024-12-16 12:59:09.546401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.585 [2024-12-16 12:59:09.546414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.585 [2024-12-16 12:59:09.546420] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.585 [2024-12-16 12:59:09.546425] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.585 [2024-12-16 12:59:09.546439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.585 qpair failed and we were unable to recover it.
00:37:43.585 [2024-12-16 12:59:09.556365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.585 [2024-12-16 12:59:09.556415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.585 [2024-12-16 12:59:09.556428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.585 [2024-12-16 12:59:09.556434] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.585 [2024-12-16 12:59:09.556439] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.585 [2024-12-16 12:59:09.556452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.585 qpair failed and we were unable to recover it.
00:37:43.585 [2024-12-16 12:59:09.566333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.585 [2024-12-16 12:59:09.566395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.585 [2024-12-16 12:59:09.566408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.585 [2024-12-16 12:59:09.566414] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.585 [2024-12-16 12:59:09.566420] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.585 [2024-12-16 12:59:09.566434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.585 qpair failed and we were unable to recover it.
00:37:43.585 [2024-12-16 12:59:09.576381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.585 [2024-12-16 12:59:09.576439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.585 [2024-12-16 12:59:09.576452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.585 [2024-12-16 12:59:09.576458] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.585 [2024-12-16 12:59:09.576464] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.585 [2024-12-16 12:59:09.576478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.585 qpair failed and we were unable to recover it.
00:37:43.585 [2024-12-16 12:59:09.586470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.585 [2024-12-16 12:59:09.586518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.585 [2024-12-16 12:59:09.586537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.585 [2024-12-16 12:59:09.586543] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.585 [2024-12-16 12:59:09.586549] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.585 [2024-12-16 12:59:09.586563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.585 qpair failed and we were unable to recover it.
00:37:43.585 [2024-12-16 12:59:09.596517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.585 [2024-12-16 12:59:09.596571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.585 [2024-12-16 12:59:09.596585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.585 [2024-12-16 12:59:09.596591] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.585 [2024-12-16 12:59:09.596597] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.585 [2024-12-16 12:59:09.596611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.585 qpair failed and we were unable to recover it.
00:37:43.585 [2024-12-16 12:59:09.606470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.585 [2024-12-16 12:59:09.606556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.585 [2024-12-16 12:59:09.606569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.585 [2024-12-16 12:59:09.606575] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.585 [2024-12-16 12:59:09.606581] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.585 [2024-12-16 12:59:09.606595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.585 qpair failed and we were unable to recover it.
00:37:43.585 [2024-12-16 12:59:09.616546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.585 [2024-12-16 12:59:09.616601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.585 [2024-12-16 12:59:09.616614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.585 [2024-12-16 12:59:09.616620] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.585 [2024-12-16 12:59:09.616626] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.585 [2024-12-16 12:59:09.616640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.585 qpair failed and we were unable to recover it.
00:37:43.585 [2024-12-16 12:59:09.626567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.585 [2024-12-16 12:59:09.626648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.585 [2024-12-16 12:59:09.626661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.585 [2024-12-16 12:59:09.626667] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.585 [2024-12-16 12:59:09.626673] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.585 [2024-12-16 12:59:09.626690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.585 qpair failed and we were unable to recover it.
00:37:43.585 [2024-12-16 12:59:09.636580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.585 [2024-12-16 12:59:09.636627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.585 [2024-12-16 12:59:09.636640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.585 [2024-12-16 12:59:09.636646] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.585 [2024-12-16 12:59:09.636652] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.585 [2024-12-16 12:59:09.636665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.585 qpair failed and we were unable to recover it.
00:37:43.585 [2024-12-16 12:59:09.646575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.585 [2024-12-16 12:59:09.646628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.585 [2024-12-16 12:59:09.646641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.585 [2024-12-16 12:59:09.646647] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.585 [2024-12-16 12:59:09.646653] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.585 [2024-12-16 12:59:09.646666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.585 qpair failed and we were unable to recover it.
00:37:43.846 [2024-12-16 12:59:09.656636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.846 [2024-12-16 12:59:09.656689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.846 [2024-12-16 12:59:09.656703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.846 [2024-12-16 12:59:09.656709] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.846 [2024-12-16 12:59:09.656715] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.846 [2024-12-16 12:59:09.656729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.846 qpair failed and we were unable to recover it.
00:37:43.846 [2024-12-16 12:59:09.666618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.846 [2024-12-16 12:59:09.666666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.846 [2024-12-16 12:59:09.666679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.846 [2024-12-16 12:59:09.666685] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.846 [2024-12-16 12:59:09.666691] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.846 [2024-12-16 12:59:09.666704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.846 qpair failed and we were unable to recover it.
00:37:43.846 [2024-12-16 12:59:09.676727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.846 [2024-12-16 12:59:09.676778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.846 [2024-12-16 12:59:09.676794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.846 [2024-12-16 12:59:09.676801] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.846 [2024-12-16 12:59:09.676806] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.846 [2024-12-16 12:59:09.676820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.846 qpair failed and we were unable to recover it.
00:37:43.846 [2024-12-16 12:59:09.686745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.846 [2024-12-16 12:59:09.686792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.846 [2024-12-16 12:59:09.686805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.847 [2024-12-16 12:59:09.686811] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.847 [2024-12-16 12:59:09.686817] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.847 [2024-12-16 12:59:09.686830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.847 qpair failed and we were unable to recover it.
00:37:43.847 [2024-12-16 12:59:09.696720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.847 [2024-12-16 12:59:09.696775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.847 [2024-12-16 12:59:09.696788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.847 [2024-12-16 12:59:09.696795] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.847 [2024-12-16 12:59:09.696801] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.847 [2024-12-16 12:59:09.696815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.847 qpair failed and we were unable to recover it.
00:37:43.847 [2024-12-16 12:59:09.706803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.847 [2024-12-16 12:59:09.706902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.847 [2024-12-16 12:59:09.706916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.847 [2024-12-16 12:59:09.706922] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.847 [2024-12-16 12:59:09.706928] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:43.847 [2024-12-16 12:59:09.706942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:43.847 qpair failed and we were unable to recover it.
00:37:43.847 [2024-12-16 12:59:09.716888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.847 [2024-12-16 12:59:09.716945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.847 [2024-12-16 12:59:09.716958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.847 [2024-12-16 12:59:09.716965] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.847 [2024-12-16 12:59:09.716974] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:43.847 [2024-12-16 12:59:09.716988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:43.847 qpair failed and we were unable to recover it. 00:37:43.847 [2024-12-16 12:59:09.726863] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.847 [2024-12-16 12:59:09.726912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.847 [2024-12-16 12:59:09.726926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.847 [2024-12-16 12:59:09.726932] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.847 [2024-12-16 12:59:09.726938] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:43.847 [2024-12-16 12:59:09.726951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:43.847 qpair failed and we were unable to recover it. 00:37:43.847 [2024-12-16 12:59:09.736904] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.847 [2024-12-16 12:59:09.736957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.847 [2024-12-16 12:59:09.736970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.847 [2024-12-16 12:59:09.736976] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.847 [2024-12-16 12:59:09.736982] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:43.847 [2024-12-16 12:59:09.736996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:43.847 qpair failed and we were unable to recover it. 
00:37:43.847 [2024-12-16 12:59:09.747016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.847 [2024-12-16 12:59:09.747099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.847 [2024-12-16 12:59:09.747116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.847 [2024-12-16 12:59:09.747122] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.847 [2024-12-16 12:59:09.747128] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:43.847 [2024-12-16 12:59:09.747142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:43.847 qpair failed and we were unable to recover it. 00:37:43.847 [2024-12-16 12:59:09.757005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.847 [2024-12-16 12:59:09.757061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.847 [2024-12-16 12:59:09.757073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.847 [2024-12-16 12:59:09.757079] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.847 [2024-12-16 12:59:09.757086] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:43.847 [2024-12-16 12:59:09.757099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:43.847 qpair failed and we were unable to recover it. 00:37:43.847 [2024-12-16 12:59:09.766988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.847 [2024-12-16 12:59:09.767035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.847 [2024-12-16 12:59:09.767053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.847 [2024-12-16 12:59:09.767059] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.847 [2024-12-16 12:59:09.767065] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:43.847 [2024-12-16 12:59:09.767079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:43.847 qpair failed and we were unable to recover it. 
00:37:43.847 [2024-12-16 12:59:09.777011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.847 [2024-12-16 12:59:09.777067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.847 [2024-12-16 12:59:09.777080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.847 [2024-12-16 12:59:09.777087] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.847 [2024-12-16 12:59:09.777092] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:43.847 [2024-12-16 12:59:09.777106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:43.847 qpair failed and we were unable to recover it. 00:37:43.847 [2024-12-16 12:59:09.787040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.847 [2024-12-16 12:59:09.787093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.847 [2024-12-16 12:59:09.787106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.847 [2024-12-16 12:59:09.787116] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.847 [2024-12-16 12:59:09.787123] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:43.847 [2024-12-16 12:59:09.787136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:43.847 qpair failed and we were unable to recover it. 00:37:43.847 [2024-12-16 12:59:09.797082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.847 [2024-12-16 12:59:09.797137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.847 [2024-12-16 12:59:09.797152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.847 [2024-12-16 12:59:09.797158] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.847 [2024-12-16 12:59:09.797164] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:43.847 [2024-12-16 12:59:09.797178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:43.847 qpair failed and we were unable to recover it. 
00:37:43.847 [2024-12-16 12:59:09.807093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.847 [2024-12-16 12:59:09.807149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.847 [2024-12-16 12:59:09.807162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.847 [2024-12-16 12:59:09.807169] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.847 [2024-12-16 12:59:09.807178] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:43.847 [2024-12-16 12:59:09.807192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:43.847 qpair failed and we were unable to recover it. 00:37:43.847 [2024-12-16 12:59:09.817137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.847 [2024-12-16 12:59:09.817194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.847 [2024-12-16 12:59:09.817207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.847 [2024-12-16 12:59:09.817213] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.847 [2024-12-16 12:59:09.817219] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:43.847 [2024-12-16 12:59:09.817233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:43.847 qpair failed and we were unable to recover it. 00:37:43.848 [2024-12-16 12:59:09.827165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.848 [2024-12-16 12:59:09.827217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.848 [2024-12-16 12:59:09.827229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.848 [2024-12-16 12:59:09.827236] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.848 [2024-12-16 12:59:09.827241] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:43.848 [2024-12-16 12:59:09.827255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:43.848 qpair failed and we were unable to recover it. 
00:37:43.848 [2024-12-16 12:59:09.837224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.848 [2024-12-16 12:59:09.837279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.848 [2024-12-16 12:59:09.837291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.848 [2024-12-16 12:59:09.837298] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.848 [2024-12-16 12:59:09.837303] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:43.848 [2024-12-16 12:59:09.837317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:43.848 qpair failed and we were unable to recover it. 00:37:43.848 [2024-12-16 12:59:09.847145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.848 [2024-12-16 12:59:09.847202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.848 [2024-12-16 12:59:09.847215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.848 [2024-12-16 12:59:09.847222] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.848 [2024-12-16 12:59:09.847228] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:43.848 [2024-12-16 12:59:09.847242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:43.848 qpair failed and we were unable to recover it. 00:37:43.848 [2024-12-16 12:59:09.857242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.848 [2024-12-16 12:59:09.857302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.848 [2024-12-16 12:59:09.857314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.848 [2024-12-16 12:59:09.857320] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.848 [2024-12-16 12:59:09.857326] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:43.848 [2024-12-16 12:59:09.857340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:43.848 qpair failed and we were unable to recover it. 
00:37:43.848 [2024-12-16 12:59:09.867277] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.848 [2024-12-16 12:59:09.867336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.848 [2024-12-16 12:59:09.867349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.848 [2024-12-16 12:59:09.867355] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.848 [2024-12-16 12:59:09.867361] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:43.848 [2024-12-16 12:59:09.867375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:43.848 qpair failed and we were unable to recover it. 00:37:43.848 [2024-12-16 12:59:09.877287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.848 [2024-12-16 12:59:09.877340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.848 [2024-12-16 12:59:09.877353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.848 [2024-12-16 12:59:09.877359] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.848 [2024-12-16 12:59:09.877365] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:43.848 [2024-12-16 12:59:09.877378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:43.848 qpair failed and we were unable to recover it. 00:37:43.848 [2024-12-16 12:59:09.887333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.848 [2024-12-16 12:59:09.887382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.848 [2024-12-16 12:59:09.887395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.848 [2024-12-16 12:59:09.887402] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.848 [2024-12-16 12:59:09.887407] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:43.848 [2024-12-16 12:59:09.887421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:43.848 qpair failed and we were unable to recover it. 
00:37:43.848 [2024-12-16 12:59:09.897375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.848 [2024-12-16 12:59:09.897428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.848 [2024-12-16 12:59:09.897441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.848 [2024-12-16 12:59:09.897447] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.848 [2024-12-16 12:59:09.897456] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:43.848 [2024-12-16 12:59:09.897470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:43.848 qpair failed and we were unable to recover it. 00:37:43.848 [2024-12-16 12:59:09.907405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.848 [2024-12-16 12:59:09.907460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.848 [2024-12-16 12:59:09.907474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.848 [2024-12-16 12:59:09.907481] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.848 [2024-12-16 12:59:09.907487] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:43.848 [2024-12-16 12:59:09.907502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:43.848 qpair failed and we were unable to recover it. 00:37:44.109 [2024-12-16 12:59:09.917472] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.109 [2024-12-16 12:59:09.917535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.109 [2024-12-16 12:59:09.917549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.109 [2024-12-16 12:59:09.917555] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.109 [2024-12-16 12:59:09.917561] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:44.109 [2024-12-16 12:59:09.917576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:44.109 qpair failed and we were unable to recover it. 
00:37:44.109 [2024-12-16 12:59:09.927454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.109 [2024-12-16 12:59:09.927510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.109 [2024-12-16 12:59:09.927523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.109 [2024-12-16 12:59:09.927529] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.109 [2024-12-16 12:59:09.927535] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:44.109 [2024-12-16 12:59:09.927549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:44.109 qpair failed and we were unable to recover it. 00:37:44.109 [2024-12-16 12:59:09.937402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.109 [2024-12-16 12:59:09.937456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.109 [2024-12-16 12:59:09.937469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.109 [2024-12-16 12:59:09.937475] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.109 [2024-12-16 12:59:09.937481] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:44.109 [2024-12-16 12:59:09.937494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:44.109 qpair failed and we were unable to recover it. 00:37:44.109 [2024-12-16 12:59:09.947513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.109 [2024-12-16 12:59:09.947572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.109 [2024-12-16 12:59:09.947584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.109 [2024-12-16 12:59:09.947590] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.109 [2024-12-16 12:59:09.947596] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:44.109 [2024-12-16 12:59:09.947609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:44.109 qpair failed and we were unable to recover it. 
00:37:44.109 [2024-12-16 12:59:09.957536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.109 [2024-12-16 12:59:09.957594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.109 [2024-12-16 12:59:09.957606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.109 [2024-12-16 12:59:09.957612] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.109 [2024-12-16 12:59:09.957618] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:44.109 [2024-12-16 12:59:09.957631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:44.109 qpair failed and we were unable to recover it. 00:37:44.109 [2024-12-16 12:59:09.967558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.109 [2024-12-16 12:59:09.967611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.109 [2024-12-16 12:59:09.967624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.109 [2024-12-16 12:59:09.967630] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.109 [2024-12-16 12:59:09.967636] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:44.109 [2024-12-16 12:59:09.967649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:44.109 qpair failed and we were unable to recover it. 00:37:44.109 [2024-12-16 12:59:09.977628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.109 [2024-12-16 12:59:09.977733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.109 [2024-12-16 12:59:09.977746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.109 [2024-12-16 12:59:09.977752] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.109 [2024-12-16 12:59:09.977758] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:44.110 [2024-12-16 12:59:09.977772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:44.110 qpair failed and we were unable to recover it. 
00:37:44.110 [2024-12-16 12:59:09.987659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.110 [2024-12-16 12:59:09.987713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.110 [2024-12-16 12:59:09.987727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.110 [2024-12-16 12:59:09.987734] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.110 [2024-12-16 12:59:09.987743] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:44.110 [2024-12-16 12:59:09.987757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:44.110 qpair failed and we were unable to recover it. 00:37:44.110 [2024-12-16 12:59:09.997661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.110 [2024-12-16 12:59:09.997715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.110 [2024-12-16 12:59:09.997728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.110 [2024-12-16 12:59:09.997734] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.110 [2024-12-16 12:59:09.997740] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:44.110 [2024-12-16 12:59:09.997753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:44.110 qpair failed and we were unable to recover it. 00:37:44.110 [2024-12-16 12:59:10.007720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.110 [2024-12-16 12:59:10.007835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.110 [2024-12-16 12:59:10.007856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.110 [2024-12-16 12:59:10.007865] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.110 [2024-12-16 12:59:10.007872] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:44.110 [2024-12-16 12:59:10.007891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:44.110 qpair failed and we were unable to recover it. 
00:37:44.110 [2024-12-16 12:59:10.017768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.110 [2024-12-16 12:59:10.017870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.110 [2024-12-16 12:59:10.017884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.110 [2024-12-16 12:59:10.017891] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.110 [2024-12-16 12:59:10.017897] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:44.110 [2024-12-16 12:59:10.017911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:44.110 qpair failed and we were unable to recover it. 00:37:44.110 [2024-12-16 12:59:10.027726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.110 [2024-12-16 12:59:10.027792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.110 [2024-12-16 12:59:10.027811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.110 [2024-12-16 12:59:10.027821] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.110 [2024-12-16 12:59:10.027830] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:44.110 [2024-12-16 12:59:10.027854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:44.110 qpair failed and we were unable to recover it. 00:37:44.110 [2024-12-16 12:59:10.037784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.110 [2024-12-16 12:59:10.037856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.110 [2024-12-16 12:59:10.037872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.110 [2024-12-16 12:59:10.037879] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.110 [2024-12-16 12:59:10.037885] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:44.110 [2024-12-16 12:59:10.037900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:44.110 qpair failed and we were unable to recover it. 
00:37:44.110 [2024-12-16 12:59:10.047888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.110 [2024-12-16 12:59:10.047997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.110 [2024-12-16 12:59:10.048011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.110 [2024-12-16 12:59:10.048018] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.110 [2024-12-16 12:59:10.048025] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:44.110 [2024-12-16 12:59:10.048039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:44.110 qpair failed and we were unable to recover it. 00:37:44.110 [2024-12-16 12:59:10.057894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.110 [2024-12-16 12:59:10.057999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.110 [2024-12-16 12:59:10.058014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.110 [2024-12-16 12:59:10.058020] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.110 [2024-12-16 12:59:10.058027] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:44.110 [2024-12-16 12:59:10.058040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:44.110 qpair failed and we were unable to recover it. 00:37:44.110 [2024-12-16 12:59:10.067928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.110 [2024-12-16 12:59:10.067994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.110 [2024-12-16 12:59:10.068010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.110 [2024-12-16 12:59:10.068017] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.110 [2024-12-16 12:59:10.068023] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:44.110 [2024-12-16 12:59:10.068038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:44.110 qpair failed and we were unable to recover it. 
00:37:44.110 [2024-12-16 12:59:10.077961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.110 [2024-12-16 12:59:10.078024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.110 [2024-12-16 12:59:10.078038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.110 [2024-12-16 12:59:10.078047] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.110 [2024-12-16 12:59:10.078053] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:44.110 [2024-12-16 12:59:10.078067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:44.110 qpair failed and we were unable to recover it. 00:37:44.110 [2024-12-16 12:59:10.087948] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.110 [2024-12-16 12:59:10.088007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.110 [2024-12-16 12:59:10.088021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.110 [2024-12-16 12:59:10.088029] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.110 [2024-12-16 12:59:10.088037] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:44.110 [2024-12-16 12:59:10.088051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:44.110 qpair failed and we were unable to recover it. 00:37:44.110 [2024-12-16 12:59:10.098048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.110 [2024-12-16 12:59:10.098125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.110 [2024-12-16 12:59:10.098139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.110 [2024-12-16 12:59:10.098145] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.110 [2024-12-16 12:59:10.098151] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:44.110 [2024-12-16 12:59:10.098165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:44.110 qpair failed and we were unable to recover it. 
00:37:44.110 [2024-12-16 12:59:10.108002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.110 [2024-12-16 12:59:10.108061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.110 [2024-12-16 12:59:10.108075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.110 [2024-12-16 12:59:10.108081] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.110 [2024-12-16 12:59:10.108087] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:44.110 [2024-12-16 12:59:10.108101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:44.110 qpair failed and we were unable to recover it. 00:37:44.110 [2024-12-16 12:59:10.118030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.110 [2024-12-16 12:59:10.118094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.110 [2024-12-16 12:59:10.118108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.110 [2024-12-16 12:59:10.118119] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.111 [2024-12-16 12:59:10.118125] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:44.111 [2024-12-16 12:59:10.118139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:44.111 qpair failed and we were unable to recover it. 00:37:44.111 [2024-12-16 12:59:10.128027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.111 [2024-12-16 12:59:10.128083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.111 [2024-12-16 12:59:10.128097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.111 [2024-12-16 12:59:10.128103] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.111 [2024-12-16 12:59:10.128109] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:44.111 [2024-12-16 12:59:10.128127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:44.111 qpair failed and we were unable to recover it. 
00:37:44.111 [2024-12-16 12:59:10.138074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.111 [2024-12-16 12:59:10.138145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.111 [2024-12-16 12:59:10.138159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.111 [2024-12-16 12:59:10.138165] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.111 [2024-12-16 12:59:10.138171] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:44.111 [2024-12-16 12:59:10.138186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:44.111 qpair failed and we were unable to recover it. 00:37:44.111 [2024-12-16 12:59:10.148076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.111 [2024-12-16 12:59:10.148134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.111 [2024-12-16 12:59:10.148148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.111 [2024-12-16 12:59:10.148154] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.111 [2024-12-16 12:59:10.148160] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:44.111 [2024-12-16 12:59:10.148173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:44.111 qpair failed and we were unable to recover it. 00:37:44.111 [2024-12-16 12:59:10.158116] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.111 [2024-12-16 12:59:10.158170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.111 [2024-12-16 12:59:10.158183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.111 [2024-12-16 12:59:10.158189] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.111 [2024-12-16 12:59:10.158195] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:44.111 [2024-12-16 12:59:10.158208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:44.111 qpair failed and we were unable to recover it. 
00:37:44.111 [2024-12-16 12:59:10.168142] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.111 [2024-12-16 12:59:10.168202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.111 [2024-12-16 12:59:10.168215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.111 [2024-12-16 12:59:10.168226] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.111 [2024-12-16 12:59:10.168231] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:44.111 [2024-12-16 12:59:10.168245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:44.111 qpair failed and we were unable to recover it. 00:37:44.372 [2024-12-16 12:59:10.178171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.372 [2024-12-16 12:59:10.178237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.372 [2024-12-16 12:59:10.178250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.372 [2024-12-16 12:59:10.178257] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.372 [2024-12-16 12:59:10.178263] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:44.372 [2024-12-16 12:59:10.178276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:44.372 qpair failed and we were unable to recover it. 00:37:44.372 [2024-12-16 12:59:10.188186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.372 [2024-12-16 12:59:10.188244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.372 [2024-12-16 12:59:10.188256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.372 [2024-12-16 12:59:10.188263] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.372 [2024-12-16 12:59:10.188269] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:44.372 [2024-12-16 12:59:10.188282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:44.372 qpair failed and we were unable to recover it. 
00:37:44.372 [2024-12-16 12:59:10.198222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.372 [2024-12-16 12:59:10.198286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.372 [2024-12-16 12:59:10.198300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.372 [2024-12-16 12:59:10.198306] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.372 [2024-12-16 12:59:10.198312] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:44.372 [2024-12-16 12:59:10.198326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:44.372 qpair failed and we were unable to recover it. 00:37:44.372 [2024-12-16 12:59:10.208238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.372 [2024-12-16 12:59:10.208294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.372 [2024-12-16 12:59:10.208307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.372 [2024-12-16 12:59:10.208314] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.372 [2024-12-16 12:59:10.208320] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:44.372 [2024-12-16 12:59:10.208334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:44.372 qpair failed and we were unable to recover it. 00:37:44.372 [2024-12-16 12:59:10.218306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.372 [2024-12-16 12:59:10.218374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.372 [2024-12-16 12:59:10.218388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.372 [2024-12-16 12:59:10.218394] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.372 [2024-12-16 12:59:10.218400] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:44.372 [2024-12-16 12:59:10.218414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:44.372 qpair failed and we were unable to recover it. 
00:37:44.372 [2024-12-16 12:59:10.228345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.372 [2024-12-16 12:59:10.228409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.372 [2024-12-16 12:59:10.228422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.372 [2024-12-16 12:59:10.228428] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.372 [2024-12-16 12:59:10.228434] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.372 [2024-12-16 12:59:10.228448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.372 qpair failed and we were unable to recover it.
00:37:44.372 [2024-12-16 12:59:10.238375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.372 [2024-12-16 12:59:10.238438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.372 [2024-12-16 12:59:10.238452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.372 [2024-12-16 12:59:10.238458] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.372 [2024-12-16 12:59:10.238463] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.372 [2024-12-16 12:59:10.238477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.372 qpair failed and we were unable to recover it.
00:37:44.372 [2024-12-16 12:59:10.248400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.372 [2024-12-16 12:59:10.248455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.372 [2024-12-16 12:59:10.248468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.372 [2024-12-16 12:59:10.248474] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.372 [2024-12-16 12:59:10.248481] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.372 [2024-12-16 12:59:10.248494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.372 qpair failed and we were unable to recover it.
00:37:44.372 [2024-12-16 12:59:10.258458] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.372 [2024-12-16 12:59:10.258520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.372 [2024-12-16 12:59:10.258532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.372 [2024-12-16 12:59:10.258542] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.372 [2024-12-16 12:59:10.258547] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.372 [2024-12-16 12:59:10.258561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.372 qpair failed and we were unable to recover it.
00:37:44.372 [2024-12-16 12:59:10.268433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.372 [2024-12-16 12:59:10.268518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.372 [2024-12-16 12:59:10.268531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.372 [2024-12-16 12:59:10.268537] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.372 [2024-12-16 12:59:10.268543] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.372 [2024-12-16 12:59:10.268556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.372 qpair failed and we were unable to recover it.
00:37:44.372 [2024-12-16 12:59:10.278456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.372 [2024-12-16 12:59:10.278506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.372 [2024-12-16 12:59:10.278519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.372 [2024-12-16 12:59:10.278526] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.372 [2024-12-16 12:59:10.278531] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.372 [2024-12-16 12:59:10.278545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.372 qpair failed and we were unable to recover it.
00:37:44.373 [2024-12-16 12:59:10.288489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.373 [2024-12-16 12:59:10.288580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.373 [2024-12-16 12:59:10.288593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.373 [2024-12-16 12:59:10.288600] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.373 [2024-12-16 12:59:10.288606] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.373 [2024-12-16 12:59:10.288619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.373 qpair failed and we were unable to recover it.
00:37:44.373 [2024-12-16 12:59:10.298526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.373 [2024-12-16 12:59:10.298590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.373 [2024-12-16 12:59:10.298603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.373 [2024-12-16 12:59:10.298610] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.373 [2024-12-16 12:59:10.298616] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.373 [2024-12-16 12:59:10.298629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.373 qpair failed and we were unable to recover it.
00:37:44.373 [2024-12-16 12:59:10.308532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.373 [2024-12-16 12:59:10.308619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.373 [2024-12-16 12:59:10.308633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.373 [2024-12-16 12:59:10.308639] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.373 [2024-12-16 12:59:10.308645] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.373 [2024-12-16 12:59:10.308659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.373 qpair failed and we were unable to recover it.
00:37:44.373 [2024-12-16 12:59:10.318587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.373 [2024-12-16 12:59:10.318643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.373 [2024-12-16 12:59:10.318656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.373 [2024-12-16 12:59:10.318663] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.373 [2024-12-16 12:59:10.318669] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.373 [2024-12-16 12:59:10.318682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.373 qpair failed and we were unable to recover it.
00:37:44.373 [2024-12-16 12:59:10.328582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.373 [2024-12-16 12:59:10.328637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.373 [2024-12-16 12:59:10.328650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.373 [2024-12-16 12:59:10.328656] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.373 [2024-12-16 12:59:10.328662] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.373 [2024-12-16 12:59:10.328675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.373 qpair failed and we were unable to recover it.
00:37:44.373 [2024-12-16 12:59:10.338620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.373 [2024-12-16 12:59:10.338677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.373 [2024-12-16 12:59:10.338690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.373 [2024-12-16 12:59:10.338696] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.373 [2024-12-16 12:59:10.338702] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.373 [2024-12-16 12:59:10.338716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.373 qpair failed and we were unable to recover it.
00:37:44.373 [2024-12-16 12:59:10.348656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.373 [2024-12-16 12:59:10.348717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.373 [2024-12-16 12:59:10.348729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.373 [2024-12-16 12:59:10.348738] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.373 [2024-12-16 12:59:10.348744] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.373 [2024-12-16 12:59:10.348758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.373 qpair failed and we were unable to recover it.
00:37:44.373 [2024-12-16 12:59:10.358717] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.373 [2024-12-16 12:59:10.358774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.373 [2024-12-16 12:59:10.358787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.373 [2024-12-16 12:59:10.358793] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.373 [2024-12-16 12:59:10.358799] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.373 [2024-12-16 12:59:10.358812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.373 qpair failed and we were unable to recover it.
00:37:44.373 [2024-12-16 12:59:10.368725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.373 [2024-12-16 12:59:10.368779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.373 [2024-12-16 12:59:10.368792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.373 [2024-12-16 12:59:10.368798] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.373 [2024-12-16 12:59:10.368804] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.373 [2024-12-16 12:59:10.368818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.373 qpair failed and we were unable to recover it.
00:37:44.373 [2024-12-16 12:59:10.378830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.373 [2024-12-16 12:59:10.378901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.373 [2024-12-16 12:59:10.378920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.373 [2024-12-16 12:59:10.378929] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.373 [2024-12-16 12:59:10.378936] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.373 [2024-12-16 12:59:10.378955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.373 qpair failed and we were unable to recover it.
00:37:44.373 [2024-12-16 12:59:10.388773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.373 [2024-12-16 12:59:10.388829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.373 [2024-12-16 12:59:10.388842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.373 [2024-12-16 12:59:10.388849] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.373 [2024-12-16 12:59:10.388855] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.373 [2024-12-16 12:59:10.388869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.373 qpair failed and we were unable to recover it.
00:37:44.373 [2024-12-16 12:59:10.398814] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.373 [2024-12-16 12:59:10.398869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.373 [2024-12-16 12:59:10.398885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.373 [2024-12-16 12:59:10.398891] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.373 [2024-12-16 12:59:10.398898] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.373 [2024-12-16 12:59:10.398913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.373 qpair failed and we were unable to recover it.
00:37:44.373 [2024-12-16 12:59:10.408818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.373 [2024-12-16 12:59:10.408875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.373 [2024-12-16 12:59:10.408889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.373 [2024-12-16 12:59:10.408896] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.373 [2024-12-16 12:59:10.408902] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.373 [2024-12-16 12:59:10.408916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.373 qpair failed and we were unable to recover it.
00:37:44.373 [2024-12-16 12:59:10.418895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.373 [2024-12-16 12:59:10.418994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.373 [2024-12-16 12:59:10.419009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.373 [2024-12-16 12:59:10.419015] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.373 [2024-12-16 12:59:10.419021] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.374 [2024-12-16 12:59:10.419035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.374 qpair failed and we were unable to recover it.
00:37:44.374 [2024-12-16 12:59:10.428922] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.374 [2024-12-16 12:59:10.428974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.374 [2024-12-16 12:59:10.428988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.374 [2024-12-16 12:59:10.428994] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.374 [2024-12-16 12:59:10.429000] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.374 [2024-12-16 12:59:10.429014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.374 qpair failed and we were unable to recover it.
00:37:44.634 [2024-12-16 12:59:10.438898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.634 [2024-12-16 12:59:10.438951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.634 [2024-12-16 12:59:10.438968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.634 [2024-12-16 12:59:10.438975] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.634 [2024-12-16 12:59:10.438981] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.634 [2024-12-16 12:59:10.438995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.634 qpair failed and we were unable to recover it.
00:37:44.634 [2024-12-16 12:59:10.448924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.634 [2024-12-16 12:59:10.448979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.634 [2024-12-16 12:59:10.448992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.634 [2024-12-16 12:59:10.448999] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.634 [2024-12-16 12:59:10.449005] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.634 [2024-12-16 12:59:10.449019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.634 qpair failed and we were unable to recover it.
00:37:44.634 [2024-12-16 12:59:10.459003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.635 [2024-12-16 12:59:10.459061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.635 [2024-12-16 12:59:10.459074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.635 [2024-12-16 12:59:10.459080] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.635 [2024-12-16 12:59:10.459087] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.635 [2024-12-16 12:59:10.459100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.635 qpair failed and we were unable to recover it.
00:37:44.635 [2024-12-16 12:59:10.468991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.635 [2024-12-16 12:59:10.469041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.635 [2024-12-16 12:59:10.469054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.635 [2024-12-16 12:59:10.469061] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.635 [2024-12-16 12:59:10.469067] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.635 [2024-12-16 12:59:10.469081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.635 qpair failed and we were unable to recover it.
00:37:44.635 [2024-12-16 12:59:10.479005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.635 [2024-12-16 12:59:10.479084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.635 [2024-12-16 12:59:10.479097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.635 [2024-12-16 12:59:10.479104] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.635 [2024-12-16 12:59:10.479110] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.635 [2024-12-16 12:59:10.479127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.635 qpair failed and we were unable to recover it.
00:37:44.635 [2024-12-16 12:59:10.489031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.635 [2024-12-16 12:59:10.489083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.635 [2024-12-16 12:59:10.489097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.635 [2024-12-16 12:59:10.489103] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.635 [2024-12-16 12:59:10.489109] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.635 [2024-12-16 12:59:10.489127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.635 qpair failed and we were unable to recover it.
00:37:44.635 [2024-12-16 12:59:10.499066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.635 [2024-12-16 12:59:10.499125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.635 [2024-12-16 12:59:10.499138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.635 [2024-12-16 12:59:10.499144] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.635 [2024-12-16 12:59:10.499150] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.635 [2024-12-16 12:59:10.499164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.635 qpair failed and we were unable to recover it.
00:37:44.635 [2024-12-16 12:59:10.509082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.635 [2024-12-16 12:59:10.509143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.635 [2024-12-16 12:59:10.509156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.635 [2024-12-16 12:59:10.509163] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.635 [2024-12-16 12:59:10.509169] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.635 [2024-12-16 12:59:10.509183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.635 qpair failed and we were unable to recover it.
00:37:44.635 [2024-12-16 12:59:10.519118] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.635 [2024-12-16 12:59:10.519172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.635 [2024-12-16 12:59:10.519184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.635 [2024-12-16 12:59:10.519190] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.635 [2024-12-16 12:59:10.519196] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.635 [2024-12-16 12:59:10.519209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.635 qpair failed and we were unable to recover it.
00:37:44.635 [2024-12-16 12:59:10.529071] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.635 [2024-12-16 12:59:10.529137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.635 [2024-12-16 12:59:10.529155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.635 [2024-12-16 12:59:10.529161] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.635 [2024-12-16 12:59:10.529167] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.635 [2024-12-16 12:59:10.529183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.635 qpair failed and we were unable to recover it.
00:37:44.635 [2024-12-16 12:59:10.539173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.635 [2024-12-16 12:59:10.539255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.635 [2024-12-16 12:59:10.539268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.635 [2024-12-16 12:59:10.539275] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.635 [2024-12-16 12:59:10.539280] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.635 [2024-12-16 12:59:10.539294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.635 qpair failed and we were unable to recover it.
00:37:44.635 [2024-12-16 12:59:10.549202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.635 [2024-12-16 12:59:10.549259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.635 [2024-12-16 12:59:10.549272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.635 [2024-12-16 12:59:10.549278] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.635 [2024-12-16 12:59:10.549284] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.635 [2024-12-16 12:59:10.549297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.635 qpair failed and we were unable to recover it.
00:37:44.635 [2024-12-16 12:59:10.559266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.635 [2024-12-16 12:59:10.559328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.635 [2024-12-16 12:59:10.559341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.635 [2024-12-16 12:59:10.559347] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.635 [2024-12-16 12:59:10.559353] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.635 [2024-12-16 12:59:10.559366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.635 qpair failed and we were unable to recover it.
00:37:44.635 [2024-12-16 12:59:10.569294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.635 [2024-12-16 12:59:10.569360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.635 [2024-12-16 12:59:10.569373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.635 [2024-12-16 12:59:10.569379] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.635 [2024-12-16 12:59:10.569385] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.635 [2024-12-16 12:59:10.569399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.635 qpair failed and we were unable to recover it.
00:37:44.635 [2024-12-16 12:59:10.579299] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.635 [2024-12-16 12:59:10.579374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.635 [2024-12-16 12:59:10.579387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.635 [2024-12-16 12:59:10.579393] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.635 [2024-12-16 12:59:10.579399] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.635 [2024-12-16 12:59:10.579413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.635 qpair failed and we were unable to recover it.
00:37:44.635 [2024-12-16 12:59:10.589286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.635 [2024-12-16 12:59:10.589362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.635 [2024-12-16 12:59:10.589376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.635 [2024-12-16 12:59:10.589382] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.635 [2024-12-16 12:59:10.589388] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.636 [2024-12-16 12:59:10.589401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.636 qpair failed and we were unable to recover it.
00:37:44.636 [2024-12-16 12:59:10.599337] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.636 [2024-12-16 12:59:10.599385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.636 [2024-12-16 12:59:10.599398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.636 [2024-12-16 12:59:10.599404] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.636 [2024-12-16 12:59:10.599409] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.636 [2024-12-16 12:59:10.599423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.636 qpair failed and we were unable to recover it.
00:37:44.636 [2024-12-16 12:59:10.609400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.636 [2024-12-16 12:59:10.609454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.636 [2024-12-16 12:59:10.609467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.636 [2024-12-16 12:59:10.609474] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.636 [2024-12-16 12:59:10.609480] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.636 [2024-12-16 12:59:10.609493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.636 qpair failed and we were unable to recover it.
00:37:44.636 [2024-12-16 12:59:10.619421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.636 [2024-12-16 12:59:10.619477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.636 [2024-12-16 12:59:10.619493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.636 [2024-12-16 12:59:10.619500] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.636 [2024-12-16 12:59:10.619506] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.636 [2024-12-16 12:59:10.619519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.636 qpair failed and we were unable to recover it.
00:37:44.636 [2024-12-16 12:59:10.629364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.636 [2024-12-16 12:59:10.629417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.636 [2024-12-16 12:59:10.629430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.636 [2024-12-16 12:59:10.629437] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.636 [2024-12-16 12:59:10.629443] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.636 [2024-12-16 12:59:10.629457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.636 qpair failed and we were unable to recover it.
00:37:44.636 [2024-12-16 12:59:10.639451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.636 [2024-12-16 12:59:10.639507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.636 [2024-12-16 12:59:10.639520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.636 [2024-12-16 12:59:10.639527] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.636 [2024-12-16 12:59:10.639532] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.636 [2024-12-16 12:59:10.639545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.636 qpair failed and we were unable to recover it.
00:37:44.636 [2024-12-16 12:59:10.649479] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.636 [2024-12-16 12:59:10.649531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.636 [2024-12-16 12:59:10.649544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.636 [2024-12-16 12:59:10.649550] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.636 [2024-12-16 12:59:10.649556] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.636 [2024-12-16 12:59:10.649570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.636 qpair failed and we were unable to recover it.
00:37:44.636 [2024-12-16 12:59:10.659553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.636 [2024-12-16 12:59:10.659607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.636 [2024-12-16 12:59:10.659620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.636 [2024-12-16 12:59:10.659626] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.636 [2024-12-16 12:59:10.659632] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.636 [2024-12-16 12:59:10.659649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.636 qpair failed and we were unable to recover it.
00:37:44.636 [2024-12-16 12:59:10.669543] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.636 [2024-12-16 12:59:10.669596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.636 [2024-12-16 12:59:10.669609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.636 [2024-12-16 12:59:10.669615] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.636 [2024-12-16 12:59:10.669621] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.636 [2024-12-16 12:59:10.669634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.636 qpair failed and we were unable to recover it.
00:37:44.636 [2024-12-16 12:59:10.679507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.636 [2024-12-16 12:59:10.679598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.636 [2024-12-16 12:59:10.679611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.636 [2024-12-16 12:59:10.679617] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.636 [2024-12-16 12:59:10.679623] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.636 [2024-12-16 12:59:10.679636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.636 qpair failed and we were unable to recover it.
00:37:44.636 [2024-12-16 12:59:10.689537] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.636 [2024-12-16 12:59:10.689594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.636 [2024-12-16 12:59:10.689607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.636 [2024-12-16 12:59:10.689613] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.636 [2024-12-16 12:59:10.689619] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.636 [2024-12-16 12:59:10.689632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.636 qpair failed and we were unable to recover it.
00:37:44.897 [2024-12-16 12:59:10.699658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.897 [2024-12-16 12:59:10.699716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.897 [2024-12-16 12:59:10.699729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.897 [2024-12-16 12:59:10.699735] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.897 [2024-12-16 12:59:10.699741] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.897 [2024-12-16 12:59:10.699755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.897 qpair failed and we were unable to recover it.
00:37:44.897 [2024-12-16 12:59:10.709656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.897 [2024-12-16 12:59:10.709714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.897 [2024-12-16 12:59:10.709730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.897 [2024-12-16 12:59:10.709753] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.897 [2024-12-16 12:59:10.709759] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.897 [2024-12-16 12:59:10.709774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.897 qpair failed and we were unable to recover it.
00:37:44.897 [2024-12-16 12:59:10.719695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.897 [2024-12-16 12:59:10.719792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.897 [2024-12-16 12:59:10.719805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.897 [2024-12-16 12:59:10.719811] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.897 [2024-12-16 12:59:10.719818] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.897 [2024-12-16 12:59:10.719832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.897 qpair failed and we were unable to recover it.
00:37:44.897 [2024-12-16 12:59:10.729656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.897 [2024-12-16 12:59:10.729705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.897 [2024-12-16 12:59:10.729718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.897 [2024-12-16 12:59:10.729724] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.897 [2024-12-16 12:59:10.729730] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.897 [2024-12-16 12:59:10.729743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.897 qpair failed and we were unable to recover it.
00:37:44.897 [2024-12-16 12:59:10.739746] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.897 [2024-12-16 12:59:10.739824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.897 [2024-12-16 12:59:10.739837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.897 [2024-12-16 12:59:10.739844] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.897 [2024-12-16 12:59:10.739850] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.897 [2024-12-16 12:59:10.739864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.897 qpair failed and we were unable to recover it.
00:37:44.897 [2024-12-16 12:59:10.749770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.897 [2024-12-16 12:59:10.749821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.897 [2024-12-16 12:59:10.749835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.897 [2024-12-16 12:59:10.749842] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.897 [2024-12-16 12:59:10.749848] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.897 [2024-12-16 12:59:10.749865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.897 qpair failed and we were unable to recover it.
00:37:44.897 [2024-12-16 12:59:10.759861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.897 [2024-12-16 12:59:10.759910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.897 [2024-12-16 12:59:10.759922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.897 [2024-12-16 12:59:10.759929] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.897 [2024-12-16 12:59:10.759934] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.897 [2024-12-16 12:59:10.759948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.897 qpair failed and we were unable to recover it.
00:37:44.897 [2024-12-16 12:59:10.769796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.897 [2024-12-16 12:59:10.769881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.897 [2024-12-16 12:59:10.769894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.897 [2024-12-16 12:59:10.769901] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.897 [2024-12-16 12:59:10.769907] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.897 [2024-12-16 12:59:10.769920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.897 qpair failed and we were unable to recover it.
00:37:44.897 [2024-12-16 12:59:10.779891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.897 [2024-12-16 12:59:10.779946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.897 [2024-12-16 12:59:10.779959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.897 [2024-12-16 12:59:10.779965] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.897 [2024-12-16 12:59:10.779971] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.897 [2024-12-16 12:59:10.779984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.897 qpair failed and we were unable to recover it.
00:37:44.897 [2024-12-16 12:59:10.789899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.897 [2024-12-16 12:59:10.789955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.897 [2024-12-16 12:59:10.789969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.897 [2024-12-16 12:59:10.789976] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.898 [2024-12-16 12:59:10.789982] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.898 [2024-12-16 12:59:10.789996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.898 qpair failed and we were unable to recover it.
00:37:44.898 [2024-12-16 12:59:10.799930] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.898 [2024-12-16 12:59:10.799989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.898 [2024-12-16 12:59:10.800005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.898 [2024-12-16 12:59:10.800012] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.898 [2024-12-16 12:59:10.800017] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.898 [2024-12-16 12:59:10.800031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.898 qpair failed and we were unable to recover it.
00:37:44.898 [2024-12-16 12:59:10.809888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.898 [2024-12-16 12:59:10.809943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.898 [2024-12-16 12:59:10.809956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.898 [2024-12-16 12:59:10.809962] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.898 [2024-12-16 12:59:10.809968] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.898 [2024-12-16 12:59:10.809982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.898 qpair failed and we were unable to recover it.
00:37:44.898 [2024-12-16 12:59:10.820016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.898 [2024-12-16 12:59:10.820092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.898 [2024-12-16 12:59:10.820105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.898 [2024-12-16 12:59:10.820116] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.898 [2024-12-16 12:59:10.820123] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.898 [2024-12-16 12:59:10.820138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.898 qpair failed and we were unable to recover it.
00:37:44.898 [2024-12-16 12:59:10.830027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.898 [2024-12-16 12:59:10.830079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.898 [2024-12-16 12:59:10.830092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.898 [2024-12-16 12:59:10.830098] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.898 [2024-12-16 12:59:10.830104] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.898 [2024-12-16 12:59:10.830122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.898 qpair failed and we were unable to recover it.
00:37:44.898 [2024-12-16 12:59:10.840039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.898 [2024-12-16 12:59:10.840103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.898 [2024-12-16 12:59:10.840123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.898 [2024-12-16 12:59:10.840129] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.898 [2024-12-16 12:59:10.840135] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.898 [2024-12-16 12:59:10.840152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.898 qpair failed and we were unable to recover it.
00:37:44.898 [2024-12-16 12:59:10.850066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.898 [2024-12-16 12:59:10.850142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.898 [2024-12-16 12:59:10.850156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.898 [2024-12-16 12:59:10.850162] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.898 [2024-12-16 12:59:10.850167] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.898 [2024-12-16 12:59:10.850182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.898 qpair failed and we were unable to recover it.
00:37:44.898 [2024-12-16 12:59:10.860077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.898 [2024-12-16 12:59:10.860161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.898 [2024-12-16 12:59:10.860174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.898 [2024-12-16 12:59:10.860180] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.898 [2024-12-16 12:59:10.860186] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.898 [2024-12-16 12:59:10.860199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.898 qpair failed and we were unable to recover it.
00:37:44.898 [2024-12-16 12:59:10.870068] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.898 [2024-12-16 12:59:10.870125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.898 [2024-12-16 12:59:10.870139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.898 [2024-12-16 12:59:10.870145] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.898 [2024-12-16 12:59:10.870151] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.898 [2024-12-16 12:59:10.870164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.898 qpair failed and we were unable to recover it.
00:37:44.898 [2024-12-16 12:59:10.880146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.898 [2024-12-16 12:59:10.880199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.898 [2024-12-16 12:59:10.880212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.898 [2024-12-16 12:59:10.880219] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.898 [2024-12-16 12:59:10.880224] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.898 [2024-12-16 12:59:10.880239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.898 qpair failed and we were unable to recover it.
00:37:44.898 [2024-12-16 12:59:10.890229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.898 [2024-12-16 12:59:10.890287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.898 [2024-12-16 12:59:10.890303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.898 [2024-12-16 12:59:10.890309] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.898 [2024-12-16 12:59:10.890315] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.898 [2024-12-16 12:59:10.890329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.898 qpair failed and we were unable to recover it.
00:37:44.898 [2024-12-16 12:59:10.900144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.898 [2024-12-16 12:59:10.900200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.898 [2024-12-16 12:59:10.900214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.898 [2024-12-16 12:59:10.900220] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.898 [2024-12-16 12:59:10.900225] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.898 [2024-12-16 12:59:10.900239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.898 qpair failed and we were unable to recover it.
00:37:44.898 [2024-12-16 12:59:10.910227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:44.898 [2024-12-16 12:59:10.910280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:44.898 [2024-12-16 12:59:10.910294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:44.898 [2024-12-16 12:59:10.910301] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:44.898 [2024-12-16 12:59:10.910307] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110
00:37:44.898 [2024-12-16 12:59:10.910321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.898 qpair failed and we were unable to recover it.
00:37:44.898 [2024-12-16 12:59:10.920277] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.898 [2024-12-16 12:59:10.920327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.898 [2024-12-16 12:59:10.920340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.898 [2024-12-16 12:59:10.920346] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.898 [2024-12-16 12:59:10.920352] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:44.898 [2024-12-16 12:59:10.920366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:44.898 qpair failed and we were unable to recover it. 00:37:44.898 [2024-12-16 12:59:10.930222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.898 [2024-12-16 12:59:10.930278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.899 [2024-12-16 12:59:10.930291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.899 [2024-12-16 12:59:10.930297] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.899 [2024-12-16 12:59:10.930307] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:44.899 [2024-12-16 12:59:10.930320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:44.899 qpair failed and we were unable to recover it. 00:37:44.899 [2024-12-16 12:59:10.940339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.899 [2024-12-16 12:59:10.940407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.899 [2024-12-16 12:59:10.940423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.899 [2024-12-16 12:59:10.940429] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.899 [2024-12-16 12:59:10.940435] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:44.899 [2024-12-16 12:59:10.940450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:44.899 qpair failed and we were unable to recover it. 
00:37:44.899 [2024-12-16 12:59:10.950397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.899 [2024-12-16 12:59:10.950450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.899 [2024-12-16 12:59:10.950463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.899 [2024-12-16 12:59:10.950470] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.899 [2024-12-16 12:59:10.950475] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:44.899 [2024-12-16 12:59:10.950490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:44.899 qpair failed and we were unable to recover it. 00:37:44.899 [2024-12-16 12:59:10.960412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.899 [2024-12-16 12:59:10.960465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.899 [2024-12-16 12:59:10.960478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.899 [2024-12-16 12:59:10.960485] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.899 [2024-12-16 12:59:10.960491] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:44.899 [2024-12-16 12:59:10.960504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:44.899 qpair failed and we were unable to recover it. 00:37:45.159 [2024-12-16 12:59:10.970428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.159 [2024-12-16 12:59:10.970480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.159 [2024-12-16 12:59:10.970494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.159 [2024-12-16 12:59:10.970500] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.159 [2024-12-16 12:59:10.970506] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:45.159 [2024-12-16 12:59:10.970520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:45.159 qpair failed and we were unable to recover it. 
00:37:45.159 [2024-12-16 12:59:10.980367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.159 [2024-12-16 12:59:10.980422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.159 [2024-12-16 12:59:10.980438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.159 [2024-12-16 12:59:10.980444] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.159 [2024-12-16 12:59:10.980449] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:45.159 [2024-12-16 12:59:10.980463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:45.159 qpair failed and we were unable to recover it. 00:37:45.159 [2024-12-16 12:59:10.990468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.159 [2024-12-16 12:59:10.990522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.159 [2024-12-16 12:59:10.990535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.159 [2024-12-16 12:59:10.990541] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.159 [2024-12-16 12:59:10.990547] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:45.159 [2024-12-16 12:59:10.990561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:45.159 qpair failed and we were unable to recover it. 00:37:45.159 [2024-12-16 12:59:11.000474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.159 [2024-12-16 12:59:11.000530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.159 [2024-12-16 12:59:11.000543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.159 [2024-12-16 12:59:11.000549] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.159 [2024-12-16 12:59:11.000555] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:45.159 [2024-12-16 12:59:11.000568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:45.159 qpair failed and we were unable to recover it. 
00:37:45.159 [2024-12-16 12:59:11.010554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.159 [2024-12-16 12:59:11.010608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.159 [2024-12-16 12:59:11.010622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.160 [2024-12-16 12:59:11.010628] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.160 [2024-12-16 12:59:11.010634] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:45.160 [2024-12-16 12:59:11.010648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:45.160 qpair failed and we were unable to recover it. 00:37:45.160 [2024-12-16 12:59:11.020582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.160 [2024-12-16 12:59:11.020637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.160 [2024-12-16 12:59:11.020650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.160 [2024-12-16 12:59:11.020656] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.160 [2024-12-16 12:59:11.020665] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:45.160 [2024-12-16 12:59:11.020679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:45.160 qpair failed and we were unable to recover it. 00:37:45.160 [2024-12-16 12:59:11.030602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.160 [2024-12-16 12:59:11.030657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.160 [2024-12-16 12:59:11.030670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.160 [2024-12-16 12:59:11.030676] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.160 [2024-12-16 12:59:11.030682] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:45.160 [2024-12-16 12:59:11.030696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:45.160 qpair failed and we were unable to recover it. 
00:37:45.160 [2024-12-16 12:59:11.040617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.160 [2024-12-16 12:59:11.040666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.160 [2024-12-16 12:59:11.040679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.160 [2024-12-16 12:59:11.040685] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.160 [2024-12-16 12:59:11.040691] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:45.160 [2024-12-16 12:59:11.040705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:45.160 qpair failed and we were unable to recover it. 00:37:45.160 [2024-12-16 12:59:11.050557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.160 [2024-12-16 12:59:11.050614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.160 [2024-12-16 12:59:11.050627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.160 [2024-12-16 12:59:11.050634] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.160 [2024-12-16 12:59:11.050640] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:45.160 [2024-12-16 12:59:11.050654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:45.160 qpair failed and we were unable to recover it. 00:37:45.160 [2024-12-16 12:59:11.060633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.160 [2024-12-16 12:59:11.060688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.160 [2024-12-16 12:59:11.060701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.160 [2024-12-16 12:59:11.060707] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.160 [2024-12-16 12:59:11.060713] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:45.160 [2024-12-16 12:59:11.060726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:45.160 qpair failed and we were unable to recover it. 
00:37:45.160 [2024-12-16 12:59:11.070693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.160 [2024-12-16 12:59:11.070750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.160 [2024-12-16 12:59:11.070764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.160 [2024-12-16 12:59:11.070770] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.160 [2024-12-16 12:59:11.070776] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:45.160 [2024-12-16 12:59:11.070789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:45.160 qpair failed and we were unable to recover it. 00:37:45.160 [2024-12-16 12:59:11.080677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.160 [2024-12-16 12:59:11.080737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.160 [2024-12-16 12:59:11.080749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.160 [2024-12-16 12:59:11.080756] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.160 [2024-12-16 12:59:11.080762] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:45.160 [2024-12-16 12:59:11.080775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:45.160 qpair failed and we were unable to recover it. 00:37:45.160 [2024-12-16 12:59:11.090753] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.160 [2024-12-16 12:59:11.090814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.160 [2024-12-16 12:59:11.090827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.160 [2024-12-16 12:59:11.090833] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.160 [2024-12-16 12:59:11.090839] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:45.160 [2024-12-16 12:59:11.090852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:45.160 qpair failed and we were unable to recover it. 
00:37:45.160 [2024-12-16 12:59:11.100767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.160 [2024-12-16 12:59:11.100851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.160 [2024-12-16 12:59:11.100863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.160 [2024-12-16 12:59:11.100870] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.160 [2024-12-16 12:59:11.100875] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:45.160 [2024-12-16 12:59:11.100889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:45.160 qpair failed and we were unable to recover it. 00:37:45.160 [2024-12-16 12:59:11.110788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.160 [2024-12-16 12:59:11.110842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.160 [2024-12-16 12:59:11.110855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.160 [2024-12-16 12:59:11.110861] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.160 [2024-12-16 12:59:11.110873] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:45.160 [2024-12-16 12:59:11.110887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:45.160 qpair failed and we were unable to recover it. 00:37:45.160 [2024-12-16 12:59:11.120817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.160 [2024-12-16 12:59:11.120866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.160 [2024-12-16 12:59:11.120879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.160 [2024-12-16 12:59:11.120886] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.160 [2024-12-16 12:59:11.120891] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:45.160 [2024-12-16 12:59:11.120905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:45.160 qpair failed and we were unable to recover it. 
00:37:45.160 [2024-12-16 12:59:11.130843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.160 [2024-12-16 12:59:11.130896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.160 [2024-12-16 12:59:11.130910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.160 [2024-12-16 12:59:11.130916] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.160 [2024-12-16 12:59:11.130922] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:45.160 [2024-12-16 12:59:11.130935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:45.160 qpair failed and we were unable to recover it. 00:37:45.160 [2024-12-16 12:59:11.140875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.160 [2024-12-16 12:59:11.140965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.160 [2024-12-16 12:59:11.140978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.160 [2024-12-16 12:59:11.140984] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.160 [2024-12-16 12:59:11.140990] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:45.160 [2024-12-16 12:59:11.141003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:45.160 qpair failed and we were unable to recover it. 00:37:45.160 [2024-12-16 12:59:11.150901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.160 [2024-12-16 12:59:11.150952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.161 [2024-12-16 12:59:11.150966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.161 [2024-12-16 12:59:11.150972] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.161 [2024-12-16 12:59:11.150978] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:45.161 [2024-12-16 12:59:11.150991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:45.161 qpair failed and we were unable to recover it. 
00:37:45.161 [2024-12-16 12:59:11.160921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.161 [2024-12-16 12:59:11.161001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.161 [2024-12-16 12:59:11.161015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.161 [2024-12-16 12:59:11.161021] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.161 [2024-12-16 12:59:11.161026] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:45.161 [2024-12-16 12:59:11.161040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:45.161 qpair failed and we were unable to recover it. 00:37:45.161 [2024-12-16 12:59:11.170954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.161 [2024-12-16 12:59:11.171026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.161 [2024-12-16 12:59:11.171039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.161 [2024-12-16 12:59:11.171046] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.161 [2024-12-16 12:59:11.171052] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:45.161 [2024-12-16 12:59:11.171065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:45.161 qpair failed and we were unable to recover it. 00:37:45.161 [2024-12-16 12:59:11.180996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.161 [2024-12-16 12:59:11.181051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.161 [2024-12-16 12:59:11.181064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.161 [2024-12-16 12:59:11.181070] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.161 [2024-12-16 12:59:11.181076] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:45.161 [2024-12-16 12:59:11.181090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:45.161 qpair failed and we were unable to recover it. 
00:37:45.161 [2024-12-16 12:59:11.191069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.161 [2024-12-16 12:59:11.191126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.161 [2024-12-16 12:59:11.191140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.161 [2024-12-16 12:59:11.191146] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.161 [2024-12-16 12:59:11.191152] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:45.161 [2024-12-16 12:59:11.191166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:45.161 qpair failed and we were unable to recover it. 00:37:45.161 [2024-12-16 12:59:11.201092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.161 [2024-12-16 12:59:11.201193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.161 [2024-12-16 12:59:11.201206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.161 [2024-12-16 12:59:11.201212] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.161 [2024-12-16 12:59:11.201221] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:45.161 [2024-12-16 12:59:11.201235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:45.161 qpair failed and we were unable to recover it. 00:37:45.161 [2024-12-16 12:59:11.211066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.161 [2024-12-16 12:59:11.211125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.161 [2024-12-16 12:59:11.211140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.161 [2024-12-16 12:59:11.211146] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.161 [2024-12-16 12:59:11.211151] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:45.161 [2024-12-16 12:59:11.211166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:45.161 qpair failed and we were unable to recover it. 
00:37:45.161 [2024-12-16 12:59:11.221137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.161 [2024-12-16 12:59:11.221198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.161 [2024-12-16 12:59:11.221211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.161 [2024-12-16 12:59:11.221218] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.161 [2024-12-16 12:59:11.221223] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:45.161 [2024-12-16 12:59:11.221237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:45.161 qpair failed and we were unable to recover it. 00:37:45.421 [2024-12-16 12:59:11.231141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.421 [2024-12-16 12:59:11.231195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.421 [2024-12-16 12:59:11.231208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.421 [2024-12-16 12:59:11.231215] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.421 [2024-12-16 12:59:11.231221] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:45.421 [2024-12-16 12:59:11.231235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:45.421 qpair failed and we were unable to recover it. 00:37:45.421 [2024-12-16 12:59:11.241206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.421 [2024-12-16 12:59:11.241304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.421 [2024-12-16 12:59:11.241318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.421 [2024-12-16 12:59:11.241324] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.421 [2024-12-16 12:59:11.241330] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:45.421 [2024-12-16 12:59:11.241343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:45.421 qpair failed and we were unable to recover it. 
00:37:45.421 [2024-12-16 12:59:11.251192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.421 [2024-12-16 12:59:11.251265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.421 [2024-12-16 12:59:11.251279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.421 [2024-12-16 12:59:11.251285] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.421 [2024-12-16 12:59:11.251291] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:45.421 [2024-12-16 12:59:11.251305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:45.421 qpair failed and we were unable to recover it. 00:37:45.421 [2024-12-16 12:59:11.261236] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.421 [2024-12-16 12:59:11.261293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.421 [2024-12-16 12:59:11.261306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.421 [2024-12-16 12:59:11.261312] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.422 [2024-12-16 12:59:11.261318] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:45.422 [2024-12-16 12:59:11.261332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:45.422 qpair failed and we were unable to recover it. 00:37:45.422 [2024-12-16 12:59:11.271272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.422 [2024-12-16 12:59:11.271335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.422 [2024-12-16 12:59:11.271348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.422 [2024-12-16 12:59:11.271354] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.422 [2024-12-16 12:59:11.271360] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:45.422 [2024-12-16 12:59:11.271374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:45.422 qpair failed and we were unable to recover it. 
00:37:45.422 [2024-12-16 12:59:11.281217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.422 [2024-12-16 12:59:11.281307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.422 [2024-12-16 12:59:11.281320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.422 [2024-12-16 12:59:11.281326] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.422 [2024-12-16 12:59:11.281332] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:45.422 [2024-12-16 12:59:11.281345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:45.422 qpair failed and we were unable to recover it. 00:37:45.422 [2024-12-16 12:59:11.291314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.422 [2024-12-16 12:59:11.291366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.422 [2024-12-16 12:59:11.291379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.422 [2024-12-16 12:59:11.291388] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.422 [2024-12-16 12:59:11.291394] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:45.422 [2024-12-16 12:59:11.291408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:45.422 qpair failed and we were unable to recover it. 00:37:45.422 [2024-12-16 12:59:11.301354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.422 [2024-12-16 12:59:11.301409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.422 [2024-12-16 12:59:11.301422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.422 [2024-12-16 12:59:11.301428] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.422 [2024-12-16 12:59:11.301434] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:45.422 [2024-12-16 12:59:11.301447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:45.422 qpair failed and we were unable to recover it. 
00:37:45.422 [2024-12-16 12:59:11.311398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.422 [2024-12-16 12:59:11.311451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.422 [2024-12-16 12:59:11.311464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.422 [2024-12-16 12:59:11.311470] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.422 [2024-12-16 12:59:11.311476] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:45.422 [2024-12-16 12:59:11.311490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:45.422 qpair failed and we were unable to recover it. 00:37:45.422 [2024-12-16 12:59:11.321409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.422 [2024-12-16 12:59:11.321456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.422 [2024-12-16 12:59:11.321469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.422 [2024-12-16 12:59:11.321475] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.422 [2024-12-16 12:59:11.321481] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:45.422 [2024-12-16 12:59:11.321494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:45.422 qpair failed and we were unable to recover it. 00:37:45.422 [2024-12-16 12:59:11.331483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.422 [2024-12-16 12:59:11.331537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.422 [2024-12-16 12:59:11.331550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.422 [2024-12-16 12:59:11.331556] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.422 [2024-12-16 12:59:11.331562] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:45.422 [2024-12-16 12:59:11.331576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:45.422 qpair failed and we were unable to recover it. 
00:37:45.422 [2024-12-16 12:59:11.341467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.422 [2024-12-16 12:59:11.341523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.422 [2024-12-16 12:59:11.341536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.422 [2024-12-16 12:59:11.341542] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.422 [2024-12-16 12:59:11.341548] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:45.422 [2024-12-16 12:59:11.341561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:45.422 qpair failed and we were unable to recover it. 00:37:45.422 [2024-12-16 12:59:11.351495] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.422 [2024-12-16 12:59:11.351579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.422 [2024-12-16 12:59:11.351592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.422 [2024-12-16 12:59:11.351598] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.422 [2024-12-16 12:59:11.351604] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:45.422 [2024-12-16 12:59:11.351617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:45.422 qpair failed and we were unable to recover it. 00:37:45.422 [2024-12-16 12:59:11.361518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.422 [2024-12-16 12:59:11.361571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.422 [2024-12-16 12:59:11.361585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.422 [2024-12-16 12:59:11.361591] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.422 [2024-12-16 12:59:11.361597] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:45.422 [2024-12-16 12:59:11.361611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:45.422 qpair failed and we were unable to recover it. 
00:37:45.422 [2024-12-16 12:59:11.371537] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.422 [2024-12-16 12:59:11.371589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.422 [2024-12-16 12:59:11.371602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.422 [2024-12-16 12:59:11.371608] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.422 [2024-12-16 12:59:11.371614] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:45.422 [2024-12-16 12:59:11.371627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:45.422 qpair failed and we were unable to recover it. 00:37:45.422 [2024-12-16 12:59:11.381581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.422 [2024-12-16 12:59:11.381636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.422 [2024-12-16 12:59:11.381649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.422 [2024-12-16 12:59:11.381658] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.422 [2024-12-16 12:59:11.381664] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:45.422 [2024-12-16 12:59:11.381677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:45.422 qpair failed and we were unable to recover it. 00:37:45.422 [2024-12-16 12:59:11.391602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.422 [2024-12-16 12:59:11.391662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.422 [2024-12-16 12:59:11.391675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.422 [2024-12-16 12:59:11.391682] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.422 [2024-12-16 12:59:11.391688] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:45.422 [2024-12-16 12:59:11.391701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:45.422 qpair failed and we were unable to recover it. 
00:37:46.209 [2024-12-16 12:59:12.063508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.209 [2024-12-16 12:59:12.063565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.209 [2024-12-16 12:59:12.063577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.209 [2024-12-16 12:59:12.063583] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.209 [2024-12-16 12:59:12.063589] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.209 [2024-12-16 12:59:12.063602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.209 qpair failed and we were unable to recover it. 00:37:46.209 [2024-12-16 12:59:12.073536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.209 [2024-12-16 12:59:12.073591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.209 [2024-12-16 12:59:12.073604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.209 [2024-12-16 12:59:12.073610] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.209 [2024-12-16 12:59:12.073616] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.209 [2024-12-16 12:59:12.073629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.209 qpair failed and we were unable to recover it. 00:37:46.209 [2024-12-16 12:59:12.083580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.209 [2024-12-16 12:59:12.083634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.209 [2024-12-16 12:59:12.083647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.209 [2024-12-16 12:59:12.083654] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.209 [2024-12-16 12:59:12.083660] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.209 [2024-12-16 12:59:12.083673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.209 qpair failed and we were unable to recover it. 
00:37:46.209 [2024-12-16 12:59:12.093595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.209 [2024-12-16 12:59:12.093648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.209 [2024-12-16 12:59:12.093662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.209 [2024-12-16 12:59:12.093668] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.209 [2024-12-16 12:59:12.093674] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.209 [2024-12-16 12:59:12.093688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.209 qpair failed and we were unable to recover it. 00:37:46.209 [2024-12-16 12:59:12.103664] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.209 [2024-12-16 12:59:12.103720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.209 [2024-12-16 12:59:12.103736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.209 [2024-12-16 12:59:12.103743] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.209 [2024-12-16 12:59:12.103750] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.209 [2024-12-16 12:59:12.103763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.209 qpair failed and we were unable to recover it. 00:37:46.209 [2024-12-16 12:59:12.113603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.209 [2024-12-16 12:59:12.113658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.209 [2024-12-16 12:59:12.113671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.209 [2024-12-16 12:59:12.113678] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.209 [2024-12-16 12:59:12.113684] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.209 [2024-12-16 12:59:12.113698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.209 qpair failed and we were unable to recover it. 
00:37:46.209 [2024-12-16 12:59:12.123725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.209 [2024-12-16 12:59:12.123779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.209 [2024-12-16 12:59:12.123794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.209 [2024-12-16 12:59:12.123801] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.209 [2024-12-16 12:59:12.123808] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.209 [2024-12-16 12:59:12.123824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.209 qpair failed and we were unable to recover it. 00:37:46.209 [2024-12-16 12:59:12.133639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.209 [2024-12-16 12:59:12.133692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.209 [2024-12-16 12:59:12.133706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.209 [2024-12-16 12:59:12.133713] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.209 [2024-12-16 12:59:12.133718] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.209 [2024-12-16 12:59:12.133732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.209 qpair failed and we were unable to recover it. 00:37:46.209 [2024-12-16 12:59:12.143687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.209 [2024-12-16 12:59:12.143743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.209 [2024-12-16 12:59:12.143758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.209 [2024-12-16 12:59:12.143764] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.209 [2024-12-16 12:59:12.143774] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.209 [2024-12-16 12:59:12.143788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.209 qpair failed and we were unable to recover it. 
00:37:46.209 [2024-12-16 12:59:12.153752] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.210 [2024-12-16 12:59:12.153805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.210 [2024-12-16 12:59:12.153818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.210 [2024-12-16 12:59:12.153825] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.210 [2024-12-16 12:59:12.153830] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.210 [2024-12-16 12:59:12.153844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.210 qpair failed and we were unable to recover it. 00:37:46.210 [2024-12-16 12:59:12.163840] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.210 [2024-12-16 12:59:12.163901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.210 [2024-12-16 12:59:12.163915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.210 [2024-12-16 12:59:12.163922] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.210 [2024-12-16 12:59:12.163928] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.210 [2024-12-16 12:59:12.163941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.210 qpair failed and we were unable to recover it. 00:37:46.210 [2024-12-16 12:59:12.173854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.210 [2024-12-16 12:59:12.173914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.210 [2024-12-16 12:59:12.173927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.210 [2024-12-16 12:59:12.173934] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.210 [2024-12-16 12:59:12.173939] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.210 [2024-12-16 12:59:12.173953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.210 qpair failed and we were unable to recover it. 
00:37:46.210 [2024-12-16 12:59:12.183856] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.210 [2024-12-16 12:59:12.183910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.210 [2024-12-16 12:59:12.183923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.210 [2024-12-16 12:59:12.183929] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.210 [2024-12-16 12:59:12.183935] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.210 [2024-12-16 12:59:12.183949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.210 qpair failed and we were unable to recover it. 00:37:46.210 [2024-12-16 12:59:12.193920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.210 [2024-12-16 12:59:12.193977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.210 [2024-12-16 12:59:12.193991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.210 [2024-12-16 12:59:12.193997] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.210 [2024-12-16 12:59:12.194003] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.210 [2024-12-16 12:59:12.194017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.210 qpair failed and we were unable to recover it. 00:37:46.210 [2024-12-16 12:59:12.203841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.210 [2024-12-16 12:59:12.203897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.210 [2024-12-16 12:59:12.203910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.210 [2024-12-16 12:59:12.203917] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.210 [2024-12-16 12:59:12.203923] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.210 [2024-12-16 12:59:12.203937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.210 qpair failed and we were unable to recover it. 
00:37:46.210 [2024-12-16 12:59:12.213932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.210 [2024-12-16 12:59:12.213984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.210 [2024-12-16 12:59:12.213997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.210 [2024-12-16 12:59:12.214003] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.210 [2024-12-16 12:59:12.214009] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.210 [2024-12-16 12:59:12.214022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.210 qpair failed and we were unable to recover it. 00:37:46.210 [2024-12-16 12:59:12.224000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.210 [2024-12-16 12:59:12.224057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.210 [2024-12-16 12:59:12.224071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.210 [2024-12-16 12:59:12.224078] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.210 [2024-12-16 12:59:12.224083] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.210 [2024-12-16 12:59:12.224097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.210 qpair failed and we were unable to recover it. 00:37:46.210 [2024-12-16 12:59:12.233989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.210 [2024-12-16 12:59:12.234041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.210 [2024-12-16 12:59:12.234054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.210 [2024-12-16 12:59:12.234060] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.210 [2024-12-16 12:59:12.234069] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.210 [2024-12-16 12:59:12.234083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.210 qpair failed and we were unable to recover it. 
00:37:46.210 [2024-12-16 12:59:12.244031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.210 [2024-12-16 12:59:12.244091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.210 [2024-12-16 12:59:12.244105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.210 [2024-12-16 12:59:12.244111] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.210 [2024-12-16 12:59:12.244122] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.210 [2024-12-16 12:59:12.244136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.210 qpair failed and we were unable to recover it. 00:37:46.210 [2024-12-16 12:59:12.254061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.210 [2024-12-16 12:59:12.254133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.210 [2024-12-16 12:59:12.254146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.210 [2024-12-16 12:59:12.254152] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.210 [2024-12-16 12:59:12.254158] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.210 [2024-12-16 12:59:12.254172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.210 qpair failed and we were unable to recover it. 00:37:46.210 [2024-12-16 12:59:12.264088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.210 [2024-12-16 12:59:12.264156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.210 [2024-12-16 12:59:12.264169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.210 [2024-12-16 12:59:12.264175] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.210 [2024-12-16 12:59:12.264181] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.210 [2024-12-16 12:59:12.264194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.210 qpair failed and we were unable to recover it. 
00:37:46.471 [2024-12-16 12:59:12.274111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.471 [2024-12-16 12:59:12.274170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.471 [2024-12-16 12:59:12.274183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.471 [2024-12-16 12:59:12.274189] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.471 [2024-12-16 12:59:12.274195] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.471 [2024-12-16 12:59:12.274209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.471 qpair failed and we were unable to recover it. 00:37:46.471 [2024-12-16 12:59:12.284206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.471 [2024-12-16 12:59:12.284262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.471 [2024-12-16 12:59:12.284275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.471 [2024-12-16 12:59:12.284281] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.471 [2024-12-16 12:59:12.284287] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.471 [2024-12-16 12:59:12.284301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.471 qpair failed and we were unable to recover it. 00:37:46.471 [2024-12-16 12:59:12.294108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.471 [2024-12-16 12:59:12.294164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.471 [2024-12-16 12:59:12.294178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.471 [2024-12-16 12:59:12.294184] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.471 [2024-12-16 12:59:12.294190] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.471 [2024-12-16 12:59:12.294204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.471 qpair failed and we were unable to recover it. 
00:37:46.471 [2024-12-16 12:59:12.304201] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.471 [2024-12-16 12:59:12.304254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.471 [2024-12-16 12:59:12.304267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.471 [2024-12-16 12:59:12.304274] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.472 [2024-12-16 12:59:12.304280] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.472 [2024-12-16 12:59:12.304294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.472 qpair failed and we were unable to recover it. 00:37:46.472 [2024-12-16 12:59:12.314234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.472 [2024-12-16 12:59:12.314288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.472 [2024-12-16 12:59:12.314301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.472 [2024-12-16 12:59:12.314307] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.472 [2024-12-16 12:59:12.314313] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.472 [2024-12-16 12:59:12.314327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.472 qpair failed and we were unable to recover it. 00:37:46.472 [2024-12-16 12:59:12.324250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.472 [2024-12-16 12:59:12.324303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.472 [2024-12-16 12:59:12.324316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.472 [2024-12-16 12:59:12.324322] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.472 [2024-12-16 12:59:12.324331] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.472 [2024-12-16 12:59:12.324345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.472 qpair failed and we were unable to recover it. 
00:37:46.472 [2024-12-16 12:59:12.334215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.472 [2024-12-16 12:59:12.334282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.472 [2024-12-16 12:59:12.334294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.472 [2024-12-16 12:59:12.334300] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.472 [2024-12-16 12:59:12.334306] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.472 [2024-12-16 12:59:12.334320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.472 qpair failed and we were unable to recover it. 00:37:46.472 [2024-12-16 12:59:12.344328] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.472 [2024-12-16 12:59:12.344411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.472 [2024-12-16 12:59:12.344424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.472 [2024-12-16 12:59:12.344430] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.472 [2024-12-16 12:59:12.344436] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.472 [2024-12-16 12:59:12.344449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.472 qpair failed and we were unable to recover it. 00:37:46.472 [2024-12-16 12:59:12.354361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.472 [2024-12-16 12:59:12.354419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.472 [2024-12-16 12:59:12.354432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.472 [2024-12-16 12:59:12.354439] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.472 [2024-12-16 12:59:12.354444] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.472 [2024-12-16 12:59:12.354458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.472 qpair failed and we were unable to recover it. 
00:37:46.472 [2024-12-16 12:59:12.364392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.472 [2024-12-16 12:59:12.364449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.472 [2024-12-16 12:59:12.364461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.472 [2024-12-16 12:59:12.364468] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.472 [2024-12-16 12:59:12.364474] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.472 [2024-12-16 12:59:12.364487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.472 qpair failed and we were unable to recover it. 00:37:46.472 [2024-12-16 12:59:12.374395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.472 [2024-12-16 12:59:12.374474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.472 [2024-12-16 12:59:12.374488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.472 [2024-12-16 12:59:12.374494] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.472 [2024-12-16 12:59:12.374500] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.472 [2024-12-16 12:59:12.374513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.472 qpair failed and we were unable to recover it. 00:37:46.472 [2024-12-16 12:59:12.384377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.472 [2024-12-16 12:59:12.384435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.472 [2024-12-16 12:59:12.384448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.472 [2024-12-16 12:59:12.384454] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.472 [2024-12-16 12:59:12.384460] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.472 [2024-12-16 12:59:12.384473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.472 qpair failed and we were unable to recover it. 
00:37:46.472 [2024-12-16 12:59:12.394398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.472 [2024-12-16 12:59:12.394459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.472 [2024-12-16 12:59:12.394472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.472 [2024-12-16 12:59:12.394478] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.472 [2024-12-16 12:59:12.394484] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.472 [2024-12-16 12:59:12.394498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.472 qpair failed and we were unable to recover it. 00:37:46.472 [2024-12-16 12:59:12.404425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.472 [2024-12-16 12:59:12.404473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.472 [2024-12-16 12:59:12.404487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.472 [2024-12-16 12:59:12.404493] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.472 [2024-12-16 12:59:12.404499] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.472 [2024-12-16 12:59:12.404513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.472 qpair failed and we were unable to recover it. 00:37:46.472 [2024-12-16 12:59:12.414451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.472 [2024-12-16 12:59:12.414507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.472 [2024-12-16 12:59:12.414520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.472 [2024-12-16 12:59:12.414526] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.472 [2024-12-16 12:59:12.414535] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.472 [2024-12-16 12:59:12.414548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.472 qpair failed and we were unable to recover it. 
00:37:46.472 [2024-12-16 12:59:12.424557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.472 [2024-12-16 12:59:12.424615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.472 [2024-12-16 12:59:12.424629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.472 [2024-12-16 12:59:12.424635] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.472 [2024-12-16 12:59:12.424641] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.472 [2024-12-16 12:59:12.424654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.472 qpair failed and we were unable to recover it. 00:37:46.472 [2024-12-16 12:59:12.434586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.472 [2024-12-16 12:59:12.434639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.472 [2024-12-16 12:59:12.434651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.472 [2024-12-16 12:59:12.434657] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.472 [2024-12-16 12:59:12.434663] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.472 [2024-12-16 12:59:12.434676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.472 qpair failed and we were unable to recover it. 00:37:46.472 [2024-12-16 12:59:12.444623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.472 [2024-12-16 12:59:12.444684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.472 [2024-12-16 12:59:12.444697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.473 [2024-12-16 12:59:12.444703] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.473 [2024-12-16 12:59:12.444709] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.473 [2024-12-16 12:59:12.444724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.473 qpair failed and we were unable to recover it. 
00:37:46.473 [2024-12-16 12:59:12.454622] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.473 [2024-12-16 12:59:12.454673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.473 [2024-12-16 12:59:12.454687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.473 [2024-12-16 12:59:12.454693] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.473 [2024-12-16 12:59:12.454699] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.473 [2024-12-16 12:59:12.454712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.473 qpair failed and we were unable to recover it. 00:37:46.473 [2024-12-16 12:59:12.464706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.473 [2024-12-16 12:59:12.464767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.473 [2024-12-16 12:59:12.464780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.473 [2024-12-16 12:59:12.464787] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.473 [2024-12-16 12:59:12.464793] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.473 [2024-12-16 12:59:12.464806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.473 qpair failed and we were unable to recover it. 00:37:46.473 [2024-12-16 12:59:12.474689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.473 [2024-12-16 12:59:12.474747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.473 [2024-12-16 12:59:12.474761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.473 [2024-12-16 12:59:12.474767] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.473 [2024-12-16 12:59:12.474773] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.473 [2024-12-16 12:59:12.474787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.473 qpair failed and we were unable to recover it. 
00:37:46.473 [2024-12-16 12:59:12.484717] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.473 [2024-12-16 12:59:12.484774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.473 [2024-12-16 12:59:12.484787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.473 [2024-12-16 12:59:12.484793] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.473 [2024-12-16 12:59:12.484798] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.473 [2024-12-16 12:59:12.484812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.473 qpair failed and we were unable to recover it. 00:37:46.473 [2024-12-16 12:59:12.494692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.473 [2024-12-16 12:59:12.494740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.473 [2024-12-16 12:59:12.494753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.473 [2024-12-16 12:59:12.494759] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.473 [2024-12-16 12:59:12.494765] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.473 [2024-12-16 12:59:12.494778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.473 qpair failed and we were unable to recover it. 00:37:46.473 [2024-12-16 12:59:12.504720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.473 [2024-12-16 12:59:12.504778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.473 [2024-12-16 12:59:12.504794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.473 [2024-12-16 12:59:12.504805] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.473 [2024-12-16 12:59:12.504811] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.473 [2024-12-16 12:59:12.504826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.473 qpair failed and we were unable to recover it. 
00:37:46.473 [2024-12-16 12:59:12.514840] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.473 [2024-12-16 12:59:12.514891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.473 [2024-12-16 12:59:12.514905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.473 [2024-12-16 12:59:12.514911] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.473 [2024-12-16 12:59:12.514917] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.473 [2024-12-16 12:59:12.514931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.473 qpair failed and we were unable to recover it. 00:37:46.473 [2024-12-16 12:59:12.524764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.473 [2024-12-16 12:59:12.524818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.473 [2024-12-16 12:59:12.524832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.473 [2024-12-16 12:59:12.524838] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.473 [2024-12-16 12:59:12.524844] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.473 [2024-12-16 12:59:12.524858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.473 qpair failed and we were unable to recover it. 00:37:46.473 [2024-12-16 12:59:12.534909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.473 [2024-12-16 12:59:12.534959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.473 [2024-12-16 12:59:12.534972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.473 [2024-12-16 12:59:12.534978] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.473 [2024-12-16 12:59:12.534984] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.473 [2024-12-16 12:59:12.534998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.473 qpair failed and we were unable to recover it. 
00:37:46.734 [2024-12-16 12:59:12.544887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.734 [2024-12-16 12:59:12.544976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.734 [2024-12-16 12:59:12.544989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.734 [2024-12-16 12:59:12.544997] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.734 [2024-12-16 12:59:12.545004] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.734 [2024-12-16 12:59:12.545017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.734 qpair failed and we were unable to recover it. 00:37:46.734 [2024-12-16 12:59:12.554913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.734 [2024-12-16 12:59:12.555002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.734 [2024-12-16 12:59:12.555015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.734 [2024-12-16 12:59:12.555021] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.734 [2024-12-16 12:59:12.555027] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.734 [2024-12-16 12:59:12.555041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.734 qpair failed and we were unable to recover it. 00:37:46.734 [2024-12-16 12:59:12.564931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.734 [2024-12-16 12:59:12.564984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.734 [2024-12-16 12:59:12.564998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.734 [2024-12-16 12:59:12.565004] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.734 [2024-12-16 12:59:12.565010] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.734 [2024-12-16 12:59:12.565024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.734 qpair failed and we were unable to recover it. 
00:37:46.734 [2024-12-16 12:59:12.574945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.734 [2024-12-16 12:59:12.575009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.734 [2024-12-16 12:59:12.575022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.734 [2024-12-16 12:59:12.575028] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.734 [2024-12-16 12:59:12.575034] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.734 [2024-12-16 12:59:12.575048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.734 qpair failed and we were unable to recover it. 00:37:46.734 [2024-12-16 12:59:12.584997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.734 [2024-12-16 12:59:12.585055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.734 [2024-12-16 12:59:12.585068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.734 [2024-12-16 12:59:12.585074] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.734 [2024-12-16 12:59:12.585080] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.734 [2024-12-16 12:59:12.585094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.734 qpair failed and we were unable to recover it. 00:37:46.734 [2024-12-16 12:59:12.594997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.734 [2024-12-16 12:59:12.595065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.734 [2024-12-16 12:59:12.595078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.734 [2024-12-16 12:59:12.595088] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.734 [2024-12-16 12:59:12.595094] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.734 [2024-12-16 12:59:12.595108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.734 qpair failed and we were unable to recover it. 
00:37:46.734 [2024-12-16 12:59:12.605049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.734 [2024-12-16 12:59:12.605109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.734 [2024-12-16 12:59:12.605126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.734 [2024-12-16 12:59:12.605132] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.735 [2024-12-16 12:59:12.605138] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.735 [2024-12-16 12:59:12.605152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.735 qpair failed and we were unable to recover it. 00:37:46.735 [2024-12-16 12:59:12.615065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.735 [2024-12-16 12:59:12.615141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.735 [2024-12-16 12:59:12.615154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.735 [2024-12-16 12:59:12.615160] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.735 [2024-12-16 12:59:12.615166] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.735 [2024-12-16 12:59:12.615180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.735 qpair failed and we were unable to recover it. 00:37:46.735 [2024-12-16 12:59:12.625121] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.735 [2024-12-16 12:59:12.625179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.735 [2024-12-16 12:59:12.625192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.735 [2024-12-16 12:59:12.625198] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.735 [2024-12-16 12:59:12.625204] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.735 [2024-12-16 12:59:12.625218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.735 qpair failed and we were unable to recover it. 
00:37:46.735 [2024-12-16 12:59:12.635144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.735 [2024-12-16 12:59:12.635217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.735 [2024-12-16 12:59:12.635230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.735 [2024-12-16 12:59:12.635237] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.735 [2024-12-16 12:59:12.635242] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.735 [2024-12-16 12:59:12.635256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.735 qpair failed and we were unable to recover it. 00:37:46.735 [2024-12-16 12:59:12.645156] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.735 [2024-12-16 12:59:12.645215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.735 [2024-12-16 12:59:12.645228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.735 [2024-12-16 12:59:12.645235] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.735 [2024-12-16 12:59:12.645240] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.735 [2024-12-16 12:59:12.645254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.735 qpair failed and we were unable to recover it. 00:37:46.735 [2024-12-16 12:59:12.655255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.735 [2024-12-16 12:59:12.655310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.735 [2024-12-16 12:59:12.655323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.735 [2024-12-16 12:59:12.655329] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.735 [2024-12-16 12:59:12.655335] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.735 [2024-12-16 12:59:12.655348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.735 qpair failed and we were unable to recover it. 
00:37:46.735 [2024-12-16 12:59:12.665299] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.735 [2024-12-16 12:59:12.665400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.735 [2024-12-16 12:59:12.665413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.735 [2024-12-16 12:59:12.665419] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.735 [2024-12-16 12:59:12.665425] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.735 [2024-12-16 12:59:12.665438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.735 qpair failed and we were unable to recover it. 00:37:46.735 [2024-12-16 12:59:12.675292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.735 [2024-12-16 12:59:12.675353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.735 [2024-12-16 12:59:12.675366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.735 [2024-12-16 12:59:12.675372] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.735 [2024-12-16 12:59:12.675378] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.735 [2024-12-16 12:59:12.675392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.735 qpair failed and we were unable to recover it. 00:37:46.735 [2024-12-16 12:59:12.685263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.735 [2024-12-16 12:59:12.685318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.735 [2024-12-16 12:59:12.685332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.735 [2024-12-16 12:59:12.685341] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.735 [2024-12-16 12:59:12.685347] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.735 [2024-12-16 12:59:12.685360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.735 qpair failed and we were unable to recover it. 
00:37:46.735 [2024-12-16 12:59:12.695349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.735 [2024-12-16 12:59:12.695403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.735 [2024-12-16 12:59:12.695415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.735 [2024-12-16 12:59:12.695421] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.735 [2024-12-16 12:59:12.695427] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.735 [2024-12-16 12:59:12.695440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.735 qpair failed and we were unable to recover it. 00:37:46.735 [2024-12-16 12:59:12.705327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.735 [2024-12-16 12:59:12.705381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.735 [2024-12-16 12:59:12.705395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.735 [2024-12-16 12:59:12.705401] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.735 [2024-12-16 12:59:12.705407] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.735 [2024-12-16 12:59:12.705421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.735 qpair failed and we were unable to recover it. 00:37:46.735 [2024-12-16 12:59:12.715364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.735 [2024-12-16 12:59:12.715419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.735 [2024-12-16 12:59:12.715433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.735 [2024-12-16 12:59:12.715439] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.735 [2024-12-16 12:59:12.715446] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.735 [2024-12-16 12:59:12.715459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.735 qpair failed and we were unable to recover it. 
00:37:46.735 [2024-12-16 12:59:12.725434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.735 [2024-12-16 12:59:12.725490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.735 [2024-12-16 12:59:12.725503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.735 [2024-12-16 12:59:12.725510] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.735 [2024-12-16 12:59:12.725516] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.735 [2024-12-16 12:59:12.725529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.735 qpair failed and we were unable to recover it. 00:37:46.735 [2024-12-16 12:59:12.735404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.735 [2024-12-16 12:59:12.735455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.735 [2024-12-16 12:59:12.735468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.735 [2024-12-16 12:59:12.735474] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.735 [2024-12-16 12:59:12.735480] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.735 [2024-12-16 12:59:12.735493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.735 qpair failed and we were unable to recover it. 00:37:46.735 [2024-12-16 12:59:12.745456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.735 [2024-12-16 12:59:12.745511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.735 [2024-12-16 12:59:12.745524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.736 [2024-12-16 12:59:12.745530] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.736 [2024-12-16 12:59:12.745536] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.736 [2024-12-16 12:59:12.745549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.736 qpair failed and we were unable to recover it. 
00:37:46.736 [2024-12-16 12:59:12.755495] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.736 [2024-12-16 12:59:12.755584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.736 [2024-12-16 12:59:12.755597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.736 [2024-12-16 12:59:12.755603] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.736 [2024-12-16 12:59:12.755609] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.736 [2024-12-16 12:59:12.755622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.736 qpair failed and we were unable to recover it. 00:37:46.736 [2024-12-16 12:59:12.765495] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.736 [2024-12-16 12:59:12.765574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.736 [2024-12-16 12:59:12.765587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.736 [2024-12-16 12:59:12.765594] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.736 [2024-12-16 12:59:12.765599] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.736 [2024-12-16 12:59:12.765613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.736 qpair failed and we were unable to recover it. 00:37:46.736 [2024-12-16 12:59:12.775534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.736 [2024-12-16 12:59:12.775590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.736 [2024-12-16 12:59:12.775603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.736 [2024-12-16 12:59:12.775612] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.736 [2024-12-16 12:59:12.775618] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.736 [2024-12-16 12:59:12.775632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.736 qpair failed and we were unable to recover it. 
00:37:46.736 [2024-12-16 12:59:12.785549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.736 [2024-12-16 12:59:12.785605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.736 [2024-12-16 12:59:12.785617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.736 [2024-12-16 12:59:12.785624] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.736 [2024-12-16 12:59:12.785629] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.736 [2024-12-16 12:59:12.785643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.736 qpair failed and we were unable to recover it. 00:37:46.736 [2024-12-16 12:59:12.795587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.736 [2024-12-16 12:59:12.795639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.736 [2024-12-16 12:59:12.795652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.736 [2024-12-16 12:59:12.795659] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.736 [2024-12-16 12:59:12.795664] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.736 [2024-12-16 12:59:12.795678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.736 qpair failed and we were unable to recover it. 00:37:46.997 [2024-12-16 12:59:12.805615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.997 [2024-12-16 12:59:12.805667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.997 [2024-12-16 12:59:12.805680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.997 [2024-12-16 12:59:12.805686] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.997 [2024-12-16 12:59:12.805693] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.997 [2024-12-16 12:59:12.805706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.997 qpair failed and we were unable to recover it. 
00:37:46.997 [2024-12-16 12:59:12.815632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.997 [2024-12-16 12:59:12.815686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.997 [2024-12-16 12:59:12.815700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.997 [2024-12-16 12:59:12.815707] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.997 [2024-12-16 12:59:12.815713] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.997 [2024-12-16 12:59:12.815727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.997 qpair failed and we were unable to recover it. 00:37:46.997 [2024-12-16 12:59:12.825661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.997 [2024-12-16 12:59:12.825719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.997 [2024-12-16 12:59:12.825733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.997 [2024-12-16 12:59:12.825739] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.997 [2024-12-16 12:59:12.825745] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.997 [2024-12-16 12:59:12.825759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.997 qpair failed and we were unable to recover it. 00:37:46.997 [2024-12-16 12:59:12.835707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.997 [2024-12-16 12:59:12.835765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.997 [2024-12-16 12:59:12.835779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.997 [2024-12-16 12:59:12.835786] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.997 [2024-12-16 12:59:12.835792] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.997 [2024-12-16 12:59:12.835805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.997 qpair failed and we were unable to recover it. 
00:37:46.997 [2024-12-16 12:59:12.845705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.997 [2024-12-16 12:59:12.845754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.997 [2024-12-16 12:59:12.845767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.997 [2024-12-16 12:59:12.845773] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.997 [2024-12-16 12:59:12.845779] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.997 [2024-12-16 12:59:12.845793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.997 qpair failed and we were unable to recover it. 00:37:46.997 [2024-12-16 12:59:12.855760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.997 [2024-12-16 12:59:12.855856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.997 [2024-12-16 12:59:12.855868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.997 [2024-12-16 12:59:12.855874] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.997 [2024-12-16 12:59:12.855881] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.997 [2024-12-16 12:59:12.855894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.997 qpair failed and we were unable to recover it. 00:37:46.997 [2024-12-16 12:59:12.865777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.997 [2024-12-16 12:59:12.865832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.997 [2024-12-16 12:59:12.865848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.997 [2024-12-16 12:59:12.865854] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.997 [2024-12-16 12:59:12.865860] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.997 [2024-12-16 12:59:12.865873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.998 qpair failed and we were unable to recover it. 
00:37:46.998 [2024-12-16 12:59:12.875802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.998 [2024-12-16 12:59:12.875856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.998 [2024-12-16 12:59:12.875869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.998 [2024-12-16 12:59:12.875875] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.998 [2024-12-16 12:59:12.875881] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.998 [2024-12-16 12:59:12.875894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.998 qpair failed and we were unable to recover it. 00:37:46.998 [2024-12-16 12:59:12.885753] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.998 [2024-12-16 12:59:12.885805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.998 [2024-12-16 12:59:12.885818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.998 [2024-12-16 12:59:12.885824] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.998 [2024-12-16 12:59:12.885830] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.998 [2024-12-16 12:59:12.885843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.998 qpair failed and we were unable to recover it. 00:37:46.998 [2024-12-16 12:59:12.895865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.998 [2024-12-16 12:59:12.895914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.998 [2024-12-16 12:59:12.895927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.998 [2024-12-16 12:59:12.895933] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.998 [2024-12-16 12:59:12.895939] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.998 [2024-12-16 12:59:12.895952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.998 qpair failed and we were unable to recover it. 
00:37:46.998 [2024-12-16 12:59:12.905908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.998 [2024-12-16 12:59:12.905977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.998 [2024-12-16 12:59:12.905991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.998 [2024-12-16 12:59:12.905997] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.998 [2024-12-16 12:59:12.906004] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.998 [2024-12-16 12:59:12.906018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.998 qpair failed and we were unable to recover it. 00:37:46.998 [2024-12-16 12:59:12.915843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.998 [2024-12-16 12:59:12.915896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.998 [2024-12-16 12:59:12.915911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.998 [2024-12-16 12:59:12.915917] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.998 [2024-12-16 12:59:12.915923] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.998 [2024-12-16 12:59:12.915937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.998 qpair failed and we were unable to recover it. 00:37:46.998 [2024-12-16 12:59:12.925956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.998 [2024-12-16 12:59:12.926031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.998 [2024-12-16 12:59:12.926045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.998 [2024-12-16 12:59:12.926051] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.998 [2024-12-16 12:59:12.926058] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.998 [2024-12-16 12:59:12.926071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.998 qpair failed and we were unable to recover it. 
00:37:46.998 [2024-12-16 12:59:12.935967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.998 [2024-12-16 12:59:12.936021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.998 [2024-12-16 12:59:12.936035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.998 [2024-12-16 12:59:12.936042] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.998 [2024-12-16 12:59:12.936048] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.998 [2024-12-16 12:59:12.936061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.998 qpair failed and we were unable to recover it. 00:37:46.998 [2024-12-16 12:59:12.945991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.998 [2024-12-16 12:59:12.946045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.998 [2024-12-16 12:59:12.946058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.998 [2024-12-16 12:59:12.946065] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.998 [2024-12-16 12:59:12.946070] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.998 [2024-12-16 12:59:12.946084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.998 qpair failed and we were unable to recover it. 00:37:46.998 [2024-12-16 12:59:12.956014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.998 [2024-12-16 12:59:12.956068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.998 [2024-12-16 12:59:12.956084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.998 [2024-12-16 12:59:12.956091] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.998 [2024-12-16 12:59:12.956096] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.998 [2024-12-16 12:59:12.956110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.998 qpair failed and we were unable to recover it. 
00:37:46.998 [2024-12-16 12:59:12.966053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.998 [2024-12-16 12:59:12.966135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.998 [2024-12-16 12:59:12.966148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.998 [2024-12-16 12:59:12.966154] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.998 [2024-12-16 12:59:12.966160] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.998 [2024-12-16 12:59:12.966174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.998 qpair failed and we were unable to recover it. 00:37:46.998 [2024-12-16 12:59:12.976078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.998 [2024-12-16 12:59:12.976134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.998 [2024-12-16 12:59:12.976147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.998 [2024-12-16 12:59:12.976153] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.998 [2024-12-16 12:59:12.976159] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.998 [2024-12-16 12:59:12.976173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.998 qpair failed and we were unable to recover it. 00:37:46.998 [2024-12-16 12:59:12.986106] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.998 [2024-12-16 12:59:12.986173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.998 [2024-12-16 12:59:12.986186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.998 [2024-12-16 12:59:12.986192] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.998 [2024-12-16 12:59:12.986198] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.998 [2024-12-16 12:59:12.986212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.998 qpair failed and we were unable to recover it. 
00:37:46.998 [2024-12-16 12:59:12.996191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.998 [2024-12-16 12:59:12.996251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.998 [2024-12-16 12:59:12.996266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.998 [2024-12-16 12:59:12.996273] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.998 [2024-12-16 12:59:12.996278] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.998 [2024-12-16 12:59:12.996293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.998 qpair failed and we were unable to recover it. 00:37:46.998 [2024-12-16 12:59:13.006211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.998 [2024-12-16 12:59:13.006260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.998 [2024-12-16 12:59:13.006274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.999 [2024-12-16 12:59:13.006281] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.999 [2024-12-16 12:59:13.006287] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.999 [2024-12-16 12:59:13.006301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.999 qpair failed and we were unable to recover it. 00:37:46.999 [2024-12-16 12:59:13.016252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.999 [2024-12-16 12:59:13.016306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.999 [2024-12-16 12:59:13.016320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.999 [2024-12-16 12:59:13.016326] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.999 [2024-12-16 12:59:13.016332] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.999 [2024-12-16 12:59:13.016346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.999 qpair failed and we were unable to recover it. 
00:37:46.999 [2024-12-16 12:59:13.026262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.999 [2024-12-16 12:59:13.026319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.999 [2024-12-16 12:59:13.026332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.999 [2024-12-16 12:59:13.026339] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.999 [2024-12-16 12:59:13.026344] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.999 [2024-12-16 12:59:13.026358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.999 qpair failed and we were unable to recover it. 00:37:46.999 [2024-12-16 12:59:13.036258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.999 [2024-12-16 12:59:13.036313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.999 [2024-12-16 12:59:13.036326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.999 [2024-12-16 12:59:13.036332] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.999 [2024-12-16 12:59:13.036338] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.999 [2024-12-16 12:59:13.036351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.999 qpair failed and we were unable to recover it. 00:37:46.999 [2024-12-16 12:59:13.046324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.999 [2024-12-16 12:59:13.046386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.999 [2024-12-16 12:59:13.046402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.999 [2024-12-16 12:59:13.046409] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.999 [2024-12-16 12:59:13.046414] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.999 [2024-12-16 12:59:13.046428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.999 qpair failed and we were unable to recover it. 
00:37:46.999 [2024-12-16 12:59:13.056297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.999 [2024-12-16 12:59:13.056349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.999 [2024-12-16 12:59:13.056362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.999 [2024-12-16 12:59:13.056368] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.999 [2024-12-16 12:59:13.056374] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:46.999 [2024-12-16 12:59:13.056388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.999 qpair failed and we were unable to recover it. 00:37:47.259 [2024-12-16 12:59:13.066329] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.259 [2024-12-16 12:59:13.066384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.259 [2024-12-16 12:59:13.066397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.259 [2024-12-16 12:59:13.066403] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.259 [2024-12-16 12:59:13.066410] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:47.259 [2024-12-16 12:59:13.066423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:47.259 qpair failed and we were unable to recover it. 00:37:47.259 [2024-12-16 12:59:13.076365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.259 [2024-12-16 12:59:13.076417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.259 [2024-12-16 12:59:13.076430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.259 [2024-12-16 12:59:13.076436] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.259 [2024-12-16 12:59:13.076442] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:47.259 [2024-12-16 12:59:13.076455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:47.259 qpair failed and we were unable to recover it. 
00:37:47.259 [2024-12-16 12:59:13.086423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.259 [2024-12-16 12:59:13.086480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.259 [2024-12-16 12:59:13.086494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.259 [2024-12-16 12:59:13.086500] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.259 [2024-12-16 12:59:13.086506] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:47.259 [2024-12-16 12:59:13.086522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:47.259 qpair failed and we were unable to recover it. 00:37:47.259 [2024-12-16 12:59:13.096445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.259 [2024-12-16 12:59:13.096507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.259 [2024-12-16 12:59:13.096521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.259 [2024-12-16 12:59:13.096527] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.259 [2024-12-16 12:59:13.096533] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:47.259 [2024-12-16 12:59:13.096546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:47.259 qpair failed and we were unable to recover it. 00:37:47.259 [2024-12-16 12:59:13.106375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.260 [2024-12-16 12:59:13.106430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.260 [2024-12-16 12:59:13.106444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.260 [2024-12-16 12:59:13.106450] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.260 [2024-12-16 12:59:13.106456] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1588110 00:37:47.260 [2024-12-16 12:59:13.106471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:47.260 qpair failed and we were unable to recover it. 
00:37:47.260 [2024-12-16 12:59:13.116499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.260 [2024-12-16 12:59:13.116619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.260 [2024-12-16 12:59:13.116684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.260 [2024-12-16 12:59:13.116722] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.260 [2024-12-16 12:59:13.116756] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84e4000b90 00:37:47.260 [2024-12-16 12:59:13.116828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:47.260 qpair failed and we were unable to recover it. 00:37:47.260 [2024-12-16 12:59:13.126440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.260 [2024-12-16 12:59:13.126520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.260 [2024-12-16 12:59:13.126555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.260 [2024-12-16 12:59:13.126576] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.260 [2024-12-16 12:59:13.126597] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84e4000b90 00:37:47.260 [2024-12-16 12:59:13.126639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:47.260 qpair failed and we were unable to recover it. 00:37:47.260 [2024-12-16 12:59:13.136502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.260 [2024-12-16 12:59:13.136589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.260 [2024-12-16 12:59:13.136654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.260 [2024-12-16 12:59:13.136679] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.260 [2024-12-16 12:59:13.136699] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84ec000b90 00:37:47.260 [2024-12-16 12:59:13.136750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.260 qpair failed and we were unable to recover it. 
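From this point the failed tqpair pointer is no longer 0x1588110 and the errors land on qpair ids 1, 2 and 4, so several I/O qpairs of the same controller are now failing their CONNECT. Every attempt targets the transport ID string printed in the records. A hedged sketch of the host-side calls behind this sequence (generic SPDK host API usage, not the test's exact code; assumes spdk_env_init() has already run):

#include "spdk/nvme.h"

static struct spdk_nvme_qpair *
connect_io_qpair(struct spdk_nvme_ctrlr **ctrlr_out)
{
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;

	/* Transport ID string copied from the failure records above. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return NULL;
	}

	/* The admin queue connect succeeds; the errors above happen later... */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return NULL;
	}
	*ctrlr_out = ctrlr;

	/* ...when the new I/O qpair's Fabrics CONNECT is rejected with
	 * "Unknown controller ID" (sct 1, sc 130). */
	return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
}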
00:37:47.260 [2024-12-16 12:59:13.146508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.260 [2024-12-16 12:59:13.146581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.260 [2024-12-16 12:59:13.146610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.260 [2024-12-16 12:59:13.146624] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.260 [2024-12-16 12:59:13.146637] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84ec000b90 00:37:47.260 [2024-12-16 12:59:13.146668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.260 qpair failed and we were unable to recover it. 00:37:47.260 [2024-12-16 12:59:13.146823] nvme_ctrlr.c:4505:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:37:47.260 A controller has encountered a failure and is being reset. 00:37:47.260 [2024-12-16 12:59:13.156576] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.260 [2024-12-16 12:59:13.156669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.260 [2024-12-16 12:59:13.156726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.260 [2024-12-16 12:59:13.156750] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.260 [2024-12-16 12:59:13.156770] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84e0000b90 00:37:47.260 [2024-12-16 12:59:13.156821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:37:47.260 qpair failed and we were unable to recover it. 00:37:47.260 [2024-12-16 12:59:13.166687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.260 [2024-12-16 12:59:13.166757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.260 [2024-12-16 12:59:13.166786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.260 [2024-12-16 12:59:13.166801] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.260 [2024-12-16 12:59:13.166815] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84e0000b90 00:37:47.260 [2024-12-16 12:59:13.166846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:37:47.260 qpair failed and we were unable to recover it. 00:37:47.260 Controller properly reset. 
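The block above is the tc2 disconnect scenario behaving as designed: the target side has already destroyed controller 0x1, so every I/O-qpair CONNECT the host retries is rejected ("Unknown controller ID 0x1") and completes with sct 1, sc 130, and only the failed keep-alive escalates to the full controller reset that finally recovers the qpairs. A minimal decode of those status fields, as a plain-bash sketch (not part of the test suite):

    # The log prints SCT/SC in decimal; the NVMe spec tables use hex.
    sct=1 sc=130
    printf 'sct=0x%x sc=0x%x\n' "$sct" "$sc"   # -> sct=0x1 sc=0x82
    # SCT 0x1 is "command specific"; for a Fabrics CONNECT command,
    # SC 0x82 is "Connect Invalid Parameters", which matches the target
    # rejecting the stale controller ID carried in the CONNECT.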
00:37:47.260 Initializing NVMe Controllers 00:37:47.260 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:47.260 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:47.260 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:37:47.260 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:37:47.260 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:37:47.260 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:37:47.260 Initialization complete. Launching workers. 00:37:47.260 Starting thread on core 1 00:37:47.260 Starting thread on core 2 00:37:47.260 Starting thread on core 3 00:37:47.260 Starting thread on core 0 00:37:47.260 12:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:37:47.260 00:37:47.260 real 0m10.769s 00:37:47.260 user 0m19.412s 00:37:47.260 sys 0m4.597s 00:37:47.260 12:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:47.260 12:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:47.260 ************************************ 00:37:47.260 END TEST nvmf_target_disconnect_tc2 00:37:47.260 ************************************ 00:37:47.260 12:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:37:47.260 12:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:37:47.260 12:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:37:47.260 12:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@512 -- # nvmfcleanup 00:37:47.260 12:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:37:47.260 12:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:47.260 12:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:37:47.260 12:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:47.260 12:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:47.260 rmmod nvme_tcp 00:37:47.260 rmmod nvme_fabrics 00:37:47.260 rmmod nvme_keyring 00:37:47.519 12:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:47.519 12:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:37:47.519 12:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:37:47.519 12:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@513 -- # '[' -n 585631 ']' 00:37:47.519 12:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@514 -- # killprocess 585631 00:37:47.519 12:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 585631 ']' 00:37:47.519 12:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 585631 00:37:47.519 12:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:37:47.519 12:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = 
Linux ']' 00:37:47.519 12:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 585631 00:37:47.519 12:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:37:47.519 12:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:37:47.519 12:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 585631' 00:37:47.519 killing process with pid 585631 00:37:47.519 12:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 585631 00:37:47.519 12:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 585631 00:37:47.780 12:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:37:47.780 12:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:37:47.780 12:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:37:47.780 12:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:37:47.780 12:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@787 -- # iptables-save 00:37:47.780 12:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:37:47.780 12:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@787 -- # iptables-restore 00:37:47.780 12:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:47.780 12:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:47.780 12:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:47.780 12:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:47.780 12:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:49.687 12:59:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:49.687 00:37:49.687 real 0m19.466s 00:37:49.687 user 0m47.030s 00:37:49.687 sys 0m9.496s 00:37:49.687 12:59:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:49.687 12:59:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:49.687 ************************************ 00:37:49.687 END TEST nvmf_target_disconnect 00:37:49.687 ************************************ 00:37:49.687 12:59:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:37:49.687 00:37:49.687 real 7m25.334s 00:37:49.687 user 16m53.379s 00:37:49.687 sys 2m8.842s 00:37:49.687 12:59:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:49.687 12:59:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:49.687 ************************************ 00:37:49.687 END TEST nvmf_host 00:37:49.687 ************************************ 00:37:49.687 12:59:15 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:37:49.687 12:59:15 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:37:49.687 12:59:15 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:37:49.687 12:59:15 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:49.687 12:59:15 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:49.687 12:59:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:49.687 ************************************ 00:37:49.687 START TEST nvmf_target_core_interrupt_mode 00:37:49.687 ************************************ 00:37:49.687 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:37:49.946 * Looking for test storage... 00:37:49.946 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:37:49.946 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:49.946 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lcov --version 00:37:49.946 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:49.946 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:49.946 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:49.946 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:49.946 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:49.946 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:37:49.946 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:37:49.946 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:37:49.946 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:37:49.946 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:37:49.946 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:37:49.946 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:37:49.946 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:49.946 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:49.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:49.947 --rc genhtml_branch_coverage=1 00:37:49.947 --rc genhtml_function_coverage=1 00:37:49.947 --rc genhtml_legend=1 00:37:49.947 --rc geninfo_all_blocks=1 00:37:49.947 --rc geninfo_unexecuted_blocks=1 00:37:49.947 00:37:49.947 ' 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:49.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:49.947 --rc genhtml_branch_coverage=1 00:37:49.947 --rc genhtml_function_coverage=1 00:37:49.947 --rc genhtml_legend=1 00:37:49.947 --rc geninfo_all_blocks=1 00:37:49.947 --rc geninfo_unexecuted_blocks=1 00:37:49.947 00:37:49.947 ' 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:49.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:49.947 --rc genhtml_branch_coverage=1 00:37:49.947 --rc genhtml_function_coverage=1 00:37:49.947 --rc genhtml_legend=1 00:37:49.947 --rc geninfo_all_blocks=1 00:37:49.947 --rc geninfo_unexecuted_blocks=1 00:37:49.947 00:37:49.947 ' 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:49.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:49.947 --rc genhtml_branch_coverage=1 00:37:49.947 --rc genhtml_function_coverage=1 00:37:49.947 --rc genhtml_legend=1 00:37:49.947 --rc geninfo_all_blocks=1 00:37:49.947 --rc geninfo_unexecuted_blocks=1 00:37:49.947 00:37:49.947 ' 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:49.947 ************************************ 00:37:49.947 START TEST nvmf_abort 00:37:49.947 ************************************ 00:37:49.947 12:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:37:50.207 * Looking for test storage... 00:37:50.207 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:50.207 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:50.207 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:37:50.207 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:50.207 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:50.207 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:50.207 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:50.207 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:50.207 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:37:50.207 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:37:50.207 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:37:50.207 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:37:50.207 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:37:50.207 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:37:50.207 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:37:50.207 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:50.207 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:37:50.207 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:37:50.207 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:50.207 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:50.207 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:37:50.207 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:37:50.207 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:50.207 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:37:50.207 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:37:50.207 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:37:50.207 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:37:50.207 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:50.207 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:37:50.207 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:37:50.207 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:50.207 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:50.207 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:37:50.207 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:50.207 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:50.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:50.207 --rc genhtml_branch_coverage=1 00:37:50.207 --rc genhtml_function_coverage=1 00:37:50.207 --rc genhtml_legend=1 00:37:50.207 --rc geninfo_all_blocks=1 00:37:50.207 --rc geninfo_unexecuted_blocks=1 00:37:50.208 00:37:50.208 ' 00:37:50.208 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:50.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:50.208 --rc genhtml_branch_coverage=1 00:37:50.208 --rc genhtml_function_coverage=1 00:37:50.208 --rc genhtml_legend=1 00:37:50.208 --rc geninfo_all_blocks=1 00:37:50.208 --rc geninfo_unexecuted_blocks=1 00:37:50.208 00:37:50.208 ' 00:37:50.208 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:50.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:50.208 --rc genhtml_branch_coverage=1 00:37:50.208 --rc genhtml_function_coverage=1 00:37:50.208 --rc genhtml_legend=1 00:37:50.208 --rc geninfo_all_blocks=1 00:37:50.208 --rc geninfo_unexecuted_blocks=1 00:37:50.208 00:37:50.208 ' 00:37:50.208 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:50.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:50.208 --rc genhtml_branch_coverage=1 00:37:50.208 --rc genhtml_function_coverage=1 00:37:50.208 --rc genhtml_legend=1 00:37:50.208 --rc geninfo_all_blocks=1 00:37:50.208 --rc geninfo_unexecuted_blocks=1 00:37:50.208 00:37:50.208 ' 00:37:50.208 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:50.208 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:37:50.208 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:50.208 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:50.208 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:50.208 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:50.208 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:50.208 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:50.208 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:50.208 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:50.208 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:50.208 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:50.208 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:37:50.208 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:37:50.208 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:50.208 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:50.208 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:50.208 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:50.208 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:50.208 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:37:50.208 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:50.208 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:50.208 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:50.208 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:50.208 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:50.208 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:50.208 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:37:50.208 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:50.208 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:37:50.208 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:50.208 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:50.208 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:50.208 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:50.208 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:50.208 12:59:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:50.208 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:50.208 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:50.208 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:50.208 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:50.208 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:50.208 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:37:50.208 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:37:50.208 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:37:50.208 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:50.208 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@472 -- # prepare_net_devs 00:37:50.208 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@434 -- # local -g is_hw=no 00:37:50.208 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@436 -- # remove_spdk_ns 00:37:50.208 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:50.208 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:50.208 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:50.208 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:37:50.208 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:37:50.208 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:37:50.208 12:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:56.781 12:59:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:37:56.781 Found 0000:af:00.0 (0x8086 - 0x159b) 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:56.781 12:59:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:37:56.781 Found 0000:af:00.1 (0x8086 - 0x159b) 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:37:56.781 Found net devices under 0000:af:00.0: cvl_0_0 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:56.781 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ up == 
up ]] 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:37:56.782 Found net devices under 0000:af:00.1: cvl_0_1 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # is_hw=yes 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:56.782 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:56.782 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.375 ms 00:37:56.782 00:37:56.782 --- 10.0.0.2 ping statistics --- 00:37:56.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:56.782 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:56.782 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:56.782 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:37:56.782 00:37:56.782 --- 10.0.0.1 ping statistics --- 00:37:56.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:56.782 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # return 0 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@505 -- # nvmfpid=590076 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@506 -- # waitforlisten 590076 00:37:56.782 
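The nvmf_tcp_init trace above carves the test bed out of the two physical ports: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target interface (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and the two pings validate the path in both directions. Condensed to bare iproute2/iptables calls (a sketch using this run's interface names; other NICs will enumerate differently):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # initiator ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # and back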
12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 590076 ']' 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:56.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:56.782 12:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:56.782 [2024-12-16 12:59:22.039087] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:56.782 [2024-12-16 12:59:22.040022] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:37:56.782 [2024-12-16 12:59:22.040059] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:56.782 [2024-12-16 12:59:22.111676] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:56.782 [2024-12-16 12:59:22.151087] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:56.782 [2024-12-16 12:59:22.151129] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:56.782 [2024-12-16 12:59:22.151136] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:56.782 [2024-12-16 12:59:22.151141] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:56.782 [2024-12-16 12:59:22.151146] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:56.782 [2024-12-16 12:59:22.151274] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:37:56.782 [2024-12-16 12:59:22.151318] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:56.782 [2024-12-16 12:59:22.151319] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:37:56.782 [2024-12-16 12:59:22.221924] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:56.782 [2024-12-16 12:59:22.222033] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:56.782 [2024-12-16 12:59:22.222491] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:56.782 [2024-12-16 12:59:22.222684] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
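Stripped of the nvmfappstart/waitforlisten plumbing, the launch recorded above amounts to starting nvmf_tgt inside that namespace with a three-core mask and --interrupt-mode, which is what produces the "Set spdk_thread (...) to intr mode" notices as the reactors and poll-group threads come up. A sketch under those assumptions (the rpc.py readiness poll below stands in for the suite's waitforlisten helper):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk \
        "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
    nvmfpid=$!
    # Block until the RPC socket answers before issuing configuration calls.
    "$SPDK/scripts/rpc.py" -t 30 rpc_get_methods > /dev/null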
00:37:56.782 12:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:56.782 12:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:37:56.782 12:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:37:56.782 12:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:56.782 12:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:56.782 12:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:56.782 12:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:37:56.782 12:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:56.782 12:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:56.782 [2024-12-16 12:59:22.288191] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:56.782 12:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:56.782 12:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:37:56.782 12:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:56.782 12:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:56.782 Malloc0 00:37:56.782 12:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:56.782 12:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:56.783 12:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:56.783 12:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:56.783 Delay0 00:37:56.783 12:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:56.783 12:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:56.783 12:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:56.783 12:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:56.783 12:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:56.783 12:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:37:56.783 12:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:56.783 12:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:56.783 12:59:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:56.783 12:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:56.783 12:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:56.783 12:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:56.783 [2024-12-16 12:59:22.364147] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:56.783 12:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:56.783 12:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:56.783 12:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:56.783 12:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:56.783 12:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:56.783 12:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:37:56.783 [2024-12-16 12:59:22.520215] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:37:58.688 Initializing NVMe Controllers 00:37:58.688 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:37:58.688 controller IO queue size 128 less than required 00:37:58.688 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:37:58.688 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:37:58.688 Initialization complete. Launching workers. 
00:37:58.688 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 38144 00:37:58.688 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 38201, failed to submit 66 00:37:58.688 success 38144, unsuccessful 57, failed 0 00:37:58.688 12:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:58.688 12:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:58.688 12:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:58.688 12:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:58.688 12:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:37:58.688 12:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:37:58.688 12:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # nvmfcleanup 00:37:58.688 12:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:37:58.688 12:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:58.688 12:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:37:58.688 12:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:58.688 12:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:58.688 rmmod nvme_tcp 00:37:58.688 rmmod nvme_fabrics 00:37:58.947 rmmod nvme_keyring 00:37:58.947 12:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:58.948 12:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:37:58.948 12:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:37:58.948 12:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@513 -- # '[' -n 590076 ']' 00:37:58.948 12:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@514 -- # killprocess 590076 00:37:58.948 12:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 590076 ']' 00:37:58.948 12:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 590076 00:37:58.948 12:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:37:58.948 12:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:58.948 12:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 590076 00:37:58.948 12:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:58.948 12:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:58.948 12:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 590076' 00:37:58.948 killing process with pid 590076 00:37:58.948 
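Taken together, the abort scenario is provisioned and driven with a handful of RPCs; this is a condensed replay of the commands traced above (rpc.py stands for scripts/rpc.py, and the delay-bdev arguments are in microseconds, so roughly one second of injected latency per op).

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    rpc.py bdev_malloc_create 64 4096 -b Malloc0        # 64 MiB backing bdev, 4 KiB blocks
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000     # ~1 s latency on reads and writes
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # one second of qd-128 I/O plus aborts against the deliberately slow namespace
    ./build/examples/abort -q 128 -t 1 -l warning -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

Against a one-second delay bdev almost nothing completes in time, which is the point: in the run above 123 I/Os finished normally, 38144 were successfully aborted, and only 66 abort requests could not be submitted.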
12:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@969 -- # kill 590076 00:37:58.948 12:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@974 -- # wait 590076 00:37:59.207 12:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:37:59.207 12:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:37:59.207 12:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:37:59.207 12:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:37:59.207 12:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@787 -- # iptables-save 00:37:59.207 12:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:37:59.207 12:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@787 -- # iptables-restore 00:37:59.207 12:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:59.207 12:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:59.207 12:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:59.207 12:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:59.207 12:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:01.113 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:01.113 00:38:01.113 real 0m11.163s 00:38:01.113 user 0m10.748s 00:38:01.113 sys 0m5.752s 00:38:01.113 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:01.113 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:01.113 ************************************ 00:38:01.113 END TEST nvmf_abort 00:38:01.113 ************************************ 00:38:01.113 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:38:01.113 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:01.113 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:01.113 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:01.113 ************************************ 00:38:01.113 START TEST nvmf_ns_hotplug_stress 00:38:01.113 ************************************ 00:38:01.113 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:38:01.374 * Looking for test storage... 
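The nvmftestfini/iptr teardown traced just above boils down to three steps: strip only the firewall rules the test tagged, drop the namespace, and flush the initiator address. A sketch (remove_spdk_ns is approximated here by a plain ip netns delete):

    # keep every rule except the ones tagged SPDK_NVMF at setup time
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1
    modprobe -r nvme-tcp nvme-fabrics   # best effort; retried in the real script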
00:38:01.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:01.374 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:01.374 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:38:01.374 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:01.374 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:01.374 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:01.374 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:01.374 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:01.374 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:38:01.374 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:38:01.374 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:38:01.374 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:38:01.374 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:38:01.374 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:38:01.374 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:38:01.374 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:01.374 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:38:01.374 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:38:01.374 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:01.374 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:01.374 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:38:01.374 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:38:01.374 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:01.374 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:38:01.374 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:38:01.374 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:38:01.374 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:38:01.374 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:01.374 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:38:01.374 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:38:01.374 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:01.374 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:01.374 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:38:01.374 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:01.374 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:01.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:01.374 --rc genhtml_branch_coverage=1 00:38:01.374 --rc genhtml_function_coverage=1 00:38:01.374 --rc genhtml_legend=1 00:38:01.374 --rc geninfo_all_blocks=1 00:38:01.374 --rc geninfo_unexecuted_blocks=1 00:38:01.374 00:38:01.374 ' 00:38:01.374 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:01.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:01.374 --rc genhtml_branch_coverage=1 00:38:01.374 --rc genhtml_function_coverage=1 00:38:01.374 --rc genhtml_legend=1 00:38:01.374 --rc geninfo_all_blocks=1 00:38:01.374 --rc geninfo_unexecuted_blocks=1 00:38:01.374 00:38:01.374 ' 00:38:01.374 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:01.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:01.374 --rc genhtml_branch_coverage=1 00:38:01.374 --rc genhtml_function_coverage=1 00:38:01.374 --rc genhtml_legend=1 00:38:01.374 --rc geninfo_all_blocks=1 00:38:01.374 --rc geninfo_unexecuted_blocks=1 00:38:01.374 00:38:01.374 ' 00:38:01.374 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:01.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:01.374 --rc genhtml_branch_coverage=1 00:38:01.374 --rc genhtml_function_coverage=1 
00:38:01.374 --rc genhtml_legend=1 00:38:01.374 --rc geninfo_all_blocks=1 00:38:01.374 --rc geninfo_unexecuted_blocks=1 00:38:01.374 00:38:01.374 ' 00:38:01.374 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:01.374 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:38:01.374 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:01.374 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:01.374 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:01.374 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:01.374 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:01.374 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:01.374 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:01.374 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:01.375 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:01.375 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:01.375 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:38:01.375 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:38:01.375 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:01.375 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:01.375 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:01.375 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:01.375 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:01.375 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:38:01.375 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:01.375 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:01.375 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
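The lcov gate traced above ('lt 1.15 2' via cmp_versions) is a plain field-by-field numeric comparison: split both versions on dots, dashes, or colons, walk the longer of the two lists, and treat missing fields as zero. A standalone sketch of the same logic for purely numeric fields:

    lt() {   # usage: lt 1.15 2   (exit 0 when $1 < $2)
        local -a v1 v2
        local i n
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        done
        return 1   # equal is not less-than
    }

Here lt 1.15 2 succeeds because 1 < 2 decides the comparison at the first field, so the trailing 15 never matters.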
00:38:01.375 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:01.375 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:01.375 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:01.375 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:38:01.375 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:01.375 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:38:01.375 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:01.375 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:01.375 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:01.375 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:01.375 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:01.375 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:01.375 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:01.375 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:01.375 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:01.375 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:01.375 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:01.375 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:38:01.375 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:38:01.375 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:01.375 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # prepare_net_devs 00:38:01.375 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@434 -- # local -g is_hw=no 00:38:01.375 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # remove_spdk_ns 00:38:01.375 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:01.375 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:01.375 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:01.375 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:38:01.375 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:38:01.375 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:38:01.375 12:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:07.950 12:59:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:38:07.950 12:59:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:07.950 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:07.950 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:38:07.950 12:59:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:07.950 Found net devices under 0000:af:00.0: cvl_0_0 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:07.950 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:38:07.951 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:38:07.951 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:07.951 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:07.951 Found net devices under 0000:af:00.1: cvl_0_1 00:38:07.951 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:38:07.951 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:38:07.951 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # is_hw=yes 00:38:07.951 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:38:07.951 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:38:07.951 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:38:07.951 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:07.951 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:07.951 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:07.951 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:07.951 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:07.951 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:07.951 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:07.951 12:59:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:07.951 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:07.951 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:07.951 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:07.951 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:07.951 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:07.951 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:07.951 12:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:07.951 12:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:07.951 12:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:07.951 12:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:07.951 12:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:07.951 12:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:07.951 12:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:07.951 12:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:07.951 12:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:07.951 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:07.951 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:38:07.951 00:38:07.951 --- 10.0.0.2 ping statistics --- 00:38:07.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:07.951 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:38:07.951 12:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:07.951 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:07.951 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:38:07.951 00:38:07.951 --- 10.0.0.1 ping statistics --- 00:38:07.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:07.951 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:38:07.951 12:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:07.951 12:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # return 0 00:38:07.951 12:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:38:07.951 12:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:07.951 12:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:38:07.951 12:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:38:07.951 12:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:07.951 12:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:38:07.951 12:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:38:07.951 12:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:38:07.951 12:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:38:07.951 12:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:07.951 12:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:07.951 12:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # nvmfpid=593996 00:38:07.951 12:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # waitforlisten 593996 00:38:07.951 12:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:38:07.951 12:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 593996 ']' 00:38:07.951 12:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:07.951 12:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:07.951 12:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:07.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
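The interface discovery earlier in this run (the pci_devs/net_devs walk that printed 'Found net devices under 0000:af:00.x') can be reproduced in a few lines; a sketch that, like this run, matches Intel E810 functions by the 0x8086:0x159b vendor/device pair and reads the port names from sysfs:

    # list kernel net devices backing E810 (0x8086:0x159b) PCI functions
    for pci in /sys/bus/pci/devices/*; do
        [[ $(cat "$pci/vendor") == 0x8086 ]] || continue
        [[ $(cat "$pci/device") == 0x159b ]] || continue
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "${pci##*/}: ${net##*/}"
        done
    done

On this machine it would print the 0000:af:00.0/cvl_0_0 and 0000:af:00.1/cvl_0_1 pairs the log found.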
00:38:07.951 12:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:07.951 12:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:07.951 [2024-12-16 12:59:33.261497] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:07.951 [2024-12-16 12:59:33.262409] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:38:07.951 [2024-12-16 12:59:33.262441] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:07.951 [2024-12-16 12:59:33.335150] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:07.951 [2024-12-16 12:59:33.374313] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:07.951 [2024-12-16 12:59:33.374350] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:07.951 [2024-12-16 12:59:33.374357] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:07.951 [2024-12-16 12:59:33.374363] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:07.951 [2024-12-16 12:59:33.374368] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:07.951 [2024-12-16 12:59:33.374483] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:38:07.951 [2024-12-16 12:59:33.374526] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:38:07.951 [2024-12-16 12:59:33.374527] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:38:07.951 [2024-12-16 12:59:33.443893] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:07.951 [2024-12-16 12:59:33.444014] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:07.951 [2024-12-16 12:59:33.444322] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:07.951 [2024-12-16 12:59:33.444637] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
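With the second target up, the script arms the same cleanup trap the abort test used, so any failure inside the stress loop still captures the app's shared memory and tears the target down (process_shm and nvmftestfini are the suite's own helpers):

    # on any exit path: dump shared memory if possible, then clean up
    trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT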
00:38:07.951 12:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:07.951 12:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:38:07.951 12:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:38:07.951 12:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:07.951 12:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:07.951 12:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:07.951 12:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:38:07.951 12:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:07.952 [2024-12-16 12:59:33.675355] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:07.952 12:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:07.952 12:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:08.211 [2024-12-16 12:59:34.063903] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:08.211 12:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:08.470 12:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:38:08.470 Malloc0 00:38:08.470 12:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:38:08.729 Delay0 00:38:08.729 12:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:08.988 12:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:38:08.988 NULL1 00:38:09.249 12:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
00:38:09.249 12:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=594253 00:38:09.249 12:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 594253 00:38:09.249 12:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:09.249 12:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:38:10.626 Read completed with error (sct=0, sc=11) 00:38:10.626 12:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:10.626 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:10.626 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:10.626 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:10.626 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:10.626 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:10.626 12:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:38:10.626 12:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:38:10.885 true 00:38:10.885 12:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 594253 00:38:10.885 12:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:11.824 12:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:11.824 12:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:38:11.824 12:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:38:12.084 true 00:38:12.084 12:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 594253 00:38:12.084 12:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:12.342 12:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:38:12.601 12:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:38:12.601 12:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:38:12.601 true 00:38:12.601 12:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 594253 00:38:12.860 12:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:13.795 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:13.795 12:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:13.795 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:13.795 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:13.795 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:14.053 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:14.053 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:14.053 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:14.053 12:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:38:14.053 12:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:38:14.311 true 00:38:14.311 12:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 594253 00:38:14.311 12:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:15.247 12:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:15.247 12:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:38:15.247 12:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:38:15.506 true 00:38:15.506 12:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 594253 00:38:15.506 12:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:15.765 12:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:15.765 12:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:38:15.765 12:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:38:16.024 true 00:38:16.024 12:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 594253 00:38:16.024 12:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:17.403 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:17.403 12:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:17.403 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:17.403 12:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:38:17.403 12:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:38:17.662 true 00:38:17.662 12:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 594253 00:38:17.662 12:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:17.921 12:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:17.921 12:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:38:17.921 12:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:38:18.180 true 00:38:18.180 12:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 594253 00:38:18.180 12:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:19.557 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:19.557 12:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:19.557 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:19.557 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:38:19.557 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:19.557 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:19.557 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:19.557 12:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:38:19.557 12:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:38:19.816 true 00:38:19.816 12:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 594253 00:38:19.816 12:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:20.753 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:20.753 12:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:20.753 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:20.753 12:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:38:20.753 12:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:38:21.011 true 00:38:21.011 12:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 594253 00:38:21.011 12:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:21.012 12:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:21.271 12:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:38:21.271 12:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:38:21.529 true 00:38:21.529 12:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 594253 00:38:21.529 12:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:22.907 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:22.907 12:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:22.907 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:22.907 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:22.907 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:22.907 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:22.907 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:22.907 12:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:38:22.907 12:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:38:23.166 true 00:38:23.166 12:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 594253 00:38:23.166 12:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:24.102 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:24.102 12:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:24.102 12:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:38:24.102 12:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:38:24.361 true 00:38:24.361 12:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 594253 00:38:24.361 12:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:24.620 12:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:24.879 12:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:38:24.879 12:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:38:24.879 true 00:38:24.879 12:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 594253 00:38:24.879 12:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:26.257 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:26.257 12:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:38:26.257 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:26.257 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:26.257 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:26.257 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:26.257 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:26.257 12:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:38:26.257 12:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:38:26.515 true 00:38:26.516 12:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 594253 00:38:26.516 12:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:27.452 12:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:27.452 12:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:38:27.452 12:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:38:27.711 true 00:38:27.711 12:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 594253 00:38:27.711 12:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:27.969 12:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:27.969 12:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:38:27.969 12:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:38:28.228 true 00:38:28.228 12:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 594253 00:38:28.228 12:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:29.163 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:29.421 12:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:38:29.421 12:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:38:29.421 12:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:38:29.678 true 00:38:29.678 12:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 594253 00:38:29.678 12:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:29.937 12:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:30.196 12:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:38:30.196 12:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:38:30.196 true 00:38:30.196 12:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 594253 00:38:30.196 12:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:31.573 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:31.573 12:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:31.573 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:31.573 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:31.573 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:31.573 12:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:38:31.573 12:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:38:31.832 true 00:38:31.832 12:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 594253 00:38:31.832 12:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:31.832 12:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:32.091 12:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:38:32.091 12:59:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:38:32.350 true 00:38:32.350 12:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 594253 00:38:32.350 12:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:33.728 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:33.728 12:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:33.728 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:33.728 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:33.728 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:33.728 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:33.728 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:33.728 12:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:38:33.728 12:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:38:33.987 true 00:38:33.987 12:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 594253 00:38:33.987 12:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:34.921 13:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:34.921 13:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:38:34.921 13:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:38:35.179 true 00:38:35.179 13:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 594253 00:38:35.179 13:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:35.438 13:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:35.438 13:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:38:35.438 13:00:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:38:35.697 true 00:38:35.697 13:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 594253 00:38:35.697 13:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:36.633 13:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:36.890 13:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:38:36.890 13:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:38:37.148 true 00:38:37.148 13:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 594253 00:38:37.148 13:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:37.406 13:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:37.665 13:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:38:37.665 13:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:38:37.665 true 00:38:37.665 13:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 594253 00:38:37.665 13:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:38.602 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:38.861 13:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:38.861 13:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:38:38.861 13:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:38:39.120 true 00:38:39.120 13:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 594253 00:38:39.120 13:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:39.413 13:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:39.413 13:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:38:39.413 13:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:38:39.686 Initializing NVMe Controllers
00:38:39.687 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:38:39.687 Controller IO queue size 128, less than required.
00:38:39.687 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:38:39.687 Controller IO queue size 128, less than required.
00:38:39.687 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:38:39.687 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:38:39.687 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:38:39.687 Initialization complete. Launching workers.
00:38:39.687 ========================================================
00:38:39.687                                                                                             Latency(us)
00:38:39.687 Device Information                                                      :       IOPS      MiB/s    Average        min        max
00:38:39.687 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1674.03       0.82   49289.81    1895.81 1036359.48
00:38:39.687 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   17204.10       8.40    7440.25    1026.51  371022.51
00:38:39.687 ========================================================
00:38:39.687 Total                                                                   :   18878.12       9.22   11151.28    1026.51 1036359.48
00:38:39.687
00:38:39.687 true
00:38:39.687 13:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 594253
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (594253) - No such process
00:38:39.687 13:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 594253
00:38:39.687 13:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:39.980 13:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:38:39.980 13:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:38:40.243 13:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:38:40.243 13:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:38:40.243 13:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads
)) 00:38:40.243 13:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:38:40.243 null0 00:38:40.243 13:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:40.243 13:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:40.243 13:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:38:40.502 null1 00:38:40.502 13:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:40.502 13:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:40.502 13:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:38:40.762 null2 00:38:40.762 13:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:40.762 13:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:40.762 13:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:38:40.762 null3 00:38:40.762 13:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:40.762 13:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:40.762 13:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:38:41.021 null4 00:38:41.021 13:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:41.021 13:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:41.021 13:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:38:41.279 null5 00:38:41.279 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:41.279 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:41.280 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:38:41.280 null6 00:38:41.280 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:41.280 13:00:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:41.280 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:38:41.539 null7 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
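The serial phase traced above (00:38:09 through 00:38:39) is one loop of ns_hotplug_stress.sh: while spdk_nvme_perf (PID 594253) runs a 30-second queued randread workload against the target, the script hot-removes and re-adds namespace 1 (the Delay0 bdev) and bumps the size of the NULL1 bdev behind namespace 2 on every pass (null_size=1001, 1002, ...). The interleaved "Read completed with error (sct=0, sc=11)" lines come from the perf process; sc=11 most likely decodes to NVMe generic status 0x0b, Invalid Namespace or Format, i.e. the expected failure while the namespace is detached. A minimal sketch of that loop, reconstructed from the -x trace (script lines @40-@53); the $rpc shorthand and the while/wait framing are assumptions, the commands and their arguments are verbatim from the trace:

    # Serial hot-plug loop as reconstructed from the xtrace output;
    # a sketch, not the verbatim ns_hotplug_stress.sh.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &   # @40: 30 s of randread
    PERF_PID=$!                                     # @42: 594253 in this run

    null_size=1000
    while kill -0 "$PERF_PID"; do                   # @44: loop while perf lives
        "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # @45
        "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # @46
        null_size=$((null_size + 1))                # @49: 1001, 1002, ...
        "$rpc" bdev_null_resize NULL1 "$null_size"  # @50
    done
    wait "$PERF_PID"                                # @53

The "line 44: kill: (594253) - No such process" entry at 00:38:39 is the loop's exit condition firing once perf finishes its 30-second run.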
00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
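The spdk_nvme_perf summary printed at 00:38:39.687 is the payoff of that serial phase: NSID 1, which was being hot-removed and re-added the whole time, averaged about 49.3 ms per 512-byte read with a worst case above one second, while NSID 2, which was only being resized, averaged about 7.4 ms. The Total row is simply the IOPS-weighted mean of the two per-namespace rows, which a quick check confirms (the numbers are copied from the table; the awk script and its variable names are mine, not part of the test):

    # IOPS-weighted mean of the two per-namespace averages.
    awk 'BEGIN {
        iops1 = 1674.03;  avg1 = 49289.81   # NSID 1 (hot-plugged Delay0)
        iops2 = 17204.10; avg2 = 7440.25    # NSID 2 (resized NULL1)
        total = iops1 + iops2
        printf "%.2f IOPS, %.2f us average\n",
               total, (iops1 * avg1 + iops2 * avg2) / total
    }'
    # -> 18878.13 IOPS, 11151.29 us; the Total row (18878.12, 11151.28)
    #    agrees to within rounding of the per-row figures.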
00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
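From 00:38:40 onward the trace is the concurrent phase: eight null bdevs (null0 through null7, each created as 100 MiB with a 4 KiB block size per the bdev_null_create arguments) and eight background workers, each binding its bdev to namespace ID i+1 (add_remove 1 null0, add_remove 3 null2, ...) and then adding and removing that namespace ten times. A sketch reconstructed from the trace (function body at @14-@18, driver at @58-@66); the loop syntax and $rpc shorthand are assumptions, the RPC names and arguments are verbatim:

    # Concurrent add/remove phase as reconstructed from the xtrace output;
    # a sketch, not the verbatim ns_hotplug_stress.sh.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    add_remove() {                       # @14-@18
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do   # ten add/remove rounds per worker
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

    nthreads=8                           # @58
    pids=()
    for ((i = 0; i < nthreads; i++)); do # @59-@60: create null0..null7
        "$rpc" bdev_null_create "null$i" 100 4096
    done
    for ((i = 0; i < nthreads; i++)); do # @62-@64: one worker per namespace
        add_remove $((i + 1)) "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"                    # @66

The eight PIDs collected into pids[] are the ones visible a few entries below in the "wait 599819 599821 599826 599830 599833 599838 599840 599842" trace line; the rest of this section is the interleaved xtrace output of those eight workers.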
00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 599819 599821 599826 599830 599833 599838 599840 599842 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:41.539 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:41.799 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:41.799 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:41.799 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:41.799 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:41.799 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:41.799 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:41.799 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:41.799 13:00:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:42.057 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.057 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.057 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:42.057 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.057 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.057 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:42.057 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.057 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.057 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:42.057 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.057 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.057 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.057 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.057 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:42.057 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:42.057 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.057 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.057 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.057 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.057 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:42.057 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:42.057 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.057 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.057 13:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:42.316 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:42.316 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:42.316 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:42.316 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:42.316 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:42.316 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:42.316 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:42.316 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:42.316 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.316 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.316 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:42.316 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.316 13:00:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.316 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:42.316 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.316 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.316 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:42.576 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.576 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.576 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:42.577 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.577 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.577 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:42.577 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.577 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.577 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:42.577 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.577 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.577 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:42.577 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.577 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.577 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:42.577 13:00:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:42.577 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:42.577 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:42.577 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:42.577 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:42.577 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:42.577 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:42.577 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:42.836 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.836 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.836 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:42.836 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.836 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.836 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:42.836 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.836 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.836 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:42.836 13:00:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.836 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.836 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:42.836 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.836 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.837 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:42.837 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.837 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.837 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:42.837 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.837 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.837 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:42.837 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.837 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.837 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:43.096 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:43.096 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:43.096 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:43.096 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:38:43.096 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:43.096 13:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:43.096 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:43.096 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:43.096 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.097 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.097 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:43.097 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.097 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.097 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:43.356 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.356 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.356 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:43.356 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.356 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.356 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:43.356 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.356 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.356 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:43.356 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.356 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.356 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:43.356 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.356 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.356 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:43.356 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.356 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.356 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:43.356 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:43.356 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:43.356 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:43.356 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:43.356 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:43.356 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:43.356 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:43.356 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:43.616 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.616 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.616 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:43.616 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.616 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.616 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:43.616 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.616 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.616 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:43.616 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.616 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.616 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:43.616 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.616 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.616 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:43.616 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.616 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.616 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:43.616 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.616 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.616 13:00:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:43.616 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.616 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.616 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:43.875 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:43.875 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:43.875 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:43.875 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:43.875 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:43.875 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:43.875 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:43.875 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:44.135 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.135 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.135 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:44.135 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.135 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.135 13:00:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:44.135 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.135 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.135 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:44.135 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.135 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.135 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.135 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:44.135 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.135 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:44.135 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.135 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.135 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:44.135 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.135 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.135 13:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:44.135 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.135 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.135 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:44.135 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:44.135 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:44.135 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:44.396 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:44.396 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:44.396 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:44.396 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:44.396 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:44.396 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.396 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.396 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:44.396 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.396 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.396 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:44.396 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.396 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.396 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:44.396 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.396 13:00:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.396 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:44.396 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.396 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.396 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:44.396 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.396 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.396 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:44.396 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.396 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.396 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:44.396 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.396 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.396 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:44.655 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:44.655 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:44.655 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:44.655 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:44.655 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:44.655 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:44.655 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:44.655 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:44.914 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.914 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.914 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:44.914 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.914 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.914 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:44.914 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.914 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.914 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:44.914 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.914 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.914 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:44.914 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.914 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.914 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:44.914 13:00:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.914 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.914 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:44.914 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.914 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.914 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:44.914 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.914 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.914 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:45.173 13:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:45.173 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:45.173 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:45.173 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:45.173 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:45.173 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:45.173 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:45.173 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:45.173 13:00:11 
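In several of the surrounding iterations the (( ++i )) and (( i < 10 )) trace lines appear doubled or out of order. That is an artifact of concurrent xtrace output rather than a broken loop: trace lines from the loop body (or multiple instances of it) running in background subshells share one stderr without locking, so they interleave. A hypothetical two-job demo of the effect:

    # Hypothetical demo: xtrace from backgrounded subshells interleaves.
    set -x
    ( : job-a ) &
    ( : job-b ) &
    wait    # the two '+ :' trace lines can land in either order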
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.173 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.173 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:45.173 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.173 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.173 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:45.173 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.173 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.173 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:45.173 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.173 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.173 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:45.433 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.433 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.433 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.433 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.433 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:45.433 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:45.433 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.433 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.433 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:45.433 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.433 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.433 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:45.433 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:45.433 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:45.433 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:45.433 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:45.433 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:45.433 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:45.433 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:45.433 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:45.692 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.692 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.692 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.692 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.692 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.692 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.692 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:38:45.692 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.692 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.692 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.692 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.692 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.692 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.692 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.692 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.692 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.692 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:38:45.692 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:38:45.692 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # nvmfcleanup 00:38:45.692 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:38:45.692 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:45.692 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:38:45.692 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:45.692 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:45.692 rmmod nvme_tcp 00:38:45.692 rmmod nvme_fabrics 00:38:45.692 rmmod nvme_keyring 00:38:45.692 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:45.692 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:38:45.692 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:38:45.692 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@513 -- # '[' -n 593996 ']' 00:38:45.692 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # killprocess 593996 00:38:45.692 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 593996 ']' 00:38:45.692 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 593996 00:38:45.692 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:38:45.692 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # 
'[' Linux = Linux ']' 00:38:45.693 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 593996 00:38:45.952 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:45.952 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:45.952 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 593996' 00:38:45.952 killing process with pid 593996 00:38:45.952 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 593996 00:38:45.952 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 593996 00:38:45.952 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:38:45.952 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:38:45.952 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:38:45.952 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:38:45.952 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-save 00:38:45.952 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:38:45.952 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-restore 00:38:45.952 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:45.952 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:45.952 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:45.952 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:45.952 13:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:48.490 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:48.490 00:38:48.490 real 0m46.876s 00:38:48.490 user 2m54.804s 00:38:48.490 sys 0m20.175s 00:38:48.490 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:48.490 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:48.490 ************************************ 00:38:48.490 END TEST nvmf_ns_hotplug_stress 00:38:48.490 ************************************ 00:38:48.490 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:38:48.490 13:00:14 
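The teardown trace above (nvmftestfini, ending at the END TEST banner and the 46.9 s timing summary) syncs, unloads the kernel NVMe-oF modules, kills the nvmf_tgt process (pid 593996 on this run), restores the pre-test iptables rules, removes the SPDK network namespace, and flushes the second test port before run_test spins up nvmf_delete_subsystem below. A condensed sketch of those steps — command names are taken from the trace, everything else is assumed:

    # Condensed sketch of the nvmftestfini sequence visible in the trace.
    sync
    modprobe -v -r nvme-tcp       # the log shows nvme_tcp, nvme_fabrics,
    modprobe -v -r nvme-fabrics   # and nvme_keyring being rmmod'ed here
    kill 593996 && wait 593996    # killprocess: pid from this run
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop test rules
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # netns name from the log
    ip -4 addr flush cvl_0_1                              # second test port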
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:48.490 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:48.490 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:48.490 ************************************ 00:38:48.490 START TEST nvmf_delete_subsystem 00:38:48.490 ************************************ 00:38:48.490 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:38:48.490 * Looking for test storage... 00:38:48.490 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:48.490 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:48.490 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:38:48.490 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:48.490 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:48.490 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:48.490 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:48.490 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:48.490 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:38:48.490 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:38:48.490 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:38:48.490 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:48.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:48.491 --rc genhtml_branch_coverage=1 00:38:48.491 --rc genhtml_function_coverage=1 00:38:48.491 --rc genhtml_legend=1 00:38:48.491 --rc geninfo_all_blocks=1 00:38:48.491 --rc geninfo_unexecuted_blocks=1 00:38:48.491 00:38:48.491 ' 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:48.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:48.491 --rc genhtml_branch_coverage=1 00:38:48.491 --rc genhtml_function_coverage=1 00:38:48.491 --rc genhtml_legend=1 00:38:48.491 --rc geninfo_all_blocks=1 00:38:48.491 --rc geninfo_unexecuted_blocks=1 00:38:48.491 00:38:48.491 ' 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:48.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:48.491 --rc genhtml_branch_coverage=1 00:38:48.491 --rc genhtml_function_coverage=1 00:38:48.491 --rc genhtml_legend=1 00:38:48.491 --rc geninfo_all_blocks=1 00:38:48.491 --rc geninfo_unexecuted_blocks=1 00:38:48.491 00:38:48.491 ' 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:48.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:48.491 --rc genhtml_branch_coverage=1 00:38:48.491 --rc genhtml_function_coverage=1 00:38:48.491 --rc 
genhtml_legend=1 00:38:48.491 --rc geninfo_all_blocks=1 00:38:48.491 --rc geninfo_unexecuted_blocks=1 00:38:48.491 00:38:48.491 ' 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:48.491 13:00:14 
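Before the delete_subsystem test proper starts, the trace above walks through scripts/common.sh checking whether the installed lcov predates 2.x (lt 1.15 2 → cmp_versions 1.15 '<' 2) so it can choose compatible coverage flags. The traced algorithm splits both versions on '.-:', iterates up to the longer component count, and compares component by component; a sketch reconstructed from those traced steps, not copied from the scripts/common.sh source:

    # Sketch of the traced cmp_versions logic; helper names are illustrative.
    lt() { cmp_versions_sketch "$1" '<' "$2"; }

    cmp_versions_sketch() {
        local IFS=.-: op=$2
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            # Missing components count as 0, so "2" vs "1.15" is 2.0 vs 1.15;
            # components are assumed numeric, as the traced 'decimal' check enforces.
            local a=${ver1[v]:-0} b=${ver2[v]:-0}
            (( a > b )) && { [[ $op == '>' ]]; return; }
            (( a < b )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' ]]    # all components equal
    }

    lt 1.15 2 && echo 'lcov < 2: use legacy --rc lcov_* options'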
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # prepare_net_devs 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@434 -- # local -g is_hw=no 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # remove_spdk_ns 00:38:48.491 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:48.492 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:48.492 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:48.492 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:38:48.492 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:38:48.492 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:38:48.492 13:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:53.768 13:00:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:38:53.768 13:00:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:53.768 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:53.768 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: 
cvl_0_0' 00:38:53.768 Found net devices under 0000:af:00.0: cvl_0_0 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:53.768 Found net devices under 0000:af:00.1: cvl_0_1 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # is_hw=yes 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:53.768 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:53.769 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:53.769 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:54.028 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:54.028 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:54.028 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:54.028 13:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:54.028 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:54.028 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:54.028 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:54.028 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:54.028 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:54.028 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:38:54.028 00:38:54.028 --- 10.0.0.2 ping statistics --- 00:38:54.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:54.028 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:38:54.028 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:54.028 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:54.028 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:38:54.028 00:38:54.028 --- 10.0.0.1 ping statistics --- 00:38:54.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:54.028 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:38:54.028 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:54.028 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # return 0 00:38:54.028 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:38:54.028 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:54.028 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:38:54.028 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:38:54.028 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:54.028 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:38:54.028 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:38:54.028 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:38:54.028 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:38:54.028 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:54.028 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:54.028 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # nvmfpid=604008 00:38:54.028 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:38:54.028 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # waitforlisten 604008 00:38:54.028 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 604008 ']' 00:38:54.028 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:54.028 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:54.028 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:54.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
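[Annotation: the sequence above is the harness carving its loopback test bed out of the two physical ports: cvl_0_0 is moved into a private network namespace to act as the target at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens TCP port 4420, and one ping in each direction proves reachability before nvmf_tgt is started inside the namespace. Condensed into a standalone sketch (same interface names and addresses as the trace; run as root):

    # target port lives in its own netns; initiator port stays in the root ns
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # let NVMe/TCP traffic from the initiator side through
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
]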
00:38:54.028 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:54.028 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:54.288 [2024-12-16 13:00:20.131866] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:54.288 [2024-12-16 13:00:20.132823] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:38:54.288 [2024-12-16 13:00:20.132857] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:54.288 [2024-12-16 13:00:20.202538] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:54.288 [2024-12-16 13:00:20.241682] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:54.288 [2024-12-16 13:00:20.241718] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:54.288 [2024-12-16 13:00:20.241727] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:54.288 [2024-12-16 13:00:20.241734] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:54.288 [2024-12-16 13:00:20.241739] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:54.288 [2024-12-16 13:00:20.241787] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:38:54.288 [2024-12-16 13:00:20.241787] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:38:54.288 [2024-12-16 13:00:20.303778] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:54.288 [2024-12-16 13:00:20.304345] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:54.288 [2024-12-16 13:00:20.304526] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
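[Annotation: the waitforlisten step that resolves just below does nothing more than poll the target's RPC socket until it answers. A rough standalone equivalent of that pattern, assuming an SPDK checkout provides scripts/rpc.py and the default /var/tmp/spdk.sock socket (the retry bound here is illustrative, not a value taken from the harness):

    RPC_SOCK=/var/tmp/spdk.sock
    RPC_PY=./scripts/rpc.py        # path inside an SPDK checkout (assumed)
    for _ in $(seq 1 100); do
        # rpc_get_methods succeeds once the app is up and listening
        if "$RPC_PY" -s "$RPC_SOCK" rpc_get_methods &> /dev/null; then
            break
        fi
        sleep 0.5
    done
]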
00:38:54.288 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:54.288 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:38:54.288 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:38:54.288 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:54.288 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:54.547 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:54.547 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:54.547 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:54.547 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:54.547 [2024-12-16 13:00:20.382596] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:54.547 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:54.547 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:54.547 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:54.547 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:54.547 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:54.547 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:54.547 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:54.547 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:54.547 [2024-12-16 13:00:20.426916] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:54.547 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:54.547 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:38:54.547 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:54.547 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:54.547 NULL1 00:38:54.547 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:54.547 13:00:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:38:54.547 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:54.547 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:54.547 Delay0 00:38:54.547 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:54.547 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:54.547 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:54.547 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:54.547 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:54.547 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=604148 00:38:54.547 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:38:54.547 13:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:38:54.547 [2024-12-16 13:00:20.528180] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
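[Annotation: at this point the test has assembled its whole stack: a subsystem capped at 10 namespaces on a TCP listener, backed by a delay bdev whose -r/-t/-w/-n latencies are given in microseconds, so every I/O sits for roughly a second. That guarantees perf still has its 128-deep queues full when the subsystem is deleted two seconds into a five-second run. The same flow condensed, with rpc.py calls standing in for the harness's rpc_cmd wrapper and paths shortened:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_null_create NULL1 1000 512
    rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # tear it down mid-I/O

Every queued request then completes in error, which is exactly the storm of aborted completions recorded below.]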
00:38:56.452 13:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:38:56.452 13:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:56.452 13:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:38:56.711 [repeated 'Read/Write completed with error (sct=0, sc=8)' completions with interleaved 'starting I/O failed: -6' markers: the in-flight perf I/O is aborted as the subsystem is deleted]
00:38:56.711 [2024-12-16 13:00:22.567261] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5a7400d640 is same with the state(6) to be set
00:38:56.712 [further repeated 'Read/Write completed with error (sct=0, sc=8)' completions and 'starting I/O failed: -6' markers]
00:38:56.712 [2024-12-16 13:00:22.567960] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81eed0 is same with the state(6) to be set
00:38:57.648 [2024-12-16 13:00:23.542438] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81cb20 is same with the state(6) to be set
00:38:57.648 [repeated 'Read/Write completed with error (sct=0, sc=8)' completions]
00:38:57.648 [2024-12-16 13:00:23.569698] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81da70 is same with the state(6) to be set
00:38:57.648 [repeated 'Read/Write completed with error (sct=0, sc=8)' completions]
00:38:57.648 [2024-12-16 13:00:23.569898] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81dc50 is same with the state(6) to be set
00:38:57.648 [repeated 'Read/Write completed with error (sct=0, sc=8)' completions]
00:38:57.648 [2024-12-16 13:00:23.569990] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5a7400d310 is same with the state(6) to be set
00:38:57.649 [repeated 'Read/Write completed with error (sct=0, sc=8)' completions]
00:38:57.649 [2024-12-16 13:00:23.570667] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81f0b0 is same with the state(6) to be set
00:38:57.649 13:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:57.649 Initializing NVMe Controllers
00:38:57.649 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:38:57.649 Controller IO queue size 128, less than required.
00:38:57.649 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:38:57.649 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:38:57.649 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:38:57.649 Initialization complete. Launching workers.
00:38:57.649 ======================================================== 00:38:57.649 Latency(us) 00:38:57.649 Device Information : IOPS MiB/s Average min max 00:38:57.649 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 195.54 0.10 943384.27 1253.59 1010888.74 00:38:57.649 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 157.82 0.08 866810.81 396.76 1012006.93 00:38:57.649 ======================================================== 00:38:57.649 Total : 353.35 0.17 909184.33 396.76 1012006.93 00:38:57.649 00:38:57.649 [2024-12-16 13:00:23.571461] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81cb20 (9): Bad file descriptor 00:38:57.649 13:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:38:57.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:38:57.649 13:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 604148 00:38:57.649 13:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:38:58.216 13:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:38:58.216 13:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 604148 00:38:58.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (604148) - No such process 00:38:58.216 13:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 604148 00:38:58.216 13:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:38:58.216 13:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 604148 00:38:58.216 13:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:38:58.216 13:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:58.216 13:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:38:58.216 13:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:58.216 13:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 604148 00:38:58.216 13:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:38:58.216 13:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:58.216 13:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:58.216 13:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:58.216 13:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:58.216 13:00:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:58.216 13:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:58.216 13:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:58.216 13:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:58.216 13:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:58.216 13:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:58.216 [2024-12-16 13:00:24.094747] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:58.216 13:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:58.216 13:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:58.216 13:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:58.217 13:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:58.217 13:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:58.217 13:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=604695 00:38:58.217 13:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:38:58.217 13:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:38:58.217 13:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 604695 00:38:58.217 13:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:58.217 [2024-12-16 13:00:24.166191] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
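[Annotation: after rebuilding the subsystem, the harness launches a second three-second perf run and then polls for the client process instead of blocking on it. The kill -0 probes traced below send no signal at all; they only test whether the PID still exists, which bounds how long the script waits for perf to exit. Reconstructed roughly from the traced lines 56-60 of delete_subsystem.sh (failure handling simplified):

    delay=0
    # poll until spdk_nvme_perf exits; give up after roughly 10 seconds
    while kill -0 "$perf_pid" 2> /dev/null; do
        if (( delay++ > 20 )); then
            echo "perf pid $perf_pid failed to exit" >&2
            exit 1
        fi
        sleep 0.5
    done
]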
00:38:58.785 13:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:58.785 13:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 604695 00:38:58.785 13:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:59.352 13:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:59.352 13:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 604695 00:38:59.352 13:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:59.610 13:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:59.610 13:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 604695 00:38:59.610 13:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:00.177 13:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:00.177 13:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 604695 00:39:00.177 13:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:00.744 13:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:00.744 13:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 604695 00:39:00.744 13:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:01.311 13:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:01.311 13:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 604695 00:39:01.311 13:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:01.311 Initializing NVMe Controllers 00:39:01.311 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:01.311 Controller IO queue size 128, less than required. 00:39:01.311 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:01.311 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:39:01.311 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:39:01.311 Initialization complete. Launching workers. 
00:39:01.311 ========================================================
00:39:01.311                                                                                Latency(us)
00:39:01.311 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:39:01.311 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     128.00       0.06 1002270.43 1000139.24 1041594.14
00:39:01.311 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     128.00       0.06 1004347.00 1000163.82 1041577.53
00:39:01.311 ========================================================
00:39:01.311 Total                                                                    :     256.00       0.12 1003308.71 1000139.24 1041594.14
00:39:01.311
00:39:01.879 13:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:01.880 13:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 604695 00:39:01.880 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (604695) - No such process 00:39:01.880 13:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 604695 00:39:01.880 13:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:39:01.880 13:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:39:01.880 13:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # nvmfcleanup 00:39:01.880 13:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:39:01.880 13:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:01.880 13:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:39:01.880 13:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:01.880 13:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:01.880 rmmod nvme_tcp 00:39:01.880 rmmod nvme_fabrics 00:39:01.880 rmmod nvme_keyring 00:39:01.880 13:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:01.880 13:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:39:01.880 13:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:39:01.880 13:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@513 -- # '[' -n 604008 ']' 00:39:01.880 13:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # killprocess 604008 00:39:01.880 13:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 604008 ']' 00:39:01.880 13:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 604008 00:39:01.880 13:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:39:01.880 13:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:01.880 13:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 604008 00:39:01.880 13:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:01.880 13:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:01.880 13:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 604008' 00:39:01.880 killing process with pid 604008 00:39:01.880 13:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 604008 00:39:01.880 13:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 604008 00:39:01.880 13:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:39:01.880 13:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:39:01.880 13:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:39:01.880 13:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:39:01.880 13:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-save 00:39:01.880 13:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:39:01.880 13:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-restore 00:39:02.139 13:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:02.139 13:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:02.139 13:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:02.139 13:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:02.139 13:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:04.044 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:04.045 00:39:04.045 real 0m15.929s 00:39:04.045 user 0m25.840s 00:39:04.045 sys 0m5.969s 00:39:04.045 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:04.045 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:04.045 ************************************ 00:39:04.045 END TEST nvmf_delete_subsystem 00:39:04.045 ************************************ 00:39:04.045 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:39:04.045 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:39:04.045 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:39:04.045 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:04.045 ************************************ 00:39:04.045 START TEST nvmf_host_management 00:39:04.045 ************************************ 00:39:04.045 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:39:04.305 * Looking for test storage... 00:39:04.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:39:04.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:04.305 --rc genhtml_branch_coverage=1 00:39:04.305 --rc genhtml_function_coverage=1 00:39:04.305 --rc genhtml_legend=1 00:39:04.305 --rc geninfo_all_blocks=1 00:39:04.305 --rc geninfo_unexecuted_blocks=1 00:39:04.305 00:39:04.305 ' 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:39:04.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:04.305 --rc genhtml_branch_coverage=1 00:39:04.305 --rc genhtml_function_coverage=1 00:39:04.305 --rc genhtml_legend=1 00:39:04.305 --rc geninfo_all_blocks=1 00:39:04.305 --rc geninfo_unexecuted_blocks=1 00:39:04.305 00:39:04.305 ' 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:39:04.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:04.305 --rc genhtml_branch_coverage=1 00:39:04.305 --rc genhtml_function_coverage=1 00:39:04.305 --rc genhtml_legend=1 00:39:04.305 --rc geninfo_all_blocks=1 00:39:04.305 --rc geninfo_unexecuted_blocks=1 00:39:04.305 00:39:04.305 ' 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:39:04.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:04.305 --rc genhtml_branch_coverage=1 00:39:04.305 --rc genhtml_function_coverage=1 00:39:04.305 --rc genhtml_legend=1 
00:39:04.305 --rc geninfo_all_blocks=1 00:39:04.305 --rc geninfo_unexecuted_blocks=1 00:39:04.305 00:39:04.305 ' 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:04.305 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:04.306 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:04.306 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:04.306 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:39:04.306 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:04.306 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:39:04.306 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:04.306 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:04.306 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:04.306 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:04.306 13:00:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:04.306 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:04.306 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:04.306 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:04.306 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:04.306 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:04.306 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:04.306 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:04.306 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:39:04.306 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:39:04.306 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:04.306 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@472 -- # prepare_net_devs 00:39:04.306 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@434 -- # local -g is_hw=no 00:39:04.306 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@436 -- # remove_spdk_ns 00:39:04.306 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:04.306 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:04.306 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:04.306 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:39:04.306 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:39:04.306 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:39:04.306 13:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:10.878 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:10.878 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:39:10.878 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:10.878 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:10.878 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:10.878 13:00:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:10.878 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:10.878 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:39:10.878 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:10.878 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:39:10.878 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:39:10.878 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:39:10.878 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:39:10.878 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:39:10.878 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:39:10.878 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:10.878 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:10.878 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:10.878 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:10.878 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:10.878 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:10.878 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:10.878 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:10.878 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:10.878 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:10.878 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:10.878 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:39:10.878 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:39:10.878 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:39:10.878 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:39:10.878 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@354 -- 
# pci_devs=("${e810[@]}") 00:39:10.878 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:39:10.878 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:10.878 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:10.878 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:10.878 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:10.878 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:10.878 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:10.878 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:10.878 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:10.878 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:10.878 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:10.878 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:10.878 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:10.878 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:10.878 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:10.879 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:10.879 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:10.879 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:39:10.879 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:39:10.879 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:39:10.879 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:10.879 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:10.879 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:10.879 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:10.879 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:10.879 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:10.879 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:10.879 
13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:39:10.879 Found net devices under 0000:af:00.0: cvl_0_0 00:39:10.879 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:10.879 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:10.879 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:10.879 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:10.879 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:10.879 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:10.879 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:10.879 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:10.879 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:10.879 Found net devices under 0000:af:00.1: cvl_0_1 00:39:10.879 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:10.879 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:39:10.879 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # is_hw=yes 00:39:10.879 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:39:10.879 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:39:10.879 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:39:10.879 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:10.879 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:10.879 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:10.879 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:10.879 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:10.879 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:10.879 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:10.879 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:10.879 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:39:10.879 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:10.879 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:10.879 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:10.879 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:10.879 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:10.879 13:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:10.879 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:10.879 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:10.879 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:10.879 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:10.879 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:10.879 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:10.879 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:10.879 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:10.879 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:10.879 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.315 ms 00:39:10.879 00:39:10.879 --- 10.0.0.2 ping statistics --- 00:39:10.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:10.879 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:39:10.879 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:10.879 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:10.879 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:39:10.879 00:39:10.879 --- 10.0.0.1 ping statistics --- 00:39:10.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:10.879 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:39:10.879 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:10.879 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # return 0 00:39:10.879 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:39:10.879 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:10.879 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:39:10.879 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:39:10.879 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:10.879 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:39:10.879 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:39:10.879 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:39:10.879 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:39:10.879 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:39:10.879 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:39:10.879 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:10.879 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:10.879 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@505 -- # nvmfpid=608603 00:39:10.879 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@506 -- # waitforlisten 608603 00:39:10.879 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 608603 ']' 00:39:10.879 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:10.879 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:39:10.879 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:10.879 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:39:10.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:10.879 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:10.879 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:10.879 [2024-12-16 13:00:36.229728] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:10.879 [2024-12-16 13:00:36.230634] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:39:10.879 [2024-12-16 13:00:36.230668] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:10.879 [2024-12-16 13:00:36.298766] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:10.880 [2024-12-16 13:00:36.356572] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:10.880 [2024-12-16 13:00:36.356614] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:10.880 [2024-12-16 13:00:36.356627] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:10.880 [2024-12-16 13:00:36.356637] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:10.880 [2024-12-16 13:00:36.356645] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:10.880 [2024-12-16 13:00:36.356713] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:39:10.880 [2024-12-16 13:00:36.356820] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:39:10.880 [2024-12-16 13:00:36.356929] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:39:10.880 [2024-12-16 13:00:36.356931] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:39:10.880 [2024-12-16 13:00:36.442280] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:10.880 [2024-12-16 13:00:36.443021] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:10.880 [2024-12-16 13:00:36.443330] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:10.880 [2024-12-16 13:00:36.443741] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:10.880 [2024-12-16 13:00:36.443778] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
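The nvmf_tgt instance started above runs inside the cvl_0_0_ns_spdk network namespace that nvmftestinit assembled earlier in the trace (ip netns add, ip link set ... netns, an iptables ACCEPT rule, then the two ping checks). Condensed into one place, the topology setup amounts to the following; this is a sketch only, and the cvl_0_0/cvl_0_1 interface names belong to this machine's two e810 ports and will differ elsewhere:

  ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                  # sanity checks, both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1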
00:39:10.880 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:10.880 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:39:10.880 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:39:10.880 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:10.880 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:10.880 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:10.880 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:10.880 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:10.880 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:10.880 [2024-12-16 13:00:36.507370] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:10.880 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:10.880 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:39:10.880 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:10.880 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:10.880 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:39:10.880 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:39:10.880 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:39:10.880 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:10.880 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:10.880 Malloc0 00:39:10.880 [2024-12-16 13:00:36.582009] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:10.880 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:10.880 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:39:10.880 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:10.880 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:10.880 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=608822 00:39:10.880 13:00:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 608822 /var/tmp/bdevperf.sock 00:39:10.880 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 608822 ']' 00:39:10.880 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:10.880 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:39:10.880 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:39:10.880 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:10.880 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:10.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:39:10.880 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:39:10.880 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:10.880 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:39:10.880 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:10.880 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:39:10.880 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:39:10.880 { 00:39:10.880 "params": { 00:39:10.880 "name": "Nvme$subsystem", 00:39:10.880 "trtype": "$TEST_TRANSPORT", 00:39:10.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:10.880 "adrfam": "ipv4", 00:39:10.880 "trsvcid": "$NVMF_PORT", 00:39:10.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:10.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:10.880 "hdgst": ${hdgst:-false}, 00:39:10.880 "ddgst": ${ddgst:-false} 00:39:10.880 }, 00:39:10.880 "method": "bdev_nvme_attach_controller" 00:39:10.880 } 00:39:10.880 EOF 00:39:10.880 )") 00:39:10.880 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:39:10.880 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 
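gen_nvmf_target_json expands the heredoc above once per requested subsystem and validates the result with jq; bdevperf receives it through bash process substitution, which is why the command line earlier in the trace shows --json /dev/fd/63. The fully rendered config appears in the trace just below. As a stand-alone sketch of the same invocation pattern (path shortened, flags copied from this run):

  # <(...) is exposed to the application as /dev/fd/63
  bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 0) \
      -q 64 -o 65536 -w verify -t 10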
00:39:10.880 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:39:10.880 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:39:10.880 "params": { 00:39:10.880 "name": "Nvme0", 00:39:10.880 "trtype": "tcp", 00:39:10.880 "traddr": "10.0.0.2", 00:39:10.880 "adrfam": "ipv4", 00:39:10.880 "trsvcid": "4420", 00:39:10.880 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:10.880 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:10.880 "hdgst": false, 00:39:10.880 "ddgst": false 00:39:10.880 }, 00:39:10.880 "method": "bdev_nvme_attach_controller" 00:39:10.880 }' 00:39:10.880 [2024-12-16 13:00:36.676669] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:39:10.880 [2024-12-16 13:00:36.676716] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid608822 ] 00:39:10.880 [2024-12-16 13:00:36.744274] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:10.880 [2024-12-16 13:00:36.782937] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:39:10.880 Running I/O for 10 seconds... 00:39:11.140 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:11.140 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:39:11.140 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:39:11.140 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:11.140 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:11.140 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:11.140 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:11.140 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:39:11.140 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:39:11.140 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:39:11.140 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:39:11.140 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:39:11.140 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:39:11.140 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:39:11.140 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:39:11.140 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:39:11.140 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:11.140 13:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:11.140 13:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:11.140 13:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=78 00:39:11.140 13:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 78 -ge 100 ']' 00:39:11.140 13:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:39:11.401 13:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:39:11.401 13:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:39:11.401 13:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:39:11.401 13:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:39:11.401 13:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:11.401 13:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:11.401 13:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:11.401 13:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:39:11.401 13:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:39:11.401 13:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:39:11.401 13:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:39:11.401 13:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:39:11.401 13:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:39:11.401 13:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:11.401 13:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:11.401 [2024-12-16 13:00:37.329611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84020 is same with the state(6) to be set 00:39:11.401 [2024-12-16 13:00:37.329772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.401 [2024-12-16 13:00:37.329801] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.401 [2024-12-16 13:00:37.329816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.401 [2024-12-16 13:00:37.329830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.401 [2024-12-16 13:00:37.329838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.401 [2024-12-16 13:00:37.329845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.401 [2024-12-16 13:00:37.329854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.401 [2024-12-16 13:00:37.329860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.401 [2024-12-16 13:00:37.329868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.401 [2024-12-16 13:00:37.329875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.401 [2024-12-16 13:00:37.329883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.401 [2024-12-16 13:00:37.329889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.401 [2024-12-16 13:00:37.329897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.401 [2024-12-16 13:00:37.329904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.401 [2024-12-16 13:00:37.329912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.401 [2024-12-16 13:00:37.329918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.401 [2024-12-16 13:00:37.329926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.401 [2024-12-16 13:00:37.329932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.401 [2024-12-16 13:00:37.329940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.401 [2024-12-16 13:00:37.329947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.401 [2024-12-16 13:00:37.329955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.401 [2024-12-16 13:00:37.329962] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.401 [2024-12-16 13:00:37.329969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.401 [2024-12-16 13:00:37.329976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.401 [2024-12-16 13:00:37.329983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.402 [2024-12-16 13:00:37.329990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.402 [2024-12-16 13:00:37.329998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.402 [2024-12-16 13:00:37.330004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.402 [2024-12-16 13:00:37.330014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.402 [2024-12-16 13:00:37.330020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.402 [2024-12-16 13:00:37.330028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.402 [2024-12-16 13:00:37.330035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.402 [2024-12-16 13:00:37.330043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.402 [2024-12-16 13:00:37.330049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.402 [2024-12-16 13:00:37.330058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.402 [2024-12-16 13:00:37.330064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.402 [2024-12-16 13:00:37.330072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.402 [2024-12-16 13:00:37.330078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.402 [2024-12-16 13:00:37.330086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.402 [2024-12-16 13:00:37.330094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.402 [2024-12-16 13:00:37.330101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.402 [2024-12-16 13:00:37.330108] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.402 [2024-12-16 13:00:37.330124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.402 [2024-12-16 13:00:37.330131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.402 [2024-12-16 13:00:37.330139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.402 [2024-12-16 13:00:37.330145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.402 [2024-12-16 13:00:37.330153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.402 [2024-12-16 13:00:37.330160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.402 [2024-12-16 13:00:37.330168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.402 [2024-12-16 13:00:37.330174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.402 [2024-12-16 13:00:37.330182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.402 [2024-12-16 13:00:37.330189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.402 [2024-12-16 13:00:37.330197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.402 [2024-12-16 13:00:37.330205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.402 [2024-12-16 13:00:37.330213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.402 [2024-12-16 13:00:37.330219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.402 [2024-12-16 13:00:37.330227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.402 [2024-12-16 13:00:37.330234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.402 [2024-12-16 13:00:37.330242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.402 [2024-12-16 13:00:37.330249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.402 [2024-12-16 13:00:37.330257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.402 [2024-12-16 13:00:37.330263] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.402 [2024-12-16 13:00:37.330271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.402 [2024-12-16 13:00:37.330277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.402 [2024-12-16 13:00:37.330285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.402 [2024-12-16 13:00:37.330291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.402 [2024-12-16 13:00:37.330299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.402 [2024-12-16 13:00:37.330306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.402 [2024-12-16 13:00:37.330313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.402 [2024-12-16 13:00:37.330320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.402 [2024-12-16 13:00:37.330328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.402 [2024-12-16 13:00:37.330334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.402 [2024-12-16 13:00:37.330342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.402 [2024-12-16 13:00:37.330349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.402 [2024-12-16 13:00:37.330356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.402 [2024-12-16 13:00:37.330363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.402 [2024-12-16 13:00:37.330371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.402 [2024-12-16 13:00:37.330379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.402 [2024-12-16 13:00:37.330387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.402 [2024-12-16 13:00:37.330394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.402 [2024-12-16 13:00:37.330401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.402 [2024-12-16 13:00:37.330408] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.402 [2024-12-16 13:00:37.330417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.402 [2024-12-16 13:00:37.330423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.402 [2024-12-16 13:00:37.330431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.402 [2024-12-16 13:00:37.330438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.402 [2024-12-16 13:00:37.330445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.402 [2024-12-16 13:00:37.330452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.402 [2024-12-16 13:00:37.330460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.402 [2024-12-16 13:00:37.330466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.402 [2024-12-16 13:00:37.330474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.402 [2024-12-16 13:00:37.330481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.402 [2024-12-16 13:00:37.330489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.402 [2024-12-16 13:00:37.330495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.402 [2024-12-16 13:00:37.330503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.402 [2024-12-16 13:00:37.330509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.402 [2024-12-16 13:00:37.330517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.402 [2024-12-16 13:00:37.330524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.402 [2024-12-16 13:00:37.330532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.402 [2024-12-16 13:00:37.330538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.402 [2024-12-16 13:00:37.330547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.402 [2024-12-16 13:00:37.330553] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.402 [2024-12-16 13:00:37.330561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.402 [2024-12-16 13:00:37.330569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.402 [2024-12-16 13:00:37.330577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.403 [2024-12-16 13:00:37.330584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.403 [2024-12-16 13:00:37.330592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.403 [2024-12-16 13:00:37.330599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.403 [2024-12-16 13:00:37.330606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.403 [2024-12-16 13:00:37.330613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.403 [2024-12-16 13:00:37.330621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.403 [2024-12-16 13:00:37.330627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.403 [2024-12-16 13:00:37.330636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.403 [2024-12-16 13:00:37.330642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.403 [2024-12-16 13:00:37.330650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.403 [2024-12-16 13:00:37.330656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.403 [2024-12-16 13:00:37.330664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.403 [2024-12-16 13:00:37.330670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.403 [2024-12-16 13:00:37.330678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.403 [2024-12-16 13:00:37.330684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.403 [2024-12-16 13:00:37.330692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.403 [2024-12-16 13:00:37.330698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.403 [2024-12-16 13:00:37.330706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.403 [2024-12-16 13:00:37.330713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.403 [2024-12-16 13:00:37.330721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.403 [2024-12-16 13:00:37.330727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.403 [2024-12-16 13:00:37.330735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.403 [2024-12-16 13:00:37.330742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.403 [2024-12-16 13:00:37.330806] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16bc070 was disconnected and freed. reset controller. 00:39:11.403 [2024-12-16 13:00:37.331718] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:39:11.403 task offset: 101248 on job bdev=Nvme0n1 fails 00:39:11.403 00:39:11.403 Latency(us) 00:39:11.403 [2024-12-16T12:00:37.470Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:11.403 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:39:11.403 Job: Nvme0n1 ended in about 0.40 seconds with error 00:39:11.403 Verification LBA range: start 0x0 length 0x400 00:39:11.403 Nvme0n1 : 0.40 1925.33 120.33 160.44 0.00 29863.13 1396.54 27088.21 00:39:11.403 [2024-12-16T12:00:37.470Z] =================================================================================================================== 00:39:11.403 [2024-12-16T12:00:37.470Z] Total : 1925.33 120.33 160.44 0.00 29863.13 1396.54 27088.21 00:39:11.403 [2024-12-16 13:00:37.334074] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:39:11.403 [2024-12-16 13:00:37.334096] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a2e90 (9): Bad file descriptor 00:39:11.403 13:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:11.403 13:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:39:11.403 [2024-12-16 13:00:37.335031] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:39:11.403 [2024-12-16 13:00:37.335109] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:39:11.403 [2024-12-16 13:00:37.335138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.403 [2024-12-16 13:00:37.335150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:39:11.403 [2024-12-16 
13:00:37.335158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:39:11.403 [2024-12-16 13:00:37.335167] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:11.403 [2024-12-16 13:00:37.335173] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14a2e90 00:39:11.403 [2024-12-16 13:00:37.335192] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a2e90 (9): Bad file descriptor 00:39:11.403 [2024-12-16 13:00:37.335203] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:39:11.403 [2024-12-16 13:00:37.335210] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:39:11.403 [2024-12-16 13:00:37.335218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:39:11.403 13:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:11.403 [2024-12-16 13:00:37.335229] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:11.403 13:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:11.403 13:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:11.403 13:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:39:12.340 13:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 608822 00:39:12.340 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (608822) - No such process 00:39:12.340 13:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:39:12.340 13:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:39:12.340 13:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:39:12.340 13:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:39:12.340 13:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:39:12.341 13:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:39:12.341 13:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:39:12.341 13:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:39:12.341 { 00:39:12.341 "params": { 00:39:12.341 "name": "Nvme$subsystem", 00:39:12.341 "trtype": "$TEST_TRANSPORT", 00:39:12.341 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:12.341 "adrfam": "ipv4", 00:39:12.341 "trsvcid": "$NVMF_PORT", 00:39:12.341 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:12.341 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:12.341 "hdgst": ${hdgst:-false}, 00:39:12.341 "ddgst": ${ddgst:-false} 00:39:12.341 }, 00:39:12.341 "method": "bdev_nvme_attach_controller" 00:39:12.341 } 00:39:12.341 EOF 00:39:12.341 )") 00:39:12.341 13:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:39:12.341 13:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:39:12.341 13:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:39:12.341 13:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:39:12.341 "params": { 00:39:12.341 "name": "Nvme0", 00:39:12.341 "trtype": "tcp", 00:39:12.341 "traddr": "10.0.0.2", 00:39:12.341 "adrfam": "ipv4", 00:39:12.341 "trsvcid": "4420", 00:39:12.341 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:12.341 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:12.341 "hdgst": false, 00:39:12.341 "ddgst": false 00:39:12.341 }, 00:39:12.341 "method": "bdev_nvme_attach_controller" 00:39:12.341 }' 00:39:12.341 [2024-12-16 13:00:38.399254] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:39:12.341 [2024-12-16 13:00:38.399300] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid609098 ] 00:39:12.600 [2024-12-16 13:00:38.465937] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:12.600 [2024-12-16 13:00:38.502866] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:39:12.600 Running I/O for 1 seconds... 
00:39:13.978 2134.00 IOPS, 133.38 MiB/s 00:39:13.978 Latency(us) 00:39:13.978 [2024-12-16T12:00:40.045Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:13.978 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:39:13.978 Verification LBA range: start 0x0 length 0x400 00:39:13.978 Nvme0n1 : 1.01 2183.94 136.50 0.00 0.00 28720.30 1685.21 30833.13 00:39:13.978 [2024-12-16T12:00:40.045Z] =================================================================================================================== 00:39:13.978 [2024-12-16T12:00:40.045Z] Total : 2183.94 136.50 0.00 0.00 28720.30 1685.21 30833.13 00:39:13.978 13:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:39:13.978 13:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:39:13.978 13:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:39:13.978 13:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:39:13.978 13:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:39:13.978 13:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # nvmfcleanup 00:39:13.978 13:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:39:13.978 13:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:13.978 13:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:39:13.978 13:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:13.978 13:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:13.978 rmmod nvme_tcp 00:39:13.978 rmmod nvme_fabrics 00:39:13.978 rmmod nvme_keyring 00:39:13.978 13:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:13.978 13:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:39:13.978 13:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:39:13.978 13:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@513 -- # '[' -n 608603 ']' 00:39:13.978 13:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@514 -- # killprocess 608603 00:39:13.978 13:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 608603 ']' 00:39:13.978 13:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 608603 00:39:13.978 13:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:39:13.978 13:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:13.978 13:00:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 608603 00:39:13.978 13:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:39:13.978 13:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:39:13.978 13:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 608603' 00:39:13.978 killing process with pid 608603 00:39:13.978 13:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 608603 00:39:13.978 13:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 608603 00:39:14.238 [2024-12-16 13:00:40.155247] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:39:14.238 13:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:39:14.238 13:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:39:14.238 13:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:39:14.238 13:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:39:14.238 13:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-save 00:39:14.238 13:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:39:14.238 13:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-restore 00:39:14.238 13:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:14.238 13:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:14.238 13:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:14.238 13:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:14.238 13:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:16.775 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:16.775 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:39:16.775 00:39:16.775 real 0m12.201s 00:39:16.775 user 0m17.039s 00:39:16.775 sys 0m6.439s 00:39:16.775 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:16.775 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:16.775 ************************************ 00:39:16.775 END TEST nvmf_host_management 00:39:16.775 ************************************ 00:39:16.775 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:39:16.775 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:39:16.775 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:16.775 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:16.775 ************************************ 00:39:16.775 START TEST nvmf_lvol 00:39:16.775 ************************************ 00:39:16.775 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:39:16.775 * Looking for test storage... 00:39:16.775 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:16.775 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:39:16.775 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:39:16.775 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:39:16.775 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:39:16.775 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:16.775 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:16.775 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:16.775 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:39:16.775 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:39:16.775 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:39:16.775 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:39:16.775 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:39:16.775 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:39:16.775 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:39:16.775 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:16.775 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:39:16.775 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:39:16.775 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:16.775 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:16.775 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:39:16.775 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:39:16.775 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:16.775 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:39:16.775 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:39:16.775 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:39:16.775 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:39:16.775 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:16.775 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:39:16.775 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:39:16.775 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:16.775 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:16.775 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:39:16.775 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:16.775 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:39:16.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:16.775 --rc genhtml_branch_coverage=1 00:39:16.775 --rc genhtml_function_coverage=1 00:39:16.775 --rc genhtml_legend=1 00:39:16.775 --rc geninfo_all_blocks=1 00:39:16.775 --rc geninfo_unexecuted_blocks=1 00:39:16.775 00:39:16.775 ' 00:39:16.775 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:39:16.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:16.776 --rc genhtml_branch_coverage=1 00:39:16.776 --rc genhtml_function_coverage=1 00:39:16.776 --rc genhtml_legend=1 00:39:16.776 --rc geninfo_all_blocks=1 00:39:16.776 --rc geninfo_unexecuted_blocks=1 00:39:16.776 00:39:16.776 ' 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:39:16.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:16.776 --rc genhtml_branch_coverage=1 00:39:16.776 --rc genhtml_function_coverage=1 00:39:16.776 --rc genhtml_legend=1 00:39:16.776 --rc geninfo_all_blocks=1 00:39:16.776 --rc geninfo_unexecuted_blocks=1 00:39:16.776 00:39:16.776 ' 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:39:16.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:16.776 --rc genhtml_branch_coverage=1 00:39:16.776 --rc genhtml_function_coverage=1 00:39:16.776 --rc genhtml_legend=1 00:39:16.776 --rc geninfo_all_blocks=1 00:39:16.776 --rc geninfo_unexecuted_blocks=1 00:39:16.776 00:39:16.776 ' 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:16.776 13:00:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@472 -- # prepare_net_devs 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@434 -- # local -g is_hw=no 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@436 -- # remove_spdk_ns 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:39:16.776 13:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:22.053 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:22.053 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:39:22.053 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:22.053 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:22.053 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:22.053 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:39:22.053 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:22.053 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:39:22.053 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:22.053 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:39:22.053 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:39:22.053 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:39:22.053 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:39:22.053 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:39:22.053 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:39:22.053 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:22.053 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:22.053 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:22.053 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:22.053 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:22.053 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:22.053 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:22.053 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:22.053 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:22.053 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:22.053 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:22.053 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:39:22.053 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:39:22.053 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:39:22.053 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:39:22.053 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:39:22.053 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:39:22.053 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:22.053 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:22.053 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:22.053 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:22.053 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:22.053 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:22.053 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:22.053 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:22.053 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:22.053 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:22.053 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:22.054 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:22.054 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:22.054 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:22.054 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:22.054 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:22.054 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:39:22.054 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:39:22.054 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:39:22.054 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:22.054 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:22.054 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:22.054 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:22.054 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:22.054 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:22.054 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:22.054 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:39:22.054 Found net devices under 0000:af:00.0: cvl_0_0 00:39:22.054 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:22.054 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:22.054 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
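The discovery pass being traced here can be reproduced by hand. A minimal sketch of the same sysfs lookup, assuming one of the E810 ports from this trace (0000:af:00.0) is present and bound to the ice driver as logged; the glob and the name-stripping step mirror the nvmf/common.sh statements above:

    pci=0000:af:00.0
    # Each PCI network function lists its netdev(s) under sysfs.
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # expands to e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep the interface name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"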
00:39:22.054 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:22.054 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:22.054 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:22.054 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:22.054 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:22.054 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:22.054 Found net devices under 0000:af:00.1: cvl_0_1 00:39:22.054 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:22.054 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:39:22.054 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # is_hw=yes 00:39:22.054 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:39:22.054 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:39:22.054 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:39:22.054 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:22.054 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:22.054 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:22.054 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:22.054 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:22.054 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:22.054 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:22.054 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:22.054 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:22.054 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:22.054 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:22.054 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:22.054 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:22.054 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:22.054 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:22.313 13:00:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:22.313 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:22.313 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:22.313 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:22.313 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:22.313 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:22.313 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:22.313 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:22.313 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:22.313 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:39:22.313 00:39:22.313 --- 10.0.0.2 ping statistics --- 00:39:22.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:22.313 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:39:22.313 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:22.313 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:22.313 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:39:22.313 00:39:22.313 --- 10.0.0.1 ping statistics --- 00:39:22.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:22.313 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:39:22.313 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:22.313 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # return 0 00:39:22.313 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:39:22.313 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:22.313 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:39:22.313 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:39:22.313 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:22.313 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:39:22.313 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:39:22.313 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:39:22.313 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:39:22.313 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:22.313 13:00:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:22.313 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@505 -- # nvmfpid=612753 00:39:22.313 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@506 -- # waitforlisten 612753 00:39:22.313 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:39:22.313 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 612753 ']' 00:39:22.313 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:22.313 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:22.314 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:22.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:22.314 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:22.314 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:22.314 [2024-12-16 13:00:48.365169] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:22.314 [2024-12-16 13:00:48.366059] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:39:22.314 [2024-12-16 13:00:48.366092] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:22.573 [2024-12-16 13:00:48.439072] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:22.573 [2024-12-16 13:00:48.478936] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:22.573 [2024-12-16 13:00:48.478975] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:22.573 [2024-12-16 13:00:48.478983] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:22.573 [2024-12-16 13:00:48.478989] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:22.573 [2024-12-16 13:00:48.478994] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:22.573 [2024-12-16 13:00:48.479053] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:39:22.573 [2024-12-16 13:00:48.479091] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:39:22.573 [2024-12-16 13:00:48.479092] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:39:22.573 [2024-12-16 13:00:48.550452] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:22.573 [2024-12-16 13:00:48.551186] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
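The reactor and intr-mode thread notices above come from the target that nvmfappstart launched inside the test namespace. A condensed form of that invocation, taken from the nvmf/common.sh@504 trace above (the core mask 0x7 accounts for the three reactors on cores 0-2, and --interrupt-mode for the intr-mode notices):

    # Start the SPDK NVMe-oF target inside the target-side namespace.
    #   -i 0              shared-memory ID, reused later by process_shm
    #   -e 0xFFFF         tracepoint group mask (see the app_setup_trace notices)
    #   --interrupt-mode  reactors wait on file descriptors instead of busy-polling
    #   -m 0x7            core mask: reactors on cores 0, 1 and 2
    ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &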
00:39:22.573 [2024-12-16 13:00:48.551522] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:22.573 [2024-12-16 13:00:48.551680] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:22.573 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:22.573 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:39:22.573 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:39:22.573 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:22.573 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:22.573 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:22.573 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:22.832 [2024-12-16 13:00:48.783911] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:22.832 13:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:23.091 13:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:39:23.091 13:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:23.350 13:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:39:23.350 13:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:39:23.609 13:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:39:23.869 13:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=f4f444b0-b917-4c33-acc9-72cf2e27de7c 00:39:23.869 13:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f4f444b0-b917-4c33-acc9-72cf2e27de7c lvol 20 00:39:23.869 13:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=178c14e5-65e0-46f6-b288-2246d9e61e3e 00:39:23.869 13:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:39:24.128 13:00:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 178c14e5-65e0-46f6-b288-2246d9e61e3e 00:39:24.387 13:00:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:39:24.387 [2024-12-16 13:00:50.435814] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:39:24.646 13:00:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:39:24.646 13:00:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18
00:39:24.646 13:00:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=613051
00:39:24.646 13:00:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1
00:39:26.024 13:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 178c14e5-65e0-46f6-b288-2246d9e61e3e MY_SNAPSHOT
00:39:26.024 13:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=60ec17da-ad92-439c-805b-4b0cc77ddd42
00:39:26.024 13:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 178c14e5-65e0-46f6-b288-2246d9e61e3e 30
00:39:26.283 13:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 60ec17da-ad92-439c-805b-4b0cc77ddd42 MY_CLONE
00:39:26.542 13:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=9d100796-cad2-4932-a388-013f2251b96b
00:39:26.542 13:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 9d100796-cad2-4932-a388-013f2251b96b
00:39:27.110 13:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 613051
00:39:35.240 Initializing NVMe Controllers
00:39:35.240 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:39:35.240 Controller IO queue size 128, less than required.
00:39:35.240 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:39:35.240 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:39:35.240 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:39:35.240 Initialization complete. Launching workers.
00:39:35.240 ========================================================
00:39:35.240 Latency(us)
00:39:35.240 Device Information : IOPS MiB/s Average min max
00:39:35.240 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12560.60 49.06 10191.42 237.84 45364.87
00:39:35.240 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12331.60 48.17 10379.97 5407.00 47808.29
00:39:35.240 ========================================================
00:39:35.240 Total : 24892.20 97.24 10284.83 237.84 47808.29
00:39:35.240
00:39:35.240 13:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:39:35.500 13:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 178c14e5-65e0-46f6-b288-2246d9e61e3e
00:39:35.500 13:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f4f444b0-b917-4c33-acc9-72cf2e27de7c
00:39:35.759 13:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:39:35.759 13:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:39:35.759 13:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:39:35.759 13:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # nvmfcleanup
00:39:35.759 13:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:39:35.759 13:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:39:35.759 13:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:39:35.759 13:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:39:35.759 13:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:39:35.759 rmmod nvme_tcp
00:39:35.759 rmmod nvme_fabrics
00:39:35.759 rmmod nvme_keyring
00:39:35.759 13:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:39:35.759 13:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:39:35.759 13:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:39:35.759 13:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@513 -- # '[' -n 612753 ']'
00:39:35.759 13:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@514 -- # killprocess 612753
00:39:35.759 13:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 612753 ']'
00:39:35.759 13:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 612753
00:39:35.759 13:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # uname
00:39:35.759 13:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:39:35.759 13:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol --
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 612753 00:39:36.019 13:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:36.019 13:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:36.019 13:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 612753' 00:39:36.019 killing process with pid 612753 00:39:36.019 13:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 612753 00:39:36.019 13:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 612753 00:39:36.019 13:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:39:36.019 13:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:39:36.019 13:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:39:36.019 13:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:39:36.019 13:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-save 00:39:36.019 13:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:39:36.019 13:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-restore 00:39:36.019 13:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:36.019 13:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:36.019 13:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:36.019 13:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:36.019 13:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:38.555 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:38.555 00:39:38.555 real 0m21.820s 00:39:38.555 user 0m55.539s 00:39:38.555 sys 0m10.080s 00:39:38.555 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:38.555 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:38.555 ************************************ 00:39:38.555 END TEST nvmf_lvol 00:39:38.555 ************************************ 00:39:38.555 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:39:38.555 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:39:38.555 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:38.555 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:38.555 ************************************ 00:39:38.555 START TEST nvmf_lvs_grow 00:39:38.555 
************************************ 00:39:38.555 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:39:38.555 * Looking for test storage... 00:39:38.555 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:38.555 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:39:38.555 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:39:38.555 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:39:38.555 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:39:38.555 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:38.555 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:38.555 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:38.555 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:39:38.555 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:39:38.555 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:39:38.555 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:39:38.555 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:39:38.555 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:39:38.555 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:39:38.555 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:38.555 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:39:38.555 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:39:38.555 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:38.555 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:38.555 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:39:38.555 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:39:38.555 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:38.555 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:39:38.555 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:39:38.555 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:39:38.555 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:39:38.555 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:38.555 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:39:38.555 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:39:38.555 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:38.555 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:39:38.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:38.556 --rc genhtml_branch_coverage=1 00:39:38.556 --rc genhtml_function_coverage=1 00:39:38.556 --rc genhtml_legend=1 00:39:38.556 --rc geninfo_all_blocks=1 00:39:38.556 --rc geninfo_unexecuted_blocks=1 00:39:38.556 00:39:38.556 ' 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:39:38.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:38.556 --rc genhtml_branch_coverage=1 00:39:38.556 --rc genhtml_function_coverage=1 00:39:38.556 --rc genhtml_legend=1 00:39:38.556 --rc geninfo_all_blocks=1 00:39:38.556 --rc geninfo_unexecuted_blocks=1 00:39:38.556 00:39:38.556 ' 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:39:38.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:38.556 --rc genhtml_branch_coverage=1 00:39:38.556 --rc genhtml_function_coverage=1 00:39:38.556 --rc genhtml_legend=1 00:39:38.556 --rc geninfo_all_blocks=1 00:39:38.556 --rc geninfo_unexecuted_blocks=1 00:39:38.556 00:39:38.556 ' 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:39:38.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:38.556 --rc genhtml_branch_coverage=1 00:39:38.556 --rc genhtml_function_coverage=1 00:39:38.556 --rc genhtml_legend=1 00:39:38.556 --rc geninfo_all_blocks=1 00:39:38.556 --rc geninfo_unexecuted_blocks=1 00:39:38.556 00:39:38.556 ' 00:39:38.556 13:01:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@472 -- # prepare_net_devs 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@434 -- # local -g is_hw=no 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@436 -- # remove_spdk_ns 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:39:38.556 13:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:45.128 13:01:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 
00:39:45.128 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:45.128 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:39:45.128 Found net devices under 0000:af:00.0: cvl_0_0 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@407 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:45.128 Found net devices under 0000:af:00.1: cvl_0_1 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # is_hw=yes 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:45.128 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:45.129 13:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:45.129 13:01:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:45.129 13:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:45.129 13:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:45.129 13:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:45.129 13:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:45.129 13:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:45.129 13:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:45.129 13:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:45.129 13:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:45.129 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:45.129 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:39:45.129 00:39:45.129 --- 10.0.0.2 ping statistics --- 00:39:45.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:45.129 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:39:45.129 13:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:45.129 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:45.129 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:39:45.129 00:39:45.129 --- 10.0.0.1 ping statistics --- 00:39:45.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:45.129 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:39:45.129 13:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:45.129 13:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # return 0 00:39:45.129 13:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:39:45.129 13:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:45.129 13:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:39:45.129 13:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:39:45.129 13:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:45.129 13:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:39:45.129 13:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:39:45.129 13:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:39:45.129 13:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:39:45.129 13:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:45.129 13:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:45.129 13:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@505 -- # nvmfpid=618260 00:39:45.129 13:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@506 -- # waitforlisten 618260 00:39:45.129 13:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:39:45.129 13:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 618260 ']' 00:39:45.129 13:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:45.129 13:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:45.129 13:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:45.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:45.129 13:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:45.129 13:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:45.129 [2024-12-16 13:01:10.256510] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
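The nvmf_tcp_init sequence traced above is plain ip/iptables plumbing: one port of the NIC (cvl_0_1) stays in the default namespace as the initiator side, the other (cvl_0_0) is moved into a private namespace as the target side, a firewall rule opens TCP/4420, and a ping in each direction proves the 10.0.0.0/24 link before nvmf_tgt is launched inside the namespace. A minimal sketch of the same steps, keeping the interface names from this run (they come from this machine's ice-driven E810 ports and will differ elsewhere):

    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                          # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator IP, default namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP on the initiator port
    ping -c 1 10.0.0.2                                       # default namespace -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1                   # target namespace -> initiator
    ip netns exec "$NS" build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &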
00:39:45.129 [2024-12-16 13:01:10.257420] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:39:45.129 [2024-12-16 13:01:10.257454] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:45.129 [2024-12-16 13:01:10.329797] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:45.129 [2024-12-16 13:01:10.368393] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:45.129 [2024-12-16 13:01:10.368433] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:45.129 [2024-12-16 13:01:10.368440] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:45.129 [2024-12-16 13:01:10.368446] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:45.129 [2024-12-16 13:01:10.368451] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:45.129 [2024-12-16 13:01:10.368486] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:39:45.129 [2024-12-16 13:01:10.428987] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:45.129 [2024-12-16 13:01:10.429239] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:45.129 13:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:45.129 13:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:39:45.129 13:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:39:45.129 13:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:45.129 13:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:45.129 13:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:45.129 13:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:45.129 [2024-12-16 13:01:10.661146] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:45.129 13:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:39:45.129 13:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:39:45.129 13:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:45.129 13:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:45.129 ************************************ 00:39:45.129 START TEST lvs_grow_clean 00:39:45.129 ************************************ 00:39:45.129 13:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # 
lvs_grow 00:39:45.129 13:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:39:45.129 13:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:39:45.129 13:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:39:45.129 13:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:39:45.129 13:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:39:45.129 13:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:39:45.129 13:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:45.129 13:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:45.129 13:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:45.129 13:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:39:45.129 13:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:39:45.129 13:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=2560122c-1278-4f26-bffe-f4f8964cbd53 00:39:45.129 13:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2560122c-1278-4f26-bffe-f4f8964cbd53 00:39:45.129 13:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:39:45.389 13:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:39:45.389 13:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:39:45.389 13:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2560122c-1278-4f26-bffe-f4f8964cbd53 lvol 150 00:39:45.648 13:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=2679cb87-ec9b-49ea-b132-7bc3f6548767 00:39:45.648 13:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:45.648 13:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:39:45.648 [2024-12-16 13:01:11.672873] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:39:45.648 [2024-12-16 13:01:11.673016] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:39:45.648 true 00:39:45.648 13:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:39:45.649 13:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2560122c-1278-4f26-bffe-f4f8964cbd53 00:39:45.908 13:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:39:45.908 13:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:39:46.167 13:01:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2679cb87-ec9b-49ea-b132-7bc3f6548767 00:39:46.426 13:01:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:46.426 [2024-12-16 13:01:12.441375] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:46.426 13:01:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:46.685 13:01:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:39:46.685 13:01:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=618556 00:39:46.685 13:01:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:46.685 13:01:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 618556 /var/tmp/bdevperf.sock 00:39:46.685 13:01:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 618556 ']' 00:39:46.685 13:01:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:39:46.685 13:01:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:46.685 13:01:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:46.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:39:46.685 13:01:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:46.685 13:01:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:39:46.685 [2024-12-16 13:01:12.695980] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:39:46.685 [2024-12-16 13:01:12.696028] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid618556 ] 00:39:46.944 [2024-12-16 13:01:12.766204] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:46.944 [2024-12-16 13:01:12.805462] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:39:46.944 13:01:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:46.944 13:01:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:39:46.944 13:01:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:39:47.203 Nvme0n1 00:39:47.203 13:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:39:47.462 [ 00:39:47.462 { 00:39:47.462 "name": "Nvme0n1", 00:39:47.462 "aliases": [ 00:39:47.462 "2679cb87-ec9b-49ea-b132-7bc3f6548767" 00:39:47.462 ], 00:39:47.462 "product_name": "NVMe disk", 00:39:47.462 "block_size": 4096, 00:39:47.462 "num_blocks": 38912, 00:39:47.462 "uuid": "2679cb87-ec9b-49ea-b132-7bc3f6548767", 00:39:47.462 "numa_id": 1, 00:39:47.462 "assigned_rate_limits": { 00:39:47.462 "rw_ios_per_sec": 0, 00:39:47.462 "rw_mbytes_per_sec": 0, 00:39:47.462 "r_mbytes_per_sec": 0, 00:39:47.462 "w_mbytes_per_sec": 0 00:39:47.462 }, 00:39:47.462 "claimed": false, 00:39:47.462 "zoned": false, 00:39:47.462 "supported_io_types": { 00:39:47.462 "read": true, 00:39:47.462 "write": true, 00:39:47.462 "unmap": true, 00:39:47.462 "flush": true, 00:39:47.462 "reset": true, 00:39:47.462 "nvme_admin": true, 00:39:47.462 "nvme_io": true, 00:39:47.462 "nvme_io_md": false, 00:39:47.462 "write_zeroes": true, 00:39:47.462 "zcopy": false, 00:39:47.462 "get_zone_info": false, 00:39:47.462 "zone_management": false, 00:39:47.462 "zone_append": false, 00:39:47.462 "compare": true, 00:39:47.462 "compare_and_write": true, 00:39:47.462 "abort": true, 00:39:47.462 "seek_hole": false, 00:39:47.462 "seek_data": false, 00:39:47.462 "copy": true, 
00:39:47.462 "nvme_iov_md": false 00:39:47.462 }, 00:39:47.462 "memory_domains": [ 00:39:47.462 { 00:39:47.462 "dma_device_id": "system", 00:39:47.462 "dma_device_type": 1 00:39:47.462 } 00:39:47.462 ], 00:39:47.462 "driver_specific": { 00:39:47.462 "nvme": [ 00:39:47.462 { 00:39:47.462 "trid": { 00:39:47.463 "trtype": "TCP", 00:39:47.463 "adrfam": "IPv4", 00:39:47.463 "traddr": "10.0.0.2", 00:39:47.463 "trsvcid": "4420", 00:39:47.463 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:39:47.463 }, 00:39:47.463 "ctrlr_data": { 00:39:47.463 "cntlid": 1, 00:39:47.463 "vendor_id": "0x8086", 00:39:47.463 "model_number": "SPDK bdev Controller", 00:39:47.463 "serial_number": "SPDK0", 00:39:47.463 "firmware_revision": "24.09.1", 00:39:47.463 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:47.463 "oacs": { 00:39:47.463 "security": 0, 00:39:47.463 "format": 0, 00:39:47.463 "firmware": 0, 00:39:47.463 "ns_manage": 0 00:39:47.463 }, 00:39:47.463 "multi_ctrlr": true, 00:39:47.463 "ana_reporting": false 00:39:47.463 }, 00:39:47.463 "vs": { 00:39:47.463 "nvme_version": "1.3" 00:39:47.463 }, 00:39:47.463 "ns_data": { 00:39:47.463 "id": 1, 00:39:47.463 "can_share": true 00:39:47.463 } 00:39:47.463 } 00:39:47.463 ], 00:39:47.463 "mp_policy": "active_passive" 00:39:47.463 } 00:39:47.463 } 00:39:47.463 ] 00:39:47.463 13:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=618778 00:39:47.463 13:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:47.463 13:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:39:47.463 Running I/O for 10 seconds... 
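Everything bdevperf is now driving was assembled from a short chain of RPCs: a 200 MiB file becomes a 4 KiB-block AIO bdev, an lvstore with 4 MiB clusters is created on it (49 data clusters once metadata is set aside), a 150 MiB lvol takes 38 of them, and the lvol is exported as namespace 1 of nqn.2016-06.io.spdk:cnode0 for a separate bdevperf process to attach over TCP. A condensed sketch, where rpc is shorthand for the tree's scripts/rpc.py and the backing-file path is abbreviated (UUIDs are generated per run):

    rpc() { scripts/rpc.py "$@"; }                  # target RPC over /var/tmp/spdk.sock
    aio_file=test/nvmf/target/aio_bdev              # abbreviated; the log uses the full workspace path
    truncate -s 200M "$aio_file"
    rpc bdev_aio_create "$aio_file" aio_bdev 4096
    lvs=$(rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    lvol=$(rpc bdev_lvol_create -u "$lvs" lvol 150) # 150 MiB -> 38 of the 49 4 MiB clusters
    rpc nvmf_create_transport -t tcp -o -u 8192
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # initiator side: bdevperf is its own SPDK app with a private RPC socket
    build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &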
00:39:48.400 Latency(us) 00:39:48.400 [2024-12-16T12:01:14.467Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:48.400 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:48.400 Nvme0n1 : 1.00 22648.00 88.47 0.00 0.00 0.00 0.00 0.00 00:39:48.400 [2024-12-16T12:01:14.467Z] =================================================================================================================== 00:39:48.400 [2024-12-16T12:01:14.467Z] Total : 22648.00 88.47 0.00 0.00 0.00 0.00 0.00 00:39:48.400 00:39:49.337 13:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2560122c-1278-4f26-bffe-f4f8964cbd53 00:39:49.596 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:49.596 Nvme0n1 : 2.00 23040.00 90.00 0.00 0.00 0.00 0.00 0.00 00:39:49.596 [2024-12-16T12:01:15.663Z] =================================================================================================================== 00:39:49.596 [2024-12-16T12:01:15.663Z] Total : 23040.00 90.00 0.00 0.00 0.00 0.00 0.00 00:39:49.596 00:39:49.596 true 00:39:49.596 13:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2560122c-1278-4f26-bffe-f4f8964cbd53 00:39:49.596 13:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:39:49.854 13:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:39:49.854 13:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:39:49.854 13:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 618778 00:39:50.422 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:50.422 Nvme0n1 : 3.00 23161.00 90.47 0.00 0.00 0.00 0.00 0.00 00:39:50.422 [2024-12-16T12:01:16.489Z] =================================================================================================================== 00:39:50.422 [2024-12-16T12:01:16.489Z] Total : 23161.00 90.47 0.00 0.00 0.00 0.00 0.00 00:39:50.422 00:39:51.799 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:51.799 Nvme0n1 : 4.00 23251.25 90.83 0.00 0.00 0.00 0.00 0.00 00:39:51.799 [2024-12-16T12:01:17.866Z] =================================================================================================================== 00:39:51.799 [2024-12-16T12:01:17.866Z] Total : 23251.25 90.83 0.00 0.00 0.00 0.00 0.00 00:39:51.799 00:39:52.735 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:52.735 Nvme0n1 : 5.00 23343.80 91.19 0.00 0.00 0.00 0.00 0.00 00:39:52.735 [2024-12-16T12:01:18.802Z] =================================================================================================================== 00:39:52.735 [2024-12-16T12:01:18.802Z] Total : 23343.80 91.19 0.00 0.00 0.00 0.00 0.00 00:39:52.735 00:39:53.671 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:53.671 Nvme0n1 : 6.00 23385.67 91.35 0.00 0.00 0.00 0.00 0.00 00:39:53.671 [2024-12-16T12:01:19.738Z] 
=================================================================================================================== 00:39:53.671 [2024-12-16T12:01:19.738Z] Total : 23385.67 91.35 0.00 0.00 0.00 0.00 0.00 00:39:53.671 00:39:54.608 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:54.608 Nvme0n1 : 7.00 23386.00 91.35 0.00 0.00 0.00 0.00 0.00 00:39:54.608 [2024-12-16T12:01:20.675Z] =================================================================================================================== 00:39:54.608 [2024-12-16T12:01:20.675Z] Total : 23386.00 91.35 0.00 0.00 0.00 0.00 0.00 00:39:54.608 00:39:55.546 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:55.546 Nvme0n1 : 8.00 23408.38 91.44 0.00 0.00 0.00 0.00 0.00 00:39:55.546 [2024-12-16T12:01:21.613Z] =================================================================================================================== 00:39:55.546 [2024-12-16T12:01:21.613Z] Total : 23408.38 91.44 0.00 0.00 0.00 0.00 0.00 00:39:55.546 00:39:56.483 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:56.483 Nvme0n1 : 9.00 23425.89 91.51 0.00 0.00 0.00 0.00 0.00 00:39:56.483 [2024-12-16T12:01:22.550Z] =================================================================================================================== 00:39:56.483 [2024-12-16T12:01:22.550Z] Total : 23425.89 91.51 0.00 0.00 0.00 0.00 0.00 00:39:56.483 00:39:57.420 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:57.420 Nvme0n1 : 10.00 23456.70 91.63 0.00 0.00 0.00 0.00 0.00 00:39:57.420 [2024-12-16T12:01:23.487Z] =================================================================================================================== 00:39:57.420 [2024-12-16T12:01:23.487Z] Total : 23456.70 91.63 0.00 0.00 0.00 0.00 0.00 00:39:57.420 00:39:57.420 00:39:57.420 Latency(us) 00:39:57.420 [2024-12-16T12:01:23.488Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:57.421 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:57.421 Nvme0n1 : 10.01 23456.09 91.63 0.00 0.00 5453.76 3105.16 26963.38 00:39:57.421 [2024-12-16T12:01:23.488Z] =================================================================================================================== 00:39:57.421 [2024-12-16T12:01:23.488Z] Total : 23456.09 91.63 0.00 0.00 5453.76 3105.16 26963.38 00:39:57.421 { 00:39:57.421 "results": [ 00:39:57.421 { 00:39:57.421 "job": "Nvme0n1", 00:39:57.421 "core_mask": "0x2", 00:39:57.421 "workload": "randwrite", 00:39:57.421 "status": "finished", 00:39:57.421 "queue_depth": 128, 00:39:57.421 "io_size": 4096, 00:39:57.421 "runtime": 10.005716, 00:39:57.421 "iops": 23456.092497528414, 00:39:57.421 "mibps": 91.62536131847037, 00:39:57.421 "io_failed": 0, 00:39:57.421 "io_timeout": 0, 00:39:57.421 "avg_latency_us": 5453.762498854136, 00:39:57.421 "min_latency_us": 3105.158095238095, 00:39:57.421 "max_latency_us": 26963.382857142857 00:39:57.421 } 00:39:57.421 ], 00:39:57.421 "core_count": 1 00:39:57.421 } 00:39:57.421 13:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 618556 00:39:57.421 13:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 618556 ']' 00:39:57.421 13:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 618556 
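The grow that gives the test its name happened two seconds into that run. The backing file had already been extended to 400 MiB and the AIO bdev rescanned before I/O started (block count 51200 -> 102400, with the lvstore still reporting 49 clusters); a single bdev_lvol_grow_lvstore call then claims the new space under live I/O, taking total_data_clusters from 49 to 99 with no dip in the per-second IOPS above. Continuing the shorthand from the earlier sketch:

    truncate -s 400M "$aio_file"                    # done before the run in this test
    rpc bdev_aio_rescan aio_bdev                    # AIO bdev: 51200 -> 102400 4 KiB blocks
    rpc bdev_lvol_grow_lvstore -u "$lvs"            # issued mid-run; lvstore expands into the new space
    rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'  # expect 99
    rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'        # 99 - 38 allocated = 61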
00:39:57.680 13:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:39:57.680 13:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:57.680 13:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 618556 00:39:57.680 13:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:39:57.680 13:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:39:57.680 13:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 618556' 00:39:57.680 killing process with pid 618556 00:39:57.680 13:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 618556 00:39:57.680 Received shutdown signal, test time was about 10.000000 seconds 00:39:57.680 00:39:57.680 Latency(us) 00:39:57.680 [2024-12-16T12:01:23.747Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:57.680 [2024-12-16T12:01:23.747Z] =================================================================================================================== 00:39:57.680 [2024-12-16T12:01:23.747Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:57.680 13:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 618556 00:39:57.680 13:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:57.939 13:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:58.199 13:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2560122c-1278-4f26-bffe-f4f8964cbd53 00:39:58.199 13:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:39:58.458 13:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:39:58.458 13:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:39:58.458 13:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:39:58.458 [2024-12-16 13:01:24.484937] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:39:58.717 13:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2560122c-1278-4f26-bffe-f4f8964cbd53 
00:39:58.717 13:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:39:58.717 13:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2560122c-1278-4f26-bffe-f4f8964cbd53 00:39:58.717 13:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:58.717 13:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:58.717 13:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:58.717 13:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:58.717 13:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:58.717 13:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:58.717 13:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:58.717 13:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:39:58.717 13:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2560122c-1278-4f26-bffe-f4f8964cbd53 00:39:58.717 request: 00:39:58.717 { 00:39:58.717 "uuid": "2560122c-1278-4f26-bffe-f4f8964cbd53", 00:39:58.717 "method": "bdev_lvol_get_lvstores", 00:39:58.717 "req_id": 1 00:39:58.717 } 00:39:58.717 Got JSON-RPC error response 00:39:58.717 response: 00:39:58.717 { 00:39:58.717 "code": -19, 00:39:58.717 "message": "No such device" 00:39:58.717 } 00:39:58.717 13:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:39:58.717 13:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:39:58.717 13:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:39:58.717 13:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:39:58.717 13:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:58.975 aio_bdev 00:39:58.975 13:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
2679cb87-ec9b-49ea-b132-7bc3f6548767 00:39:58.975 13:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=2679cb87-ec9b-49ea-b132-7bc3f6548767 00:39:58.975 13:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:39:58.975 13:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:39:58.975 13:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:39:58.975 13:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:39:58.975 13:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:39:59.239 13:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2679cb87-ec9b-49ea-b132-7bc3f6548767 -t 2000 00:39:59.239 [ 00:39:59.239 { 00:39:59.239 "name": "2679cb87-ec9b-49ea-b132-7bc3f6548767", 00:39:59.239 "aliases": [ 00:39:59.239 "lvs/lvol" 00:39:59.239 ], 00:39:59.239 "product_name": "Logical Volume", 00:39:59.239 "block_size": 4096, 00:39:59.239 "num_blocks": 38912, 00:39:59.240 "uuid": "2679cb87-ec9b-49ea-b132-7bc3f6548767", 00:39:59.240 "assigned_rate_limits": { 00:39:59.240 "rw_ios_per_sec": 0, 00:39:59.240 "rw_mbytes_per_sec": 0, 00:39:59.240 "r_mbytes_per_sec": 0, 00:39:59.240 "w_mbytes_per_sec": 0 00:39:59.240 }, 00:39:59.240 "claimed": false, 00:39:59.240 "zoned": false, 00:39:59.240 "supported_io_types": { 00:39:59.240 "read": true, 00:39:59.240 "write": true, 00:39:59.240 "unmap": true, 00:39:59.240 "flush": false, 00:39:59.240 "reset": true, 00:39:59.240 "nvme_admin": false, 00:39:59.240 "nvme_io": false, 00:39:59.240 "nvme_io_md": false, 00:39:59.240 "write_zeroes": true, 00:39:59.240 "zcopy": false, 00:39:59.240 "get_zone_info": false, 00:39:59.240 "zone_management": false, 00:39:59.240 "zone_append": false, 00:39:59.240 "compare": false, 00:39:59.240 "compare_and_write": false, 00:39:59.240 "abort": false, 00:39:59.240 "seek_hole": true, 00:39:59.240 "seek_data": true, 00:39:59.240 "copy": false, 00:39:59.240 "nvme_iov_md": false 00:39:59.240 }, 00:39:59.240 "driver_specific": { 00:39:59.240 "lvol": { 00:39:59.240 "lvol_store_uuid": "2560122c-1278-4f26-bffe-f4f8964cbd53", 00:39:59.240 "base_bdev": "aio_bdev", 00:39:59.240 "thin_provision": false, 00:39:59.240 "num_allocated_clusters": 38, 00:39:59.240 "snapshot": false, 00:39:59.240 "clone": false, 00:39:59.240 "esnap_clone": false 00:39:59.240 } 00:39:59.240 } 00:39:59.240 } 00:39:59.240 ] 00:39:59.240 13:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:39:59.240 13:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2560122c-1278-4f26-bffe-f4f8964cbd53 00:39:59.240 13:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:39:59.498 13:01:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:39:59.498 13:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2560122c-1278-4f26-bffe-f4f8964cbd53 00:39:59.498 13:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:39:59.757 13:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:39:59.757 13:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2679cb87-ec9b-49ea-b132-7bc3f6548767 00:40:00.016 13:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2560122c-1278-4f26-bffe-f4f8964cbd53 00:40:00.275 13:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:00.275 13:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:00.275 00:40:00.275 real 0m15.614s 00:40:00.275 user 0m15.102s 00:40:00.275 sys 0m1.460s 00:40:00.275 13:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:00.275 13:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:40:00.275 ************************************ 00:40:00.275 END TEST lvs_grow_clean 00:40:00.275 ************************************ 00:40:00.275 13:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:40:00.275 13:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:40:00.275 13:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:00.275 13:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:00.534 ************************************ 00:40:00.534 START TEST lvs_grow_dirty 00:40:00.534 ************************************ 00:40:00.534 13:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:40:00.534 13:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:40:00.534 13:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:40:00.534 13:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:40:00.534 13:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:40:00.534 13:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:40:00.534 13:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:40:00.534 13:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:00.534 13:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:00.534 13:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:00.534 13:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:40:00.534 13:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:40:00.793 13:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=fc108a5c-6654-46ac-846a-a51ca4a87fc6 00:40:00.794 13:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc108a5c-6654-46ac-846a-a51ca4a87fc6 00:40:00.794 13:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:40:01.052 13:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:40:01.052 13:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:40:01.052 13:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fc108a5c-6654-46ac-846a-a51ca4a87fc6 lvol 150 00:40:01.311 13:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=73b1b7da-e5d4-455d-8617-3dce4b31a458 00:40:01.311 13:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:01.311 13:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:40:01.311 [2024-12-16 13:01:27.340869] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:40:01.311 [2024-12-16 13:01:27.340992] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:40:01.311 true 00:40:01.311 13:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:40:01.311 13:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc108a5c-6654-46ac-846a-a51ca4a87fc6 00:40:01.570 13:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:40:01.570 13:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:40:01.829 13:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 73b1b7da-e5d4-455d-8617-3dce4b31a458 00:40:02.088 13:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:02.088 [2024-12-16 13:01:28.089307] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:02.088 13:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:02.347 13:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=621050 00:40:02.347 13:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:40:02.347 13:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:02.347 13:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 621050 /var/tmp/bdevperf.sock 00:40:02.347 13:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 621050 ']' 00:40:02.347 13:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:40:02.347 13:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:02.347 13:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:02.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
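The dirty variant now starting repeats the same provisioning. What set the clean variant's teardown apart, just above, was its failure-path check: deleting the backing AIO bdev hot-removes the still-open lvstore, a follow-up bdev_lvol_get_lvstores must come back with JSON-RPC error -19 (No such device), and re-creating the AIO bdev lets bdev examine re-register the lvol from on-disk metadata with its 38 allocated clusters intact. Sketched with the same calls:

    rpc bdev_aio_delete aio_bdev                    # hot-remove: lvstore 'lvs' is closed underneath
    if rpc bdev_lvol_get_lvstores -u "$lvs"; then   # must fail with -19, No such device
        echo "lvstore unexpectedly survived hot-remove" >&2; exit 1
    fi
    rpc bdev_aio_create "$aio_file" aio_bdev 4096   # re-register; examine rediscovers lvs and the lvol
    rpc bdev_wait_for_examine
    rpc bdev_get_bdevs -b "$lvol" -t 2000           # lvol is back, num_allocated_clusters still 38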
00:40:02.347 13:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:02.347 13:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:02.347 [2024-12-16 13:01:28.332825] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:40:02.347 [2024-12-16 13:01:28.332872] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid621050 ] 00:40:02.347 [2024-12-16 13:01:28.400076] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:02.607 [2024-12-16 13:01:28.438559] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:40:02.607 13:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:02.607 13:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:40:02.607 13:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:40:02.866 Nvme0n1 00:40:03.125 13:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:40:03.125 [ 00:40:03.125 { 00:40:03.125 "name": "Nvme0n1", 00:40:03.125 "aliases": [ 00:40:03.125 "73b1b7da-e5d4-455d-8617-3dce4b31a458" 00:40:03.125 ], 00:40:03.126 "product_name": "NVMe disk", 00:40:03.126 "block_size": 4096, 00:40:03.126 "num_blocks": 38912, 00:40:03.126 "uuid": "73b1b7da-e5d4-455d-8617-3dce4b31a458", 00:40:03.126 "numa_id": 1, 00:40:03.126 "assigned_rate_limits": { 00:40:03.126 "rw_ios_per_sec": 0, 00:40:03.126 "rw_mbytes_per_sec": 0, 00:40:03.126 "r_mbytes_per_sec": 0, 00:40:03.126 "w_mbytes_per_sec": 0 00:40:03.126 }, 00:40:03.126 "claimed": false, 00:40:03.126 "zoned": false, 00:40:03.126 "supported_io_types": { 00:40:03.126 "read": true, 00:40:03.126 "write": true, 00:40:03.126 "unmap": true, 00:40:03.126 "flush": true, 00:40:03.126 "reset": true, 00:40:03.126 "nvme_admin": true, 00:40:03.126 "nvme_io": true, 00:40:03.126 "nvme_io_md": false, 00:40:03.126 "write_zeroes": true, 00:40:03.126 "zcopy": false, 00:40:03.126 "get_zone_info": false, 00:40:03.126 "zone_management": false, 00:40:03.126 "zone_append": false, 00:40:03.126 "compare": true, 00:40:03.126 "compare_and_write": true, 00:40:03.126 "abort": true, 00:40:03.126 "seek_hole": false, 00:40:03.126 "seek_data": false, 00:40:03.126 "copy": true, 00:40:03.126 "nvme_iov_md": false 00:40:03.126 }, 00:40:03.126 "memory_domains": [ 00:40:03.126 { 00:40:03.126 "dma_device_id": "system", 00:40:03.126 "dma_device_type": 1 00:40:03.126 } 00:40:03.126 ], 00:40:03.126 "driver_specific": { 00:40:03.126 "nvme": [ 00:40:03.126 { 00:40:03.126 "trid": { 00:40:03.126 "trtype": "TCP", 00:40:03.126 "adrfam": "IPv4", 00:40:03.126 "traddr": "10.0.0.2", 00:40:03.126 "trsvcid": "4420", 00:40:03.126 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:40:03.126 }, 00:40:03.126 "ctrlr_data": 
{ 00:40:03.126 "cntlid": 1, 00:40:03.126 "vendor_id": "0x8086", 00:40:03.126 "model_number": "SPDK bdev Controller", 00:40:03.126 "serial_number": "SPDK0", 00:40:03.126 "firmware_revision": "24.09.1", 00:40:03.126 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:03.126 "oacs": { 00:40:03.126 "security": 0, 00:40:03.126 "format": 0, 00:40:03.126 "firmware": 0, 00:40:03.126 "ns_manage": 0 00:40:03.126 }, 00:40:03.126 "multi_ctrlr": true, 00:40:03.126 "ana_reporting": false 00:40:03.126 }, 00:40:03.126 "vs": { 00:40:03.126 "nvme_version": "1.3" 00:40:03.126 }, 00:40:03.126 "ns_data": { 00:40:03.126 "id": 1, 00:40:03.126 "can_share": true 00:40:03.126 } 00:40:03.126 } 00:40:03.126 ], 00:40:03.126 "mp_policy": "active_passive" 00:40:03.126 } 00:40:03.126 } 00:40:03.126 ] 00:40:03.126 13:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=621275 00:40:03.126 13:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:40:03.126 13:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:40:03.385 Running I/O for 10 seconds... 00:40:04.368 Latency(us) 00:40:04.368 [2024-12-16T12:01:30.435Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:04.368 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:04.368 Nvme0n1 : 1.00 22120.00 86.41 0.00 0.00 0.00 0.00 0.00 00:40:04.368 [2024-12-16T12:01:30.435Z] =================================================================================================================== 00:40:04.368 [2024-12-16T12:01:30.435Z] Total : 22120.00 86.41 0.00 0.00 0.00 0.00 0.00 00:40:04.368 00:40:05.419 13:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u fc108a5c-6654-46ac-846a-a51ca4a87fc6 00:40:05.419 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:05.419 Nvme0n1 : 2.00 22574.00 88.18 0.00 0.00 0.00 0.00 0.00 00:40:05.419 [2024-12-16T12:01:31.486Z] =================================================================================================================== 00:40:05.419 [2024-12-16T12:01:31.486Z] Total : 22574.00 88.18 0.00 0.00 0.00 0.00 0.00 00:40:05.419 00:40:05.419 true 00:40:05.419 13:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc108a5c-6654-46ac-846a-a51ca4a87fc6 00:40:05.419 13:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:40:05.678 13:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:40:05.678 13:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:40:05.678 13:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 621275 00:40:06.246 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:06.246 Nvme0n1 : 
3.00 22764.00 88.92 0.00 0.00 0.00 0.00 0.00 00:40:06.246 [2024-12-16T12:01:32.313Z] =================================================================================================================== 00:40:06.246 [2024-12-16T12:01:32.313Z] Total : 22764.00 88.92 0.00 0.00 0.00 0.00 0.00 00:40:06.246 00:40:07.183 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:07.183 Nvme0n1 : 4.00 22955.50 89.67 0.00 0.00 0.00 0.00 0.00 00:40:07.184 [2024-12-16T12:01:33.251Z] =================================================================================================================== 00:40:07.184 [2024-12-16T12:01:33.251Z] Total : 22955.50 89.67 0.00 0.00 0.00 0.00 0.00 00:40:07.184 00:40:08.629 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:08.629 Nvme0n1 : 5.00 23074.40 90.13 0.00 0.00 0.00 0.00 0.00 00:40:08.630 [2024-12-16T12:01:34.697Z] =================================================================================================================== 00:40:08.630 [2024-12-16T12:01:34.697Z] Total : 23074.40 90.13 0.00 0.00 0.00 0.00 0.00 00:40:08.630 00:40:09.202 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:09.202 Nvme0n1 : 6.00 23160.67 90.47 0.00 0.00 0.00 0.00 0.00 00:40:09.202 [2024-12-16T12:01:35.269Z] =================================================================================================================== 00:40:09.202 [2024-12-16T12:01:35.269Z] Total : 23160.67 90.47 0.00 0.00 0.00 0.00 0.00 00:40:09.202 00:40:10.582 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:10.583 Nvme0n1 : 7.00 23239.86 90.78 0.00 0.00 0.00 0.00 0.00 00:40:10.583 [2024-12-16T12:01:36.650Z] =================================================================================================================== 00:40:10.583 [2024-12-16T12:01:36.650Z] Total : 23239.86 90.78 0.00 0.00 0.00 0.00 0.00 00:40:10.583 00:40:11.519 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:11.520 Nvme0n1 : 8.00 23279.25 90.93 0.00 0.00 0.00 0.00 0.00 00:40:11.520 [2024-12-16T12:01:37.587Z] =================================================================================================================== 00:40:11.520 [2024-12-16T12:01:37.587Z] Total : 23279.25 90.93 0.00 0.00 0.00 0.00 0.00 00:40:11.520 00:40:12.456 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:12.456 Nvme0n1 : 9.00 23331.78 91.14 0.00 0.00 0.00 0.00 0.00 00:40:12.456 [2024-12-16T12:01:38.523Z] =================================================================================================================== 00:40:12.456 [2024-12-16T12:01:38.523Z] Total : 23331.78 91.14 0.00 0.00 0.00 0.00 0.00 00:40:12.456 00:40:13.393 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:13.393 Nvme0n1 : 10.00 23364.10 91.27 0.00 0.00 0.00 0.00 0.00 00:40:13.393 [2024-12-16T12:01:39.460Z] =================================================================================================================== 00:40:13.393 [2024-12-16T12:01:39.460Z] Total : 23364.10 91.27 0.00 0.00 0.00 0.00 0.00 00:40:13.393 00:40:13.393 00:40:13.393 Latency(us) 00:40:13.393 [2024-12-16T12:01:39.460Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:13.393 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:13.393 Nvme0n1 : 10.01 23363.33 91.26 0.00 0.00 5475.86 3198.78 25590.25 00:40:13.393 
[2024-12-16T12:01:39.460Z] =================================================================================================================== 00:40:13.393 [2024-12-16T12:01:39.460Z] Total : 23363.33 91.26 0.00 0.00 5475.86 3198.78 25590.25 00:40:13.393 { 00:40:13.393 "results": [ 00:40:13.393 { 00:40:13.393 "job": "Nvme0n1", 00:40:13.393 "core_mask": "0x2", 00:40:13.393 "workload": "randwrite", 00:40:13.393 "status": "finished", 00:40:13.393 "queue_depth": 128, 00:40:13.393 "io_size": 4096, 00:40:13.393 "runtime": 10.005809, 00:40:13.393 "iops": 23363.32824262386, 00:40:13.393 "mibps": 91.26300094774945, 00:40:13.393 "io_failed": 0, 00:40:13.393 "io_timeout": 0, 00:40:13.393 "avg_latency_us": 5475.864755997424, 00:40:13.393 "min_latency_us": 3198.7809523809524, 00:40:13.393 "max_latency_us": 25590.24761904762 00:40:13.393 } 00:40:13.393 ], 00:40:13.393 "core_count": 1 00:40:13.393 } 00:40:13.393 13:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 621050 00:40:13.393 13:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 621050 ']' 00:40:13.393 13:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 621050 00:40:13.393 13:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:40:13.393 13:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:13.393 13:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 621050 00:40:13.393 13:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:40:13.393 13:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:40:13.393 13:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 621050' 00:40:13.393 killing process with pid 621050 00:40:13.393 13:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 621050 00:40:13.393 Received shutdown signal, test time was about 10.000000 seconds 00:40:13.393 00:40:13.393 Latency(us) 00:40:13.393 [2024-12-16T12:01:39.460Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:13.393 [2024-12-16T12:01:39.460Z] =================================================================================================================== 00:40:13.393 [2024-12-16T12:01:39.460Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:13.393 13:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 621050 00:40:13.652 13:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:13.652 13:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:40:13.911 13:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc108a5c-6654-46ac-846a-a51ca4a87fc6 00:40:13.911 13:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:40:14.170 13:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:40:14.170 13:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:40:14.170 13:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 618260 00:40:14.170 13:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 618260 00:40:14.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 618260 Killed "${NVMF_APP[@]}" "$@" 00:40:14.170 13:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:40:14.170 13:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:40:14.170 13:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:40:14.170 13:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:14.170 13:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:14.170 13:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # nvmfpid=622852 00:40:14.170 13:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # waitforlisten 622852 00:40:14.170 13:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:40:14.170 13:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 622852 ']' 00:40:14.170 13:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:14.170 13:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:14.170 13:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:14.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
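The step just traced starts the target app inside the server-side namespace and then waits on its RPC socket. A minimal sketch of the same sequence, assuming an SPDK checkout at $SPDK_DIR and the default socket path (both stand-ins, not taken from the harness):

SPDK_DIR=/path/to/spdk                       # assumption: adjust to your tree
sudo ip netns exec cvl_0_0_ns_spdk \
    "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
nvmfpid=$!
# Poll until the target answers JSON-RPC on the default socket -- roughly
# what waitforlisten accomplishes in the trace above.
until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done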
00:40:14.170 13:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:14.170 13:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:14.170 [2024-12-16 13:01:40.191169] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:14.170 [2024-12-16 13:01:40.192063] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:40:14.170 [2024-12-16 13:01:40.192097] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:14.429 [2024-12-16 13:01:40.250361] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:14.429 [2024-12-16 13:01:40.289183] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:14.429 [2024-12-16 13:01:40.289218] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:14.429 [2024-12-16 13:01:40.289225] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:14.429 [2024-12-16 13:01:40.289231] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:14.429 [2024-12-16 13:01:40.289236] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:14.429 [2024-12-16 13:01:40.289273] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:40:14.429 [2024-12-16 13:01:40.350054] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:14.429 [2024-12-16 13:01:40.350297] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
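The dirty-recovery step traced next re-attaches the backing file as an AIO bdev, lets bdev examine drive blobstore recovery, and waits for the logical volume to surface again. Condensed from the commands below; the paths, bdev name, and UUID are the ones this run uses:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc bdev_aio_create \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev \
    aio_bdev 4096
$rpc bdev_wait_for_examine
# -t 2000: allow up to 2000 ms for the lvol bdev to appear before giving up.
$rpc bdev_get_bdevs -b 73b1b7da-e5d4-455d-8617-3dce4b31a458 -t 2000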
00:40:14.429 13:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:14.429 13:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:40:14.430 13:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:40:14.430 13:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:14.430 13:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:14.430 13:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:14.430 13:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:14.689 [2024-12-16 13:01:40.590559] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:40:14.689 [2024-12-16 13:01:40.590759] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:40:14.689 [2024-12-16 13:01:40.590841] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:40:14.689 13:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:40:14.689 13:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 73b1b7da-e5d4-455d-8617-3dce4b31a458 00:40:14.689 13:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=73b1b7da-e5d4-455d-8617-3dce4b31a458 00:40:14.689 13:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:40:14.689 13:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:40:14.689 13:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:40:14.689 13:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:40:14.689 13:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:40:14.948 13:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 73b1b7da-e5d4-455d-8617-3dce4b31a458 -t 2000 00:40:14.948 [ 00:40:14.948 { 00:40:14.948 "name": "73b1b7da-e5d4-455d-8617-3dce4b31a458", 00:40:14.948 "aliases": [ 00:40:14.948 "lvs/lvol" 00:40:14.948 ], 00:40:14.948 "product_name": "Logical Volume", 00:40:14.948 "block_size": 4096, 00:40:14.948 "num_blocks": 38912, 00:40:14.948 "uuid": "73b1b7da-e5d4-455d-8617-3dce4b31a458", 00:40:14.948 "assigned_rate_limits": { 00:40:14.948 "rw_ios_per_sec": 0, 00:40:14.948 "rw_mbytes_per_sec": 0, 00:40:14.948 
"r_mbytes_per_sec": 0, 00:40:14.948 "w_mbytes_per_sec": 0 00:40:14.948 }, 00:40:14.948 "claimed": false, 00:40:14.948 "zoned": false, 00:40:14.948 "supported_io_types": { 00:40:14.948 "read": true, 00:40:14.948 "write": true, 00:40:14.948 "unmap": true, 00:40:14.948 "flush": false, 00:40:14.948 "reset": true, 00:40:14.948 "nvme_admin": false, 00:40:14.948 "nvme_io": false, 00:40:14.948 "nvme_io_md": false, 00:40:14.948 "write_zeroes": true, 00:40:14.948 "zcopy": false, 00:40:14.948 "get_zone_info": false, 00:40:14.948 "zone_management": false, 00:40:14.948 "zone_append": false, 00:40:14.948 "compare": false, 00:40:14.948 "compare_and_write": false, 00:40:14.948 "abort": false, 00:40:14.948 "seek_hole": true, 00:40:14.948 "seek_data": true, 00:40:14.948 "copy": false, 00:40:14.948 "nvme_iov_md": false 00:40:14.948 }, 00:40:14.948 "driver_specific": { 00:40:14.948 "lvol": { 00:40:14.948 "lvol_store_uuid": "fc108a5c-6654-46ac-846a-a51ca4a87fc6", 00:40:14.948 "base_bdev": "aio_bdev", 00:40:14.948 "thin_provision": false, 00:40:14.948 "num_allocated_clusters": 38, 00:40:14.948 "snapshot": false, 00:40:14.948 "clone": false, 00:40:14.948 "esnap_clone": false 00:40:14.948 } 00:40:14.948 } 00:40:14.948 } 00:40:14.948 ] 00:40:14.948 13:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:40:14.948 13:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc108a5c-6654-46ac-846a-a51ca4a87fc6 00:40:14.948 13:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:40:15.207 13:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:40:15.207 13:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc108a5c-6654-46ac-846a-a51ca4a87fc6 00:40:15.207 13:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:40:15.466 13:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:40:15.466 13:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:15.726 [2024-12-16 13:01:41.557725] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:40:15.726 13:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc108a5c-6654-46ac-846a-a51ca4a87fc6 00:40:15.726 13:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:40:15.726 13:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc108a5c-6654-46ac-846a-a51ca4a87fc6 00:40:15.726 13:01:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:15.726 13:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:15.726 13:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:15.726 13:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:15.726 13:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:15.726 13:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:15.726 13:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:15.726 13:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:40:15.726 13:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc108a5c-6654-46ac-846a-a51ca4a87fc6 00:40:15.726 request: 00:40:15.726 { 00:40:15.726 "uuid": "fc108a5c-6654-46ac-846a-a51ca4a87fc6", 00:40:15.726 "method": "bdev_lvol_get_lvstores", 00:40:15.726 "req_id": 1 00:40:15.726 } 00:40:15.726 Got JSON-RPC error response 00:40:15.726 response: 00:40:15.726 { 00:40:15.726 "code": -19, 00:40:15.726 "message": "No such device" 00:40:15.726 } 00:40:15.986 13:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:40:15.986 13:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:40:15.986 13:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:40:15.986 13:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:40:15.986 13:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:15.986 aio_bdev 00:40:15.986 13:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 73b1b7da-e5d4-455d-8617-3dce4b31a458 00:40:15.986 13:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=73b1b7da-e5d4-455d-8617-3dce4b31a458 00:40:15.986 13:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:40:15.986 13:01:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:40:15.986 13:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:40:15.986 13:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:40:15.986 13:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:40:16.245 13:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 73b1b7da-e5d4-455d-8617-3dce4b31a458 -t 2000 00:40:16.504 [ 00:40:16.504 { 00:40:16.504 "name": "73b1b7da-e5d4-455d-8617-3dce4b31a458", 00:40:16.504 "aliases": [ 00:40:16.504 "lvs/lvol" 00:40:16.504 ], 00:40:16.504 "product_name": "Logical Volume", 00:40:16.504 "block_size": 4096, 00:40:16.504 "num_blocks": 38912, 00:40:16.504 "uuid": "73b1b7da-e5d4-455d-8617-3dce4b31a458", 00:40:16.504 "assigned_rate_limits": { 00:40:16.504 "rw_ios_per_sec": 0, 00:40:16.504 "rw_mbytes_per_sec": 0, 00:40:16.504 "r_mbytes_per_sec": 0, 00:40:16.504 "w_mbytes_per_sec": 0 00:40:16.504 }, 00:40:16.504 "claimed": false, 00:40:16.504 "zoned": false, 00:40:16.504 "supported_io_types": { 00:40:16.504 "read": true, 00:40:16.504 "write": true, 00:40:16.504 "unmap": true, 00:40:16.504 "flush": false, 00:40:16.504 "reset": true, 00:40:16.504 "nvme_admin": false, 00:40:16.504 "nvme_io": false, 00:40:16.504 "nvme_io_md": false, 00:40:16.504 "write_zeroes": true, 00:40:16.504 "zcopy": false, 00:40:16.504 "get_zone_info": false, 00:40:16.504 "zone_management": false, 00:40:16.504 "zone_append": false, 00:40:16.504 "compare": false, 00:40:16.504 "compare_and_write": false, 00:40:16.504 "abort": false, 00:40:16.504 "seek_hole": true, 00:40:16.504 "seek_data": true, 00:40:16.504 "copy": false, 00:40:16.504 "nvme_iov_md": false 00:40:16.504 }, 00:40:16.504 "driver_specific": { 00:40:16.504 "lvol": { 00:40:16.504 "lvol_store_uuid": "fc108a5c-6654-46ac-846a-a51ca4a87fc6", 00:40:16.504 "base_bdev": "aio_bdev", 00:40:16.504 "thin_provision": false, 00:40:16.504 "num_allocated_clusters": 38, 00:40:16.504 "snapshot": false, 00:40:16.504 "clone": false, 00:40:16.504 "esnap_clone": false 00:40:16.504 } 00:40:16.504 } 00:40:16.504 } 00:40:16.504 ] 00:40:16.504 13:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:40:16.504 13:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc108a5c-6654-46ac-846a-a51ca4a87fc6 00:40:16.504 13:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:40:16.763 13:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:40:16.763 13:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc108a5c-6654-46ac-846a-a51ca4a87fc6 00:40:16.763 13:01:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:40:16.763 13:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:40:16.763 13:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 73b1b7da-e5d4-455d-8617-3dce4b31a458 00:40:17.022 13:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fc108a5c-6654-46ac-846a-a51ca4a87fc6 00:40:17.281 13:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:17.541 13:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:17.541 00:40:17.541 real 0m17.064s 00:40:17.541 user 0m34.515s 00:40:17.541 sys 0m3.833s 00:40:17.541 13:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:17.541 13:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:17.541 ************************************ 00:40:17.541 END TEST lvs_grow_dirty 00:40:17.541 ************************************ 00:40:17.541 13:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:40:17.541 13:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:40:17.541 13:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:40:17.541 13:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:40:17.541 13:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:40:17.541 13:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:40:17.541 13:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:40:17.541 13:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:40:17.541 13:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:40:17.541 nvmf_trace.0 00:40:17.541 13:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:40:17.541 13:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:40:17.541 13:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # nvmfcleanup 00:40:17.541 13:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
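The teardown just traced, distilled: delete the logical volume, drop its lvstore, detach the AIO bdev, and remove the backing file. Names and UUIDs are from this run:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc bdev_lvol_delete 73b1b7da-e5d4-455d-8617-3dce4b31a458
$rpc bdev_lvol_delete_lvstore -u fc108a5c-6654-46ac-846a-a51ca4a87fc6
$rpc bdev_aio_delete aio_bdev
rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev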
00:40:17.541 13:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:17.541 13:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:40:17.541 13:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:17.541 13:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:17.541 rmmod nvme_tcp 00:40:17.541 rmmod nvme_fabrics 00:40:17.541 rmmod nvme_keyring 00:40:17.541 13:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:17.541 13:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:40:17.541 13:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:40:17.541 13:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@513 -- # '[' -n 622852 ']' 00:40:17.541 13:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@514 -- # killprocess 622852 00:40:17.541 13:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 622852 ']' 00:40:17.542 13:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 622852 00:40:17.542 13:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:40:17.542 13:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:17.542 13:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 622852 00:40:17.801 13:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:17.801 13:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:17.801 13:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 622852' 00:40:17.801 killing process with pid 622852 00:40:17.801 13:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 622852 00:40:17.801 13:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 622852 00:40:17.801 13:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:40:17.801 13:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:40:17.801 13:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:40:17.801 13:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:40:17.801 13:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-save 00:40:17.801 13:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:40:17.801 13:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-restore 00:40:17.801 13:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:17.801 13:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:17.801 13:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:17.801 13:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:17.801 13:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:20.347 13:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:20.347 00:40:20.347 real 0m41.716s 00:40:20.347 user 0m52.043s 00:40:20.347 sys 0m10.103s 00:40:20.347 13:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:20.347 13:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:20.347 ************************************ 00:40:20.347 END TEST nvmf_lvs_grow 00:40:20.347 ************************************ 00:40:20.347 13:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:40:20.347 13:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:40:20.347 13:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:20.347 13:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:20.347 ************************************ 00:40:20.347 START TEST nvmf_bdev_io_wait 00:40:20.347 ************************************ 00:40:20.347 13:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:40:20.347 * Looking for test storage... 
00:40:20.347 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:40:20.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:20.347 --rc genhtml_branch_coverage=1 00:40:20.347 --rc genhtml_function_coverage=1 00:40:20.347 --rc genhtml_legend=1 00:40:20.347 --rc geninfo_all_blocks=1 00:40:20.347 --rc geninfo_unexecuted_blocks=1 00:40:20.347 00:40:20.347 ' 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:40:20.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:20.347 --rc genhtml_branch_coverage=1 00:40:20.347 --rc genhtml_function_coverage=1 00:40:20.347 --rc genhtml_legend=1 00:40:20.347 --rc geninfo_all_blocks=1 00:40:20.347 --rc geninfo_unexecuted_blocks=1 00:40:20.347 00:40:20.347 ' 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:40:20.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:20.347 --rc genhtml_branch_coverage=1 00:40:20.347 --rc genhtml_function_coverage=1 00:40:20.347 --rc genhtml_legend=1 00:40:20.347 --rc geninfo_all_blocks=1 00:40:20.347 --rc geninfo_unexecuted_blocks=1 00:40:20.347 00:40:20.347 ' 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:40:20.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:20.347 --rc genhtml_branch_coverage=1 00:40:20.347 --rc genhtml_function_coverage=1 00:40:20.347 --rc genhtml_legend=1 00:40:20.347 --rc geninfo_all_blocks=1 00:40:20.347 --rc 
geninfo_unexecuted_blocks=1 00:40:20.347 00:40:20.347 ' 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:20.347 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:20.348 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:20.348 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:20.348 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:40:20.348 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:20.348 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:40:20.348 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:20.348 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:20.348 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:20.348 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:20.348 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:40:20.348 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:20.348 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:20.348 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:20.348 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:20.348 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:20.348 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:20.348 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:20.348 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:40:20.348 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:40:20.348 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:20.348 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # prepare_net_devs 00:40:20.348 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@434 -- # local -g is_hw=no 00:40:20.348 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # remove_spdk_ns 00:40:20.348 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:20.348 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:20.348 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:20.348 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:40:20.348 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:40:20.348 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:40:20.348 13:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:25.618 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:25.618 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:40:25.618 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:25.618 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:25.618 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:25.618 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:25.618 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
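The arrays declared here feed the device scan traced below: each PCI NIC is matched against the supported vendor/device tables and its net interfaces are collected. A simplified sketch that mirrors the intent rather than common.sh itself; 0x8086/0x159b is the E810 ID matched in this run:

for pci in /sys/bus/pci/devices/*; do
    ven=$(<"$pci/vendor") dev=$(<"$pci/device")
    [[ $ven == 0x8086 && $dev == 0x159b ]] || continue
    for net in "$pci"/net/*; do
        # Keep only devices that actually expose a network interface.
        [[ -e $net ]] && echo "Found net device ${net##*/} under ${pci##*/}"
    done
done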
00:40:25.618 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:40:25.618 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:25.618 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:40:25.618 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:40:25.618 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:40:25.618 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:25.878 13:01:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:25.878 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:25.878 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:25.878 Found net devices under 0000:af:00.0: cvl_0_0 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:25.878 13:01:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:25.878 Found net devices under 0000:af:00.1: cvl_0_1 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # is_hw=yes 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:25.878 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:25.879 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:25.879 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:25.879 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:25.879 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:25.879 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:25.879 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.365 ms 00:40:25.879 00:40:25.879 --- 10.0.0.2 ping statistics --- 00:40:25.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:25.879 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:40:25.879 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:25.879 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:25.879 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:40:25.879 00:40:25.879 --- 10.0.0.1 ping statistics --- 00:40:25.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:25.879 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:40:25.879 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:25.879 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # return 0 00:40:25.879 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:40:25.879 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:25.879 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:40:25.879 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:40:25.879 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:25.879 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:40:25.879 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:40:26.141 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:40:26.141 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:40:26.141 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:26.141 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:26.141 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # nvmfpid=626817 00:40:26.141 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # waitforlisten 626817 00:40:26.141 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:40:26.141 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 626817 ']' 00:40:26.141 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:26.141 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:26.141 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:26.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:40:26.141 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:26.141 13:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:26.141 [2024-12-16 13:01:52.013696] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:26.141 [2024-12-16 13:01:52.014589] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:40:26.141 [2024-12-16 13:01:52.014621] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:26.141 [2024-12-16 13:01:52.087469] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:26.141 [2024-12-16 13:01:52.128936] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:26.141 [2024-12-16 13:01:52.128974] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:26.141 [2024-12-16 13:01:52.128981] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:26.141 [2024-12-16 13:01:52.128987] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:26.141 [2024-12-16 13:01:52.128992] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:26.141 [2024-12-16 13:01:52.129046] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:40:26.141 [2024-12-16 13:01:52.129156] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:40:26.141 [2024-12-16 13:01:52.129627] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:40:26.141 [2024-12-16 13:01:52.129628] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:40:26.141 [2024-12-16 13:01:52.129963] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
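The app_setup_trace notices above describe how to inspect the 0xFFFF tracepoint mask the target was started with (-e 0xFFFF). A usage sketch, assuming the stock spdk_trace tool location; the command and the shared-memory path come straight from the notices:
# Parse a snapshot of the running nvmf target's tracepoints (shm id 0 matches -i 0).
./build/bin/spdk_trace -s nvmf -i 0
# Or keep the raw shared-memory file for offline analysis.
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0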
00:40:26.141 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:26.141 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:40:26.141 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:40:26.141 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:26.141 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:26.402 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:26.402 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:40:26.402 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:26.402 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:26.402 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:26.402 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:40:26.402 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:26.402 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:26.402 [2024-12-16 13:01:52.269314] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:26.402 [2024-12-16 13:01:52.269954] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:26.402 [2024-12-16 13:01:52.270053] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:40:26.402 [2024-12-16 13:01:52.270222] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
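Condensed, the RPC-driven bring-up traced here (bdev_set_options and framework_start_init above, transport, bdev and subsystem creation below) is equivalent to this sketch; rpc_cmd in the harness wraps scripts/rpc.py, which talks to /var/tmp/spdk.sock by default, and every flag is taken from the trace:
# bdev options must be set before subsystem init, which is why the target
# was started with --wait-for-rpc.
./scripts/rpc.py bdev_set_options -p 5 -c 1
./scripts/rpc.py framework_start_init
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420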
00:40:26.402 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:26.402 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:26.402 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:26.402 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:26.402 [2024-12-16 13:01:52.282484] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:26.402 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:26.402 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:26.402 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:26.402 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:26.402 Malloc0 00:40:26.402 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:26.402 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:26.402 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:26.402 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:26.402 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:26.402 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:26.402 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:26.402 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:26.402 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:26.402 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:26.402 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:26.402 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:26.402 [2024-12-16 13:01:52.366538] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:26.402 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:26.402 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=627002 00:40:26.402 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:40:26.402 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:40:26.402 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=627005 00:40:26.402 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:40:26.402 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:40:26.402 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:40:26.402 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:40:26.402 { 00:40:26.402 "params": { 00:40:26.402 "name": "Nvme$subsystem", 00:40:26.402 "trtype": "$TEST_TRANSPORT", 00:40:26.402 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:26.402 "adrfam": "ipv4", 00:40:26.402 "trsvcid": "$NVMF_PORT", 00:40:26.402 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:26.402 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:26.402 "hdgst": ${hdgst:-false}, 00:40:26.402 "ddgst": ${ddgst:-false} 00:40:26.402 }, 00:40:26.402 "method": "bdev_nvme_attach_controller" 00:40:26.402 } 00:40:26.402 EOF 00:40:26.402 )") 00:40:26.402 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:40:26.402 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:40:26.402 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=627008 00:40:26.402 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:40:26.402 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:40:26.402 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:40:26.402 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:40:26.402 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:40:26.402 { 00:40:26.402 "params": { 00:40:26.402 "name": "Nvme$subsystem", 00:40:26.402 "trtype": "$TEST_TRANSPORT", 00:40:26.402 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:26.402 "adrfam": "ipv4", 00:40:26.402 "trsvcid": "$NVMF_PORT", 00:40:26.402 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:26.402 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:26.402 "hdgst": ${hdgst:-false}, 00:40:26.402 "ddgst": ${ddgst:-false} 00:40:26.402 }, 00:40:26.402 "method": "bdev_nvme_attach_controller" 00:40:26.402 } 00:40:26.402 EOF 00:40:26.402 )") 00:40:26.402 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:40:26.402 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@34 -- # UNMAP_PID=627012 00:40:26.402 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:40:26.402 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:40:26.402 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:40:26.402 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:40:26.402 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:40:26.403 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:40:26.403 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:40:26.403 { 00:40:26.403 "params": { 00:40:26.403 "name": "Nvme$subsystem", 00:40:26.403 "trtype": "$TEST_TRANSPORT", 00:40:26.403 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:26.403 "adrfam": "ipv4", 00:40:26.403 "trsvcid": "$NVMF_PORT", 00:40:26.403 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:26.403 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:26.403 "hdgst": ${hdgst:-false}, 00:40:26.403 "ddgst": ${ddgst:-false} 00:40:26.403 }, 00:40:26.403 "method": "bdev_nvme_attach_controller" 00:40:26.403 } 00:40:26.403 EOF 00:40:26.403 )") 00:40:26.403 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:40:26.403 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:40:26.403 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:40:26.403 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:40:26.403 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:40:26.403 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:40:26.403 { 00:40:26.403 "params": { 00:40:26.403 "name": "Nvme$subsystem", 00:40:26.403 "trtype": "$TEST_TRANSPORT", 00:40:26.403 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:26.403 "adrfam": "ipv4", 00:40:26.403 "trsvcid": "$NVMF_PORT", 00:40:26.403 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:26.403 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:26.403 "hdgst": ${hdgst:-false}, 00:40:26.403 "ddgst": ${ddgst:-false} 00:40:26.403 }, 00:40:26.403 "method": "bdev_nvme_attach_controller" 00:40:26.403 } 00:40:26.403 EOF 00:40:26.403 )") 00:40:26.403 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:40:26.403 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 627002 00:40:26.403 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:40:26.403 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:40:26.403 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 
00:40:26.403 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:40:26.403 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:40:26.403 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:40:26.403 "params": { 00:40:26.403 "name": "Nvme1", 00:40:26.403 "trtype": "tcp", 00:40:26.403 "traddr": "10.0.0.2", 00:40:26.403 "adrfam": "ipv4", 00:40:26.403 "trsvcid": "4420", 00:40:26.403 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:26.403 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:26.403 "hdgst": false, 00:40:26.403 "ddgst": false 00:40:26.403 }, 00:40:26.403 "method": "bdev_nvme_attach_controller" 00:40:26.403 }' 00:40:26.403 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:40:26.403 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:40:26.403 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:40:26.403 "params": { 00:40:26.403 "name": "Nvme1", 00:40:26.403 "trtype": "tcp", 00:40:26.403 "traddr": "10.0.0.2", 00:40:26.403 "adrfam": "ipv4", 00:40:26.403 "trsvcid": "4420", 00:40:26.403 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:26.403 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:26.403 "hdgst": false, 00:40:26.403 "ddgst": false 00:40:26.403 }, 00:40:26.403 "method": "bdev_nvme_attach_controller" 00:40:26.403 }' 00:40:26.403 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:40:26.403 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:40:26.403 "params": { 00:40:26.403 "name": "Nvme1", 00:40:26.403 "trtype": "tcp", 00:40:26.403 "traddr": "10.0.0.2", 00:40:26.403 "adrfam": "ipv4", 00:40:26.403 "trsvcid": "4420", 00:40:26.403 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:26.403 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:26.403 "hdgst": false, 00:40:26.403 "ddgst": false 00:40:26.403 }, 00:40:26.403 "method": "bdev_nvme_attach_controller" 00:40:26.403 }' 00:40:26.403 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:40:26.403 13:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:40:26.403 "params": { 00:40:26.403 "name": "Nvme1", 00:40:26.403 "trtype": "tcp", 00:40:26.403 "traddr": "10.0.0.2", 00:40:26.403 "adrfam": "ipv4", 00:40:26.403 "trsvcid": "4420", 00:40:26.403 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:26.403 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:26.403 "hdgst": false, 00:40:26.403 "ddgst": false 00:40:26.403 }, 00:40:26.403 "method": "bdev_nvme_attach_controller" 00:40:26.403 }' 00:40:26.403 [2024-12-16 13:01:52.419486] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:40:26.403 [2024-12-16 13:01:52.419533] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:40:26.403 [2024-12-16 13:01:52.420051] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:40:26.403 [2024-12-16 13:01:52.420051] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:40:26.403 [2024-12-16 13:01:52.420052] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:40:26.403 [2024-12-16 13:01:52.420107] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:40:26.403 [2024-12-16 13:01:52.420108] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:40:26.403 [2024-12-16 13:01:52.420109] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:40:26.663 [2024-12-16 13:01:52.615900] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:40:26.663 [2024-12-16 13:01:52.648086] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7
00:40:26.663 [2024-12-16 13:01:52.649344] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:40:26.663 [2024-12-16 13:01:52.675725] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5
00:40:26.922 [2024-12-16 13:01:52.765536] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:40:26.922 [2024-12-16 13:01:52.795436] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6
00:40:26.922 [2024-12-16 13:01:52.859315] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:40:26.922 [2024-12-16 13:01:52.893374] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4
00:40:27.180 Running I/O for 1 seconds...
00:40:27.180 Running I/O for 1 seconds...
00:40:27.180 Running I/O for 1 seconds...
00:40:27.439 Running I/O for 1 seconds...
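Each bdevperf instance receives its bdev configuration through bash process substitution, which is why the trace shows --json /dev/fd/63. A minimal sketch of the pattern using the parameter values printed above; the outer "subsystems" envelope is the standard SPDK JSON-config shape and is an assumption here, since only the method/params fragment appears verbatim in the trace:
gen_json() {
cat <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [ {
  "method": "bdev_nvme_attach_controller",
  "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
              "adrfam": "ipv4", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false, "ddgst": false }
} ] } ] }
EOF
}
# One instance per workload, each pinned to its own core (-m) with its own
# shm id (-i); -q/-o/-t/-s are queue depth, IO size, runtime and memory (MB).
./build/examples/bdevperf -m 0x10 -i 1 --json <(gen_json) -q 128 -o 4096 -w write -t 1 -s 256 &
./build/examples/bdevperf -m 0x20 -i 2 --json <(gen_json) -q 128 -o 4096 -w read -t 1 -s 256 &
./build/examples/bdevperf -m 0x40 -i 3 --json <(gen_json) -q 128 -o 4096 -w flush -t 1 -s 256 &
./build/examples/bdevperf -m 0x80 -i 4 --json <(gen_json) -q 128 -o 4096 -w unmap -t 1 -s 256 &
wait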
00:40:28.376 12048.00 IOPS, 47.06 MiB/s
00:40:28.376 Latency(us)
00:40:28.376 [2024-12-16T12:01:54.443Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:40:28.376 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:40:28.376 Nvme1n1 : 1.01 12080.04 47.19 0.00 0.00 10546.08 4837.18 13356.86
00:40:28.376 [2024-12-16T12:01:54.443Z] ===================================================================================================================
00:40:28.376 [2024-12-16T12:01:54.443Z] Total : 12080.04 47.19 0.00 0.00 10546.08 4837.18 13356.86
00:40:28.376 10759.00 IOPS, 42.03 MiB/s
00:40:28.376 Latency(us)
00:40:28.376 [2024-12-16T12:01:54.443Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:40:28.376 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:40:28.376 Nvme1n1 : 1.01 10836.73 42.33 0.00 0.00 11768.72 4088.20 16227.96
00:40:28.376 [2024-12-16T12:01:54.443Z] ===================================================================================================================
00:40:28.376 [2024-12-16T12:01:54.443Z] Total : 10836.73 42.33 0.00 0.00 11768.72 4088.20 16227.96
00:40:28.376 254144.00 IOPS, 992.75 MiB/s
00:40:28.376 Latency(us)
00:40:28.376 [2024-12-16T12:01:54.443Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:40:28.376 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:40:28.376 Nvme1n1 : 1.00 253758.17 991.24 0.00 0.00 501.64 233.08 1513.57
00:40:28.376 [2024-12-16T12:01:54.443Z] ===================================================================================================================
00:40:28.376 [2024-12-16T12:01:54.443Z] Total : 253758.17 991.24 0.00 0.00 501.64 233.08 1513.57
00:40:28.376 10451.00 IOPS, 40.82 MiB/s
00:40:28.376 Latency(us)
00:40:28.376 [2024-12-16T12:01:54.443Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:40:28.376 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:40:28.376 Nvme1n1 : 1.01 10535.02 41.15 0.00 0.00 12118.42 1591.59 19223.89
00:40:28.376 [2024-12-16T12:01:54.443Z] ===================================================================================================================
00:40:28.376 [2024-12-16T12:01:54.443Z] Total : 10535.02 41.15 0.00 0.00 12118.42 1591.59 19223.89
00:40:28.636 13:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 627005
13:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 627008
13:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 627012
13:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
13:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
13:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:40:28.636 13:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
13:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
13:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait --
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:40:28.636 13:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # nvmfcleanup 00:40:28.636 13:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:40:28.636 13:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:28.636 13:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:40:28.636 13:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:28.636 13:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:28.636 rmmod nvme_tcp 00:40:28.636 rmmod nvme_fabrics 00:40:28.636 rmmod nvme_keyring 00:40:28.636 13:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:28.636 13:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:40:28.636 13:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:40:28.636 13:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@513 -- # '[' -n 626817 ']' 00:40:28.636 13:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # killprocess 626817 00:40:28.636 13:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 626817 ']' 00:40:28.636 13:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 626817 00:40:28.636 13:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:40:28.636 13:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:28.636 13:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 626817 00:40:28.895 13:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:28.895 13:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:28.895 13:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 626817' 00:40:28.895 killing process with pid 626817 00:40:28.895 13:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 626817 00:40:28.895 13:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 626817 00:40:28.895 13:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:40:28.895 13:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:40:28.895 13:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:40:28.895 13:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:40:28.895 13:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-save 00:40:28.895 
13:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-restore 00:40:28.895 13:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:40:28.895 13:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:28.895 13:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:28.895 13:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:28.895 13:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:28.895 13:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:31.431 13:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:31.431 00:40:31.431 real 0m11.039s 00:40:31.431 user 0m16.098s 00:40:31.431 sys 0m6.970s 00:40:31.431 13:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:31.431 13:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:31.431 ************************************ 00:40:31.431 END TEST nvmf_bdev_io_wait 00:40:31.431 ************************************ 00:40:31.431 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:40:31.431 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:40:31.431 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:31.431 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:31.431 ************************************ 00:40:31.431 START TEST nvmf_queue_depth 00:40:31.431 ************************************ 00:40:31.431 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:40:31.431 * Looking for test storage... 
00:40:31.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:31.431 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:40:31.431 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:40:31.431 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:40:31.431 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:40:31.431 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:31.431 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:31.431 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:31.431 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:40:31.431 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:40:31.431 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:40:31.431 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:40:31.431 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:40:31.431 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:40:31.431 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:40:31.431 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:31.431 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:40:31.431 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:40:31.431 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:31.431 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:31.431 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:40:31.431 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:40:31.431 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:31.431 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:40:31.431 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:40:31.431 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:40:31.431 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:40:31.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:31.432 --rc genhtml_branch_coverage=1 00:40:31.432 --rc genhtml_function_coverage=1 00:40:31.432 --rc genhtml_legend=1 00:40:31.432 --rc geninfo_all_blocks=1 00:40:31.432 --rc geninfo_unexecuted_blocks=1 00:40:31.432 00:40:31.432 ' 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:40:31.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:31.432 --rc genhtml_branch_coverage=1 00:40:31.432 --rc genhtml_function_coverage=1 00:40:31.432 --rc genhtml_legend=1 00:40:31.432 --rc geninfo_all_blocks=1 00:40:31.432 --rc geninfo_unexecuted_blocks=1 00:40:31.432 00:40:31.432 ' 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:40:31.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:31.432 --rc genhtml_branch_coverage=1 00:40:31.432 --rc genhtml_function_coverage=1 00:40:31.432 --rc genhtml_legend=1 00:40:31.432 --rc geninfo_all_blocks=1 00:40:31.432 --rc geninfo_unexecuted_blocks=1 00:40:31.432 00:40:31.432 ' 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:40:31.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:31.432 --rc genhtml_branch_coverage=1 00:40:31.432 --rc genhtml_function_coverage=1 00:40:31.432 --rc genhtml_legend=1 00:40:31.432 --rc geninfo_all_blocks=1 00:40:31.432 --rc 
geninfo_unexecuted_blocks=1 00:40:31.432 00:40:31.432 ' 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@472 -- # prepare_net_devs 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@434 -- # local -g is_hw=no 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@436 -- # remove_spdk_ns 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:40:31.432 13:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:36.709 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:40:36.710 13:02:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:36.710 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:36.710 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:36.710 Found net devices under 0000:af:00.0: cvl_0_0 00:40:36.710 13:02:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:36.710 Found net devices under 0000:af:00.1: cvl_0_1 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # is_hw=yes 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:36.710 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:36.969 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:36.969 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:36.969 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:36.969 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:36.969 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:36.969 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:36.969 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:36.969 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:36.969 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:40:36.969 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:36.969 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:36.969 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:36.969 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:36.969 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:36.969 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:36.969 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:36.969 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:36.969 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:36.969 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:36.969 13:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:36.969 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:36.969 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:36.970 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.384 ms 00:40:36.970 00:40:36.970 --- 10.0.0.2 ping statistics --- 00:40:36.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:36.970 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms 00:40:36.970 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:36.970 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:36.970 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:40:36.970 00:40:36.970 --- 10.0.0.1 ping statistics --- 00:40:36.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:36.970 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:40:36.970 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:36.970 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # return 0 00:40:36.970 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:40:36.970 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:36.970 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:40:36.970 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:40:36.970 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:36.970 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:40:36.970 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:40:37.229 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:40:37.229 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:40:37.229 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:37.229 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:37.229 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@505 -- # nvmfpid=630768 00:40:37.229 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@506 -- # waitforlisten 630768 00:40:37.229 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:40:37.229 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 630768 ']' 00:40:37.229 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:37.229 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:37.229 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:37.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:40:37.229 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:37.229 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:37.229 [2024-12-16 13:02:03.117148] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:37.229 [2024-12-16 13:02:03.118081] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:40:37.229 [2024-12-16 13:02:03.118126] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:37.229 [2024-12-16 13:02:03.192606] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:37.229 [2024-12-16 13:02:03.231533] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:37.229 [2024-12-16 13:02:03.231571] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:37.229 [2024-12-16 13:02:03.231579] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:37.229 [2024-12-16 13:02:03.231585] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:37.229 [2024-12-16 13:02:03.231590] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:37.229 [2024-12-16 13:02:03.231608] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:40:37.229 [2024-12-16 13:02:03.292261] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:37.229 [2024-12-16 13:02:03.292471] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:40:37.488 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:37.488 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:40:37.488 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:40:37.488 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:37.488 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:37.488 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:37.488 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:37.488 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:37.488 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:37.488 [2024-12-16 13:02:03.360286] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:37.488 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:37.488 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:37.488 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:37.488 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:37.488 Malloc0 00:40:37.488 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:37.488 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:37.488 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:37.488 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:37.488 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:37.488 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:37.488 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:37.488 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:37.488 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:37.488 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:37.488 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
00:40:37.488 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:37.489 [2024-12-16 13:02:03.436429] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:37.489 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:37.489 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=630787 00:40:37.489 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:37.489 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:40:37.489 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 630787 /var/tmp/bdevperf.sock 00:40:37.489 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 630787 ']' 00:40:37.489 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:40:37.489 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:37.489 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:37.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:40:37.489 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:37.489 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:37.489 [2024-12-16 13:02:03.488991] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:40:37.489 [2024-12-16 13:02:03.489038] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid630787 ] 00:40:37.748 [2024-12-16 13:02:03.558412] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:37.748 [2024-12-16 13:02:03.597197] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:40:37.748 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:37.748 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:40:37.748 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:40:37.748 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:37.748 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:37.748 NVMe0n1 00:40:37.748 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:37.748 13:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:40:38.007 Running I/O for 10 seconds... 00:40:39.878 12114.00 IOPS, 47.32 MiB/s [2024-12-16T12:02:06.882Z] 12293.50 IOPS, 48.02 MiB/s [2024-12-16T12:02:08.259Z] 12385.33 IOPS, 48.38 MiB/s [2024-12-16T12:02:09.195Z] 12526.75 IOPS, 48.93 MiB/s [2024-12-16T12:02:10.132Z] 12500.00 IOPS, 48.83 MiB/s [2024-12-16T12:02:11.069Z] 12581.83 IOPS, 49.15 MiB/s [2024-12-16T12:02:12.007Z] 12584.29 IOPS, 49.16 MiB/s [2024-12-16T12:02:12.942Z] 12613.62 IOPS, 49.27 MiB/s [2024-12-16T12:02:14.319Z] 12633.33 IOPS, 49.35 MiB/s [2024-12-16T12:02:14.319Z] 12667.10 IOPS, 49.48 MiB/s 00:40:48.252 Latency(us) 00:40:48.252 [2024-12-16T12:02:14.319Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:48.252 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:40:48.252 Verification LBA range: start 0x0 length 0x4000 00:40:48.252 NVMe0n1 : 10.06 12682.32 49.54 0.00 0.00 80438.38 19099.06 51430.16 00:40:48.252 [2024-12-16T12:02:14.319Z] =================================================================================================================== 00:40:48.252 [2024-12-16T12:02:14.319Z] Total : 12682.32 49.54 0.00 0.00 80438.38 19099.06 51430.16 00:40:48.252 { 00:40:48.252 "results": [ 00:40:48.252 { 00:40:48.252 "job": "NVMe0n1", 00:40:48.252 "core_mask": "0x1", 00:40:48.252 "workload": "verify", 00:40:48.252 "status": "finished", 00:40:48.252 "verify_range": { 00:40:48.252 "start": 0, 00:40:48.252 "length": 16384 00:40:48.252 }, 00:40:48.252 "queue_depth": 1024, 00:40:48.252 "io_size": 4096, 00:40:48.252 "runtime": 10.061251, 00:40:48.252 "iops": 12682.319524679386, 00:40:48.252 "mibps": 49.54031064327885, 00:40:48.252 "io_failed": 0, 00:40:48.252 "io_timeout": 0, 00:40:48.252 "avg_latency_us": 80438.37719731302, 00:40:48.252 "min_latency_us": 19099.062857142857, 00:40:48.252 "max_latency_us": 51430.15619047619 00:40:48.252 } 
00:40:48.252 ], 00:40:48.252 "core_count": 1 00:40:48.252 } 00:40:48.252 13:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 630787 00:40:48.252 13:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 630787 ']' 00:40:48.252 13:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 630787 00:40:48.252 13:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:40:48.252 13:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:48.252 13:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 630787 00:40:48.252 13:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:48.252 13:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:48.252 13:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 630787' 00:40:48.252 killing process with pid 630787 00:40:48.252 13:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 630787 00:40:48.252 Received shutdown signal, test time was about 10.000000 seconds 00:40:48.252 00:40:48.252 Latency(us) 00:40:48.252 [2024-12-16T12:02:14.319Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:48.252 [2024-12-16T12:02:14.319Z] =================================================================================================================== 00:40:48.252 [2024-12-16T12:02:14.319Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:48.252 13:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 630787 00:40:48.252 13:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:40:48.252 13:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:40:48.252 13:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # nvmfcleanup 00:40:48.252 13:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:40:48.252 13:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:48.252 13:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:40:48.252 13:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:48.252 13:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:48.252 rmmod nvme_tcp 00:40:48.252 rmmod nvme_fabrics 00:40:48.252 rmmod nvme_keyring 00:40:48.252 13:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:48.252 13:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:40:48.252 13:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:40:48.252 
13:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@513 -- # '[' -n 630768 ']' 00:40:48.252 13:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@514 -- # killprocess 630768 00:40:48.252 13:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 630768 ']' 00:40:48.252 13:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 630768 00:40:48.252 13:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:40:48.252 13:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:48.252 13:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 630768 00:40:48.252 13:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:40:48.252 13:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:40:48.252 13:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 630768' 00:40:48.252 killing process with pid 630768 00:40:48.252 13:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 630768 00:40:48.252 13:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 630768 00:40:48.512 13:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:40:48.512 13:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:40:48.512 13:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:40:48.512 13:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:40:48.512 13:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-save 00:40:48.512 13:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:40:48.512 13:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-restore 00:40:48.512 13:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:48.512 13:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:48.512 13:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:48.512 13:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:48.512 13:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:51.052 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:51.052 00:40:51.052 real 0m19.568s 00:40:51.052 user 0m22.563s 00:40:51.052 sys 0m6.253s 00:40:51.052 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:40:51.052 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:51.052 ************************************ 00:40:51.052 END TEST nvmf_queue_depth 00:40:51.052 ************************************ 00:40:51.052 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:40:51.052 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:40:51.052 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:51.052 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:51.052 ************************************ 00:40:51.052 START TEST nvmf_target_multipath 00:40:51.052 ************************************ 00:40:51.052 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:40:51.052 * Looking for test storage... 00:40:51.052 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:51.052 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:40:51.052 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:40:51.052 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:40:51.052 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:40:51.052 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:51.052 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:51.052 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:51.052 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:40:51.052 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:40:51.052 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:40:51.052 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:40:51.052 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:40:51.052 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:40:51.052 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:40:51.052 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:51.052 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:40:51.052 13:02:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:40:51.052 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:51.052 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:51.052 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:40:51.052 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:40:51.052 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:51.052 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:40:51.052 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:40:51.052 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:40:51.052 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:40:51.052 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:51.052 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:40:51.052 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:40:51.052 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:51.052 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:51.052 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:40:51.052 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:51.052 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:40:51.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:51.052 --rc genhtml_branch_coverage=1 00:40:51.052 --rc genhtml_function_coverage=1 00:40:51.052 --rc genhtml_legend=1 00:40:51.052 --rc geninfo_all_blocks=1 00:40:51.052 --rc geninfo_unexecuted_blocks=1 00:40:51.052 00:40:51.052 ' 00:40:51.052 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:40:51.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:51.052 --rc genhtml_branch_coverage=1 00:40:51.052 --rc genhtml_function_coverage=1 00:40:51.052 --rc genhtml_legend=1 00:40:51.052 --rc geninfo_all_blocks=1 00:40:51.052 --rc geninfo_unexecuted_blocks=1 00:40:51.052 00:40:51.052 ' 00:40:51.052 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:40:51.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:51.052 --rc genhtml_branch_coverage=1 00:40:51.052 --rc genhtml_function_coverage=1 00:40:51.052 --rc genhtml_legend=1 00:40:51.052 --rc geninfo_all_blocks=1 00:40:51.053 --rc 
geninfo_unexecuted_blocks=1 00:40:51.053 00:40:51.053 ' 00:40:51.053 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:40:51.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:51.053 --rc genhtml_branch_coverage=1 00:40:51.053 --rc genhtml_function_coverage=1 00:40:51.053 --rc genhtml_legend=1 00:40:51.053 --rc geninfo_all_blocks=1 00:40:51.053 --rc geninfo_unexecuted_blocks=1 00:40:51.053 00:40:51.053 ' 00:40:51.053 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:51.053 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:40:51.053 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:51.053 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:51.053 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:51.053 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:51.053 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:51.053 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:51.053 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:51.053 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:51.053 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:51.053 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:51.053 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:40:51.053 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:40:51.053 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:51.053 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:51.053 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:51.053 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:51.053 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:51.053 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:40:51.053 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
00:40:51.053 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:51.053 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:51.053 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:51.053 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:51.053 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:51.053 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:40:51.053 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:51.053 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:40:51.053 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:51.053 13:02:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:51.053 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:51.053 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:51.053 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:51.053 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:51.053 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:51.053 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:51.053 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:51.053 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:51.053 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:51.053 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:51.053 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:40:51.053 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:51.053 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:40:51.053 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:40:51.053 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:51.053 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:40:51.053 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:40:51.053 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:40:51.053 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:51.053 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:51.053 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:51.053 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:40:51.053 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:40:51.053 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:40:51.053 13:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
00:40:56.329 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:56.329 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:40:56.329 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:56.329 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:56.329 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:56.329 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:56.329 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:56.329 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:40:56.329 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:56.329 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:40:56.329 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:40:56.329 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:40:56.329 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:40:56.329 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:40:56.329 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:40:56.329 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:56.329 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:56.329 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:56.329 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:56.329 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:56.329 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:56.329 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:56.329 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:56.329 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:56.329 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:56.329 13:02:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:56.329 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:40:56.329 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:40:56.329 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:40:56.329 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:40:56.329 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:40:56.329 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:40:56.329 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:56.329 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:56.329 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:56.329 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:40:56.329 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:56.329 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:56.329 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:56.329 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:56.329 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:56.329 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:56.329 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:56.329 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:40:56.329 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:56.329 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:56.329 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:56.329 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:56.329 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:40:56.329 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:40:56.329 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:40:56.329 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:56.329 13:02:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:56.329 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:40:56.330 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:56.330 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:56.330 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:56.330 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:56.330 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:56.330 Found net devices under 0000:af:00.0: cvl_0_0 00:40:56.330 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:56.330 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:56.330 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:56.330 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:40:56.330 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:56.330 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:56.330 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:56.330 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:56.330 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:56.330 Found net devices under 0000:af:00.1: cvl_0_1 00:40:56.330 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:56.330 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:40:56.330 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # is_hw=yes 00:40:56.330 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:40:56.330 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:40:56.330 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:40:56.330 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:56.330 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:56.330 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:56.330 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:56.330 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:56.330 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:56.330 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:56.330 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:56.330 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:56.330 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:56.330 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:56.330 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:56.330 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:56.330 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:56.589 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:56.589 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:56.589 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:56.589 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:56.589 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:56.589 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:56.589 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:56.589 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:56.849 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:56.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:40:56.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.319 ms 00:40:56.849 00:40:56.849 --- 10.0.0.2 ping statistics --- 00:40:56.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:56.849 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:40:56.849 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:56.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:56.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:40:56.849 00:40:56.849 --- 10.0.0.1 ping statistics --- 00:40:56.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:56.850 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:40:56.850 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:56.850 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # return 0 00:40:56.850 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:40:56.850 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:56.850 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:40:56.850 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:40:56.850 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:56.850 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:40:56.850 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:40:56.850 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:40:56.850 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:40:56.850 only one NIC for nvmf test 00:40:56.850 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:40:56.850 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:40:56.850 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:40:56.850 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:56.850 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:40:56.850 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:56.850 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:56.850 rmmod nvme_tcp 00:40:56.850 rmmod nvme_fabrics 00:40:56.850 rmmod nvme_keyring 00:40:56.850 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:56.850 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:40:56.850 13:02:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:40:56.850 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:40:56.850 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:40:56.850 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:40:56.850 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:40:56.850 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:40:56.850 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:40:56.850 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:40:56.850 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:40:56.850 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:56.850 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:56.850 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:56.850 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:56.850 13:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:59.389 13:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:59.389 13:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:40:59.389 13:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:40:59.389 13:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:40:59.389 13:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:40:59.389 13:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:59.389 13:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:40:59.389 13:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:59.389 13:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:59.389 13:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:59.389 13:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:40:59.389 13:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:40:59.389 13:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:40:59.389 13:02:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:40:59.389 13:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:40:59.389 13:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:40:59.389 13:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:40:59.389 13:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:40:59.389 13:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:40:59.389 13:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:40:59.389 13:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:59.389 13:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:59.389 13:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:59.389 13:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:59.389 13:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:59.389 13:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:59.389 00:40:59.389 real 0m8.246s 00:40:59.389 user 0m1.755s 00:40:59.389 sys 0m4.453s 00:40:59.389 13:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:59.389 13:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:40:59.389 ************************************ 00:40:59.389 END TEST nvmf_target_multipath 00:40:59.389 ************************************ 00:40:59.389 13:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:40:59.389 13:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:40:59.389 13:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:59.389 13:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:59.389 ************************************ 00:40:59.389 START TEST nvmf_zcopy 00:40:59.389 ************************************ 00:40:59.389 13:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:40:59.389 * Looking for test storage... 
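The END TEST / START TEST banners and the real/user/sys summary above come from the autotest run_test wrapper, which times each test script and brackets its output so results can be collected later. A minimal sketch of that pattern, assuming a simplified helper (the real run_test in test/common/autotest_common.sh does additional bookkeeping such as xtrace management):

    # illustrative run_test-style wrapper; not the exact autotest helper
    run_test_sketch() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                # bash's time builtin prints real/user/sys
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }

    # usage mirroring the trace above:
    # run_test_sketch nvmf_zcopy ./test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode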
00:40:59.389 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:59.389 13:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:40:59.389 13:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:40:59.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:59.389 --rc genhtml_branch_coverage=1 00:40:59.389 --rc genhtml_function_coverage=1 00:40:59.389 --rc genhtml_legend=1 00:40:59.389 --rc geninfo_all_blocks=1 00:40:59.389 --rc geninfo_unexecuted_blocks=1 00:40:59.389 00:40:59.389 ' 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:40:59.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:59.389 --rc genhtml_branch_coverage=1 00:40:59.389 --rc genhtml_function_coverage=1 00:40:59.389 --rc genhtml_legend=1 00:40:59.389 --rc geninfo_all_blocks=1 00:40:59.389 --rc geninfo_unexecuted_blocks=1 00:40:59.389 00:40:59.389 ' 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:40:59.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:59.389 --rc genhtml_branch_coverage=1 00:40:59.389 --rc genhtml_function_coverage=1 00:40:59.389 --rc genhtml_legend=1 00:40:59.389 --rc geninfo_all_blocks=1 00:40:59.389 --rc geninfo_unexecuted_blocks=1 00:40:59.389 00:40:59.389 ' 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:40:59.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:59.389 --rc genhtml_branch_coverage=1 00:40:59.389 --rc genhtml_function_coverage=1 00:40:59.389 --rc genhtml_legend=1 00:40:59.389 --rc geninfo_all_blocks=1 00:40:59.389 --rc geninfo_unexecuted_blocks=1 00:40:59.389 00:40:59.389 ' 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:59.389 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:59.390 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:59.390 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:59.390 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:59.390 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:59.390 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:40:59.390 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:59.390 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:40:59.390 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:59.390 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:59.390 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:59.390 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:59.390 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:59.390 13:02:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:59.390 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:59.390 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:59.390 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:59.390 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:59.390 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:40:59.390 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:40:59.390 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:59.390 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@472 -- # prepare_net_devs 00:40:59.390 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@434 -- # local -g is_hw=no 00:40:59.390 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@436 -- # remove_spdk_ns 00:40:59.390 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:59.390 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:59.390 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:59.390 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:40:59.390 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:40:59.390 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:40:59.390 13:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:04.666 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:04.666 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:41:04.666 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:04.666 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:04.666 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:04.666 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:04.666 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:04.666 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:41:04.666 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:04.666 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:41:04.666 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:41:04.666 13:02:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:41:04.666 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:41:04.666 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:41:04.666 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:41:04.666 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:04.666 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:04.666 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:04.666 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:04.666 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:04.666 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:04.666 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:04.666 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:04.666 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:04.666 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:04.666 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:04.666 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:41:04.666 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:41:04.666 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:41:04.666 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:41:04.666 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:41:04.666 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:41:04.666 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:41:04.666 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:41:04.666 Found 0000:af:00.0 (0x8086 - 0x159b) 00:41:04.666 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:41:04.666 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:41:04.666 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:04.666 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@375 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:41:04.666 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:41:04.666 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:41:04.666 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:41:04.666 Found 0000:af:00.1 (0x8086 - 0x159b) 00:41:04.666 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:41:04.666 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:41:04.666 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:04.666 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:04.666 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:41:04.666 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:41:04.666 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:41:04.667 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:41:04.667 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:41:04.667 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:04.667 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:41:04.667 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:04.667 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ up == up ]] 00:41:04.667 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:41:04.667 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:04.667 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:41:04.667 Found net devices under 0000:af:00.0: cvl_0_0 00:41:04.667 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:41:04.667 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:41:04.667 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:04.667 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:41:04.667 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:04.667 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ up == up ]] 00:41:04.667 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:41:04.667 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@423 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:04.667 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:41:04.667 Found net devices under 0000:af:00.1: cvl_0_1 00:41:04.667 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:41:04.667 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:41:04.667 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # is_hw=yes 00:41:04.667 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:41:04.667 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:41:04.667 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:41:04.667 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:04.667 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:04.667 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:04.667 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:04.667 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:04.667 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:04.667 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:04.667 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:04.667 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:04.667 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:04.667 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:04.667 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:04.667 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:04.667 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:04.667 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:04.926 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:04.926 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:04.926 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:04.926 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:04.926 13:02:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:04.926 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:04.926 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:04.926 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:04.926 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:04.926 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.352 ms 00:41:04.926 00:41:04.926 --- 10.0.0.2 ping statistics --- 00:41:04.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:04.927 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:41:04.927 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:04.927 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:04.927 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:41:04.927 00:41:04.927 --- 10.0.0.1 ping statistics --- 00:41:04.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:04.927 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:41:04.927 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:04.927 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # return 0 00:41:04.927 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:41:04.927 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:04.927 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:41:04.927 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:41:04.927 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:04.927 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:41:04.927 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:41:04.927 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:41:04.927 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:41:04.927 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:04.927 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:04.927 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@505 -- # nvmfpid=639262 00:41:04.927 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:41:04.927 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@506 -- # waitforlisten 639262 
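Both the multipath attempt and this zcopy setup drive the same nvmf_tcp_init sequence seen in the trace: the first port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator side at 10.0.0.1, and the firewall rule is tagged SPDK_NVMF so that the teardown's iptables-save | grep -v SPDK_NVMF | iptables-restore can strip it again. Condensed from the commands traced above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator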
00:41:04.927 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 639262 ']' 00:41:05.186 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:05.186 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:05.186 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:05.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:05.186 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:05.186 13:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:05.186 [2024-12-16 13:02:31.036750] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:05.186 [2024-12-16 13:02:31.037708] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:41:05.186 [2024-12-16 13:02:31.037744] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:05.186 [2024-12-16 13:02:31.109074] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:05.186 [2024-12-16 13:02:31.146369] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:05.186 [2024-12-16 13:02:31.146407] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:05.186 [2024-12-16 13:02:31.146414] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:05.186 [2024-12-16 13:02:31.146419] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:05.186 [2024-12-16 13:02:31.146425] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:05.186 [2024-12-16 13:02:31.146447] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:41:05.186 [2024-12-16 13:02:31.206488] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:05.186 [2024-12-16 13:02:31.206691] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
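nvmfappstart, traced just above, launches nvmf_tgt inside the target namespace (note the ip netns exec prefix on the nvmf_tgt command line) and then blocks in waitforlisten until the application's RPC socket answers. A condensed sketch of that launch-and-wait pattern, assuming the default /var/tmp/spdk.sock RPC socket (the real waitforlisten in autotest_common.sh adds its own timeout and error handling):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    nvmfpid=$!

    # poll until the RPC server responds, bailing out if the target dies first
    for _ in $(seq 1 100); do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done

The RPC socket is an AF_UNIX socket on the shared filesystem, which is why rpc.py can reach the target from the root namespace without entering cvl_0_0_ns_spdk.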
00:41:05.186 13:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:05.186 13:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:41:05.186 13:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:41:05.186 13:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:05.186 13:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:05.446 13:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:05.446 13:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:41:05.446 13:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:41:05.446 13:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:05.446 13:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:05.446 [2024-12-16 13:02:31.287092] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:05.446 13:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:05.446 13:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:41:05.446 13:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:05.446 13:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:05.446 13:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:05.446 13:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:05.446 13:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:05.446 13:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:05.446 [2024-12-16 13:02:31.315393] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:05.446 13:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:05.446 13:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:05.446 13:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:05.446 13:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:05.446 13:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:05.446 13:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:41:05.446 13:02:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:05.446 13:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:05.446 malloc0 00:41:05.446 13:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:05.446 13:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:41:05.446 13:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:05.446 13:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:05.446 13:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:05.446 13:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:41:05.446 13:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:41:05.446 13:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:41:05.446 13:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:41:05.446 13:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:41:05.446 13:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:41:05.446 { 00:41:05.446 "params": { 00:41:05.446 "name": "Nvme$subsystem", 00:41:05.446 "trtype": "$TEST_TRANSPORT", 00:41:05.446 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:05.446 "adrfam": "ipv4", 00:41:05.446 "trsvcid": "$NVMF_PORT", 00:41:05.446 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:05.446 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:05.446 "hdgst": ${hdgst:-false}, 00:41:05.446 "ddgst": ${ddgst:-false} 00:41:05.446 }, 00:41:05.446 "method": "bdev_nvme_attach_controller" 00:41:05.446 } 00:41:05.446 EOF 00:41:05.446 )") 00:41:05.446 13:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:41:05.446 13:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 00:41:05.446 13:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:41:05.446 13:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:41:05.446 "params": { 00:41:05.446 "name": "Nvme1", 00:41:05.446 "trtype": "tcp", 00:41:05.446 "traddr": "10.0.0.2", 00:41:05.446 "adrfam": "ipv4", 00:41:05.446 "trsvcid": "4420", 00:41:05.447 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:05.447 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:05.447 "hdgst": false, 00:41:05.447 "ddgst": false 00:41:05.447 }, 00:41:05.447 "method": "bdev_nvme_attach_controller" 00:41:05.447 }' 00:41:05.447 [2024-12-16 13:02:31.413459] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
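Before bdevperf starts, the target is provisioned through the rpc_cmd calls traced above: a TCP transport with zero-copy enabled, a subsystem, a TCP listener on the namespace-side address, and a malloc bdev exposed as namespace 1. Flattened into plain rpc.py invocations (flags exactly as traced; -o and -c 0 are the TCP transport options the test passes, and the comments are annotations, not script output):

    RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
        # -a: allow any host, -s: serial number, -m: max namespaces
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_malloc_create 32 4096 -b malloc0           # 32 MiB bdev, 4 KiB blocks
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1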
00:41:05.447 [2024-12-16 13:02:31.413501] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid639293 ] 00:41:05.447 [2024-12-16 13:02:31.481335] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:05.706 [2024-12-16 13:02:31.519817] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:41:05.706 Running I/O for 10 seconds... 00:41:08.022 8367.00 IOPS, 65.37 MiB/s [2024-12-16T12:02:35.026Z] 8411.50 IOPS, 65.71 MiB/s [2024-12-16T12:02:35.967Z] 8447.67 IOPS, 66.00 MiB/s [2024-12-16T12:02:36.904Z] 8459.50 IOPS, 66.09 MiB/s [2024-12-16T12:02:37.841Z] 8460.20 IOPS, 66.10 MiB/s [2024-12-16T12:02:38.779Z] 8464.17 IOPS, 66.13 MiB/s [2024-12-16T12:02:39.718Z] 8470.71 IOPS, 66.18 MiB/s [2024-12-16T12:02:41.096Z] 8477.88 IOPS, 66.23 MiB/s [2024-12-16T12:02:42.034Z] 8467.78 IOPS, 66.15 MiB/s [2024-12-16T12:02:42.034Z] 8471.90 IOPS, 66.19 MiB/s 00:41:15.967 Latency(us) 00:41:15.967 [2024-12-16T12:02:42.034Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:15.967 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:41:15.967 Verification LBA range: start 0x0 length 0x1000 00:41:15.967 Nvme1n1 : 10.01 8472.35 66.19 0.00 0.00 15065.78 321.83 21096.35 00:41:15.967 [2024-12-16T12:02:42.034Z] =================================================================================================================== 00:41:15.967 [2024-12-16T12:02:42.034Z] Total : 8472.35 66.19 0.00 0.00 15065.78 321.83 21096.35 00:41:15.967 13:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=640844 00:41:15.967 13:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:41:15.967 13:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:15.967 13:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:41:15.967 13:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:41:15.967 13:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:41:15.967 13:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:41:15.967 13:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:41:15.967 13:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:41:15.967 { 00:41:15.967 "params": { 00:41:15.967 "name": "Nvme$subsystem", 00:41:15.967 "trtype": "$TEST_TRANSPORT", 00:41:15.967 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:15.967 "adrfam": "ipv4", 00:41:15.967 "trsvcid": "$NVMF_PORT", 00:41:15.967 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:15.967 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:15.967 "hdgst": ${hdgst:-false}, 00:41:15.967 "ddgst": ${ddgst:-false} 00:41:15.967 }, 00:41:15.967 "method": "bdev_nvme_attach_controller" 00:41:15.967 } 00:41:15.967 EOF 00:41:15.967 )") 00:41:15.967 13:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:41:15.967 
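gen_nvmf_target_json, whose heredoc expansion is traced above, builds one bdev_nvme_attach_controller JSON fragment per subsystem and hands the assembled document to bdevperf on a /dev/fd path, consistent with process substitution. A rough standalone sketch of that pattern; the real helper in test/nvmf/common.sh also wraps the fragments in the full "subsystems" document bdevperf expects, so treat the function name and joining details here as illustrative:

gen_target_json_sketch() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local joined
    joined=$(IFS=,; printf '%s' "${config[*]}")
    jq . <<<"[$joined]"    # validate and pretty-print the fragment list
}

# feeding it to bdevperf without a temp file, matching the /dev/fd argument above:
# ./build/examples/bdevperf --json <(gen_target_json_sketch 1) -t 10 -q 128 -w verify -o 8192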
[2024-12-16 13:02:41.878765] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.967 [2024-12-16 13:02:41.878799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.967 13:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 00:41:15.967 13:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:41:15.967 13:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:41:15.967 "params": { 00:41:15.967 "name": "Nvme1", 00:41:15.967 "trtype": "tcp", 00:41:15.967 "traddr": "10.0.0.2", 00:41:15.967 "adrfam": "ipv4", 00:41:15.967 "trsvcid": "4420", 00:41:15.967 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:15.967 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:15.967 "hdgst": false, 00:41:15.967 "ddgst": false 00:41:15.967 }, 00:41:15.967 "method": "bdev_nvme_attach_controller" 00:41:15.967 }' 00:41:15.967 [2024-12-16 13:02:41.890731] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.967 [2024-12-16 13:02:41.890745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.967 [2024-12-16 13:02:41.902723] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.967 [2024-12-16 13:02:41.902734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.967 [2024-12-16 13:02:41.914722] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.967 [2024-12-16 13:02:41.914733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.967 [2024-12-16 13:02:41.919286] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:41:15.967 [2024-12-16 13:02:41.919328] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid640844 ] 00:41:15.967 [2024-12-16 13:02:41.926725] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.967 [2024-12-16 13:02:41.926737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.967 [2024-12-16 13:02:41.938723] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.967 [2024-12-16 13:02:41.938734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.967 [2024-12-16 13:02:41.950727] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.967 [2024-12-16 13:02:41.950740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.967 [2024-12-16 13:02:41.962724] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.967 [2024-12-16 13:02:41.962734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.967 [2024-12-16 13:02:41.974726] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.967 [2024-12-16 13:02:41.974736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.967 [2024-12-16 13:02:41.986724] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.967 [2024-12-16 13:02:41.986735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.967 [2024-12-16 13:02:41.988551] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:15.967 [2024-12-16 13:02:41.998725] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.967 [2024-12-16 13:02:41.998739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.967 [2024-12-16 13:02:42.010736] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.967 [2024-12-16 13:02:42.010763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.967 [2024-12-16 13:02:42.022728] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.967 [2024-12-16 13:02:42.022741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.967 [2024-12-16 13:02:42.027599] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:41:16.227 [2024-12-16 13:02:42.034729] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.227 [2024-12-16 13:02:42.034742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.227 [2024-12-16 13:02:42.046740] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.227 [2024-12-16 13:02:42.046763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.227 [2024-12-16 13:02:42.058730] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.227 [2024-12-16 13:02:42.058752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.227 [2024-12-16 13:02:42.070729] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
00:41:16.486 Running I/O for 5 seconds...
00:41:17.266 16483.00 IOPS, 128.77 MiB/s [2024-12-16T12:02:43.333Z]
00:41:18.382 16422.50 IOPS, 128.30 MiB/s [2024-12-16T12:02:44.449Z]
00:41:19.486 16438.67 IOPS, 128.43 MiB/s [2024-12-16T12:02:45.553Z]
13:02:45.366614] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.486 [2024-12-16 13:02:45.366632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.486 [2024-12-16 13:02:45.378094] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.486 [2024-12-16 13:02:45.378120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.486 [2024-12-16 13:02:45.392537] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.486 [2024-12-16 13:02:45.392556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.486 [2024-12-16 13:02:45.406983] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.486 [2024-12-16 13:02:45.407000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.486 [2024-12-16 13:02:45.418383] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.486 [2024-12-16 13:02:45.418401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.486 [2024-12-16 13:02:45.432215] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.486 [2024-12-16 13:02:45.432233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.486 [2024-12-16 13:02:45.446940] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.486 [2024-12-16 13:02:45.446958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.486 [2024-12-16 13:02:45.457926] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.486 [2024-12-16 13:02:45.457945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.486 [2024-12-16 13:02:45.472156] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.486 [2024-12-16 13:02:45.472173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.486 [2024-12-16 13:02:45.486865] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.486 [2024-12-16 13:02:45.486883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.486 [2024-12-16 13:02:45.498614] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.486 [2024-12-16 13:02:45.498632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.486 [2024-12-16 13:02:45.509760] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.486 [2024-12-16 13:02:45.509778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.486 [2024-12-16 13:02:45.524026] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.486 [2024-12-16 13:02:45.524044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.486 [2024-12-16 13:02:45.538930] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.486 [2024-12-16 13:02:45.538947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.486 [2024-12-16 13:02:45.550294] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.486 [2024-12-16 13:02:45.550312] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.746 [2024-12-16 13:02:45.564060] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.746 [2024-12-16 13:02:45.564078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.746 [2024-12-16 13:02:45.578825] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.746 [2024-12-16 13:02:45.578842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.746 [2024-12-16 13:02:45.589946] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.746 [2024-12-16 13:02:45.589965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.746 [2024-12-16 13:02:45.604205] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.746 [2024-12-16 13:02:45.604223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.746 [2024-12-16 13:02:45.618501] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.746 [2024-12-16 13:02:45.618519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.746 [2024-12-16 13:02:45.631061] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.746 [2024-12-16 13:02:45.631078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.746 [2024-12-16 13:02:45.643603] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.746 [2024-12-16 13:02:45.643620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.746 [2024-12-16 13:02:45.658409] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.746 [2024-12-16 13:02:45.658427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.746 [2024-12-16 13:02:45.670157] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.746 [2024-12-16 13:02:45.670174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.746 [2024-12-16 13:02:45.684332] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.746 [2024-12-16 13:02:45.684350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.746 [2024-12-16 13:02:45.698890] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.746 [2024-12-16 13:02:45.698908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.746 [2024-12-16 13:02:45.710421] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.746 [2024-12-16 13:02:45.710439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.746 [2024-12-16 13:02:45.723748] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.746 [2024-12-16 13:02:45.723767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.746 [2024-12-16 13:02:45.738750] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.746 [2024-12-16 13:02:45.738768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.746 [2024-12-16 13:02:45.750050] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.746 [2024-12-16 13:02:45.750068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.746 [2024-12-16 13:02:45.764296] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.746 [2024-12-16 13:02:45.764314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.746 [2024-12-16 13:02:45.778811] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.746 [2024-12-16 13:02:45.778829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.746 [2024-12-16 13:02:45.790193] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.746 [2024-12-16 13:02:45.790211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.746 [2024-12-16 13:02:45.803981] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.746 [2024-12-16 13:02:45.804000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.005 [2024-12-16 13:02:45.818795] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.005 [2024-12-16 13:02:45.818814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.005 [2024-12-16 13:02:45.829700] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.005 [2024-12-16 13:02:45.829718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.005 [2024-12-16 13:02:45.843308] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.005 [2024-12-16 13:02:45.843325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.005 [2024-12-16 13:02:45.859038] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.005 [2024-12-16 13:02:45.859057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.005 [2024-12-16 13:02:45.871816] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.005 [2024-12-16 13:02:45.871833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.005 [2024-12-16 13:02:45.883148] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.005 [2024-12-16 13:02:45.883165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.005 [2024-12-16 13:02:45.898694] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.005 [2024-12-16 13:02:45.898712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.005 [2024-12-16 13:02:45.909648] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.005 [2024-12-16 13:02:45.909667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.005 [2024-12-16 13:02:45.924372] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.005 [2024-12-16 13:02:45.924391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.005 [2024-12-16 13:02:45.938640] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.005 [2024-12-16 13:02:45.938658] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.005 [2024-12-16 13:02:45.949985] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.005 [2024-12-16 13:02:45.950003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.005 [2024-12-16 13:02:45.964288] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.005 [2024-12-16 13:02:45.964306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.005 [2024-12-16 13:02:45.979196] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.005 [2024-12-16 13:02:45.979215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.005 [2024-12-16 13:02:45.994704] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.005 [2024-12-16 13:02:45.994722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.005 [2024-12-16 13:02:46.006476] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.005 [2024-12-16 13:02:46.006495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.006 [2024-12-16 13:02:46.020018] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.006 [2024-12-16 13:02:46.020037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.006 [2024-12-16 13:02:46.034658] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.006 [2024-12-16 13:02:46.034678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.006 [2024-12-16 13:02:46.045705] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.006 [2024-12-16 13:02:46.045723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.006 [2024-12-16 13:02:46.059640] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.006 [2024-12-16 13:02:46.059659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.265 [2024-12-16 13:02:46.074723] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.265 [2024-12-16 13:02:46.074746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.265 [2024-12-16 13:02:46.085931] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.265 [2024-12-16 13:02:46.085950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.265 [2024-12-16 13:02:46.099305] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.265 [2024-12-16 13:02:46.099323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.265 [2024-12-16 13:02:46.110518] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.265 [2024-12-16 13:02:46.110536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.265 [2024-12-16 13:02:46.124320] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.265 [2024-12-16 13:02:46.124338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.265 [2024-12-16 13:02:46.139253] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.265 [2024-12-16 13:02:46.139271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.265 [2024-12-16 13:02:46.154541] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.265 [2024-12-16 13:02:46.154559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.265 [2024-12-16 13:02:46.165763] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.265 [2024-12-16 13:02:46.165781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.265 [2024-12-16 13:02:46.180064] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.265 [2024-12-16 13:02:46.180081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.265 [2024-12-16 13:02:46.194604] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.265 [2024-12-16 13:02:46.194622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.265 [2024-12-16 13:02:46.206241] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.265 [2024-12-16 13:02:46.206259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.265 [2024-12-16 13:02:46.219987] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.265 [2024-12-16 13:02:46.220004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.265 [2024-12-16 13:02:46.234774] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.265 [2024-12-16 13:02:46.234793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.265 [2024-12-16 13:02:46.245867] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.265 [2024-12-16 13:02:46.245886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.265 [2024-12-16 13:02:46.260554] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.265 [2024-12-16 13:02:46.260573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.265 [2024-12-16 13:02:46.275203] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.265 [2024-12-16 13:02:46.275221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.265 [2024-12-16 13:02:46.288167] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.265 [2024-12-16 13:02:46.288186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.265 [2024-12-16 13:02:46.303251] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.265 [2024-12-16 13:02:46.303270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.265 16441.25 IOPS, 128.45 MiB/s [2024-12-16T12:02:46.332Z] [2024-12-16 13:02:46.314414] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.265 [2024-12-16 13:02:46.314433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.265 [2024-12-16 13:02:46.328742] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:41:20.265 [2024-12-16 13:02:46.328768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.524 [2024-12-16 13:02:46.343338] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.524 [2024-12-16 13:02:46.343357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.524 [2024-12-16 13:02:46.358103] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.524 [2024-12-16 13:02:46.358128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.524 [2024-12-16 13:02:46.371058] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.524 [2024-12-16 13:02:46.371075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.525 [2024-12-16 13:02:46.382609] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.525 [2024-12-16 13:02:46.382627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.525 [2024-12-16 13:02:46.396965] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.525 [2024-12-16 13:02:46.396982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.525 [2024-12-16 13:02:46.411812] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.525 [2024-12-16 13:02:46.411831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.525 [2024-12-16 13:02:46.426776] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.525 [2024-12-16 13:02:46.426795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.525 [2024-12-16 13:02:46.437958] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.525 [2024-12-16 13:02:46.437976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.525 [2024-12-16 13:02:46.451460] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.525 [2024-12-16 13:02:46.451478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.525 [2024-12-16 13:02:46.466661] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.525 [2024-12-16 13:02:46.466682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.525 [2024-12-16 13:02:46.478026] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.525 [2024-12-16 13:02:46.478044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.525 [2024-12-16 13:02:46.492242] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.525 [2024-12-16 13:02:46.492261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.525 [2024-12-16 13:02:46.506872] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.525 [2024-12-16 13:02:46.506891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.525 [2024-12-16 13:02:46.518244] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.525 [2024-12-16 13:02:46.518263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.525 [2024-12-16 13:02:46.531108] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.525 [2024-12-16 13:02:46.531134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.525 [2024-12-16 13:02:46.546953] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.525 [2024-12-16 13:02:46.546971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.525 [2024-12-16 13:02:46.562869] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.525 [2024-12-16 13:02:46.562887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.525 [2024-12-16 13:02:46.579248] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.525 [2024-12-16 13:02:46.579267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.784 [2024-12-16 13:02:46.590208] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.784 [2024-12-16 13:02:46.590231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.784 [2024-12-16 13:02:46.603661] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.784 [2024-12-16 13:02:46.603679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.784 [2024-12-16 13:02:46.618931] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.784 [2024-12-16 13:02:46.618950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.784 [2024-12-16 13:02:46.630103] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.784 [2024-12-16 13:02:46.630127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.784 [2024-12-16 13:02:46.643835] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.784 [2024-12-16 13:02:46.643853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.784 [2024-12-16 13:02:46.659032] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.784 [2024-12-16 13:02:46.659050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.784 [2024-12-16 13:02:46.674410] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.784 [2024-12-16 13:02:46.674428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.784 [2024-12-16 13:02:46.686656] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.784 [2024-12-16 13:02:46.686675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.784 [2024-12-16 13:02:46.698594] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.784 [2024-12-16 13:02:46.698613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.784 [2024-12-16 13:02:46.711242] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.784 [2024-12-16 13:02:46.711260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.784 [2024-12-16 13:02:46.726543] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.784 [2024-12-16 13:02:46.726563] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.784 [2024-12-16 13:02:46.737857] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.784 [2024-12-16 13:02:46.737877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.784 [2024-12-16 13:02:46.751751] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.784 [2024-12-16 13:02:46.751768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.784 [2024-12-16 13:02:46.766415] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.784 [2024-12-16 13:02:46.766434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.784 [2024-12-16 13:02:46.779553] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.784 [2024-12-16 13:02:46.779570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.784 [2024-12-16 13:02:46.794519] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.784 [2024-12-16 13:02:46.794536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.784 [2024-12-16 13:02:46.806045] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.784 [2024-12-16 13:02:46.806063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.784 [2024-12-16 13:02:46.820236] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.784 [2024-12-16 13:02:46.820254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.784 [2024-12-16 13:02:46.835072] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.784 [2024-12-16 13:02:46.835089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.784 [2024-12-16 13:02:46.847970] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.784 [2024-12-16 13:02:46.847988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.043 [2024-12-16 13:02:46.862518] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.043 [2024-12-16 13:02:46.862538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.043 [2024-12-16 13:02:46.874146] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.043 [2024-12-16 13:02:46.874163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.043 [2024-12-16 13:02:46.888637] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.043 [2024-12-16 13:02:46.888655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.043 [2024-12-16 13:02:46.903620] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.043 [2024-12-16 13:02:46.903637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.043 [2024-12-16 13:02:46.914531] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.043 [2024-12-16 13:02:46.914549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.043 [2024-12-16 13:02:46.928350] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.043 [2024-12-16 13:02:46.928368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.043 [2024-12-16 13:02:46.943318] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.043 [2024-12-16 13:02:46.943335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.043 [2024-12-16 13:02:46.954197] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.043 [2024-12-16 13:02:46.954216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.043 [2024-12-16 13:02:46.968488] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.043 [2024-12-16 13:02:46.968506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.043 [2024-12-16 13:02:46.983341] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.043 [2024-12-16 13:02:46.983358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.043 [2024-12-16 13:02:46.994421] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.043 [2024-12-16 13:02:46.994438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.043 [2024-12-16 13:02:47.008018] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.043 [2024-12-16 13:02:47.008036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.043 [2024-12-16 13:02:47.022693] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.043 [2024-12-16 13:02:47.022711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.043 [2024-12-16 13:02:47.034884] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.043 [2024-12-16 13:02:47.034901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.043 [2024-12-16 13:02:47.048272] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.043 [2024-12-16 13:02:47.048300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.043 [2024-12-16 13:02:47.063346] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.043 [2024-12-16 13:02:47.063364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.043 [2024-12-16 13:02:47.078374] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.043 [2024-12-16 13:02:47.078392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.043 [2024-12-16 13:02:47.089596] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.043 [2024-12-16 13:02:47.089614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.043 [2024-12-16 13:02:47.103538] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.043 [2024-12-16 13:02:47.103556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.302 [2024-12-16 13:02:47.119068] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.302 [2024-12-16 13:02:47.119087] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.302 [2024-12-16 13:02:47.130923] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.302 [2024-12-16 13:02:47.130940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.302 [2024-12-16 13:02:47.143550] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.302 [2024-12-16 13:02:47.143567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.302 [2024-12-16 13:02:47.159292] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.302 [2024-12-16 13:02:47.159310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.302 [2024-12-16 13:02:47.174691] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.302 [2024-12-16 13:02:47.174710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.302 [2024-12-16 13:02:47.186617] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.302 [2024-12-16 13:02:47.186635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.302 [2024-12-16 13:02:47.200269] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.302 [2024-12-16 13:02:47.200287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.302 [2024-12-16 13:02:47.214671] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.302 [2024-12-16 13:02:47.214688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.302 [2024-12-16 13:02:47.226301] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.302 [2024-12-16 13:02:47.226320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.302 [2024-12-16 13:02:47.240100] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.302 [2024-12-16 13:02:47.240125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.302 [2024-12-16 13:02:47.250351] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.302 [2024-12-16 13:02:47.250379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.302 [2024-12-16 13:02:47.263937] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.302 [2024-12-16 13:02:47.263955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.302 [2024-12-16 13:02:47.279058] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.302 [2024-12-16 13:02:47.279076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.302 [2024-12-16 13:02:47.292435] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.302 [2024-12-16 13:02:47.292453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.302 [2024-12-16 13:02:47.307393] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.302 [2024-12-16 13:02:47.307412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.302 16450.20 IOPS, 128.52 MiB/s 00:41:21.302 Latency(us) 00:41:21.302 
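Each *ERROR* pair above corresponds to one rejected RPC: the test repeatedly asks the target to attach a namespace as NSID 1 while that NSID is still in use, exercising the error path under I/O load. A minimal out-of-harness repro sketch (the bdev name repro_malloc and the relative rpc.py path are illustrative; assumes a running SPDK nvmf target that already exposes nqn.2016-06.io.spdk:cnode1):

#!/usr/bin/env bash
# Hypothetical sketch, not part of zcopy.sh: trigger
# "Requested NSID 1 already in use" on purpose.
rpc=./scripts/rpc.py                                 # SPDK RPC client (path assumed)
nqn=nqn.2016-06.io.spdk:cnode1                       # subsystem assumed to exist

$rpc bdev_malloc_create -b repro_malloc 64 512       # 64 MiB bdev, 512 B blocks
$rpc nvmf_subsystem_add_ns -n 1 "$nqn" repro_malloc  # first attach of NSID 1 succeeds
$rpc nvmf_subsystem_add_ns -n 1 "$nqn" repro_malloc  # second attach fails with
                                                     # "Requested NSID 1 already in use"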
00:41:21.302 Latency(us)
00:41:21.302 [2024-12-16T12:02:47.369Z] Device Information : runtime(s)      IOPS     MiB/s   Fail/s    TO/s   Average       min       max
00:41:21.302 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:41:21.302 Nvme1n1            :       5.01   16451.51    128.53     0.00    0.00   7773.11   2028.50  12919.95
00:41:21.302 [2024-12-16T12:02:47.369Z] ===================================================================================================================
00:41:21.302 [2024-12-16T12:02:47.369Z] Total              :            16451.51    128.53     0.00    0.00   7773.11   2028.50  12919.95
00:41:21.302 [2024-12-16 13:02:47.318730] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:21.302 [2024-12-16 13:02:47.318747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same record pair repeats at ~12 ms intervals from 13:02:47.330 through 13:02:47.486 while the last retries drain ...]
00:41:21.562 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (640844) - No such process
13:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 640844
00:41:21.562 13:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:41:21.562 13:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:41:21.562 13:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:41:21.562 13:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:41:21.562 13:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:41:21.562 13:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:41:21.562 13:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:41:21.562 delay0
13:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:41:21.562 13:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
13:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
13:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:41:21.562 13:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
13:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
[2024-12-16 13:02:47.590020] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:41:28.130 Initializing NVMe Controllers
00:41:28.130 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:41:28.130 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:41:28.130 Initialization complete. Launching workers.
00:41:28.130 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 263, failed: 19625
00:41:28.130 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 19786, failed to submit 102
00:41:28.130 success 19699, unsuccessful 87, failed 0
00:41:28.130 13:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:41:28.130 13:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
13:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # nvmfcleanup
13:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
13:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
13:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
13:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
13:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
13:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
13:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
13:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
13:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@513 -- # '[' -n 639262 ']'
13:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@514 -- # killprocess 639262
13:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 639262 ']'
13:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 639262
13:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname
13:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
13:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 639262
00:41:28.389 13:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1
13:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
13:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 639262'
killing process with pid 639262
13:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 639262
13:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 639262
13:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # '[' '' == iso ']'
13:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
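The killprocess trace above follows a common shell teardown pattern: check the pid is set, probe liveness with kill -0, refuse to signal a sudo wrapper by inspecting the process comm, then kill and reap. A distilled sketch of that pattern (illustrative, not a copy of autotest_common.sh):

#!/usr/bin/env bash
# Illustrative distillation of the killprocess pattern seen above.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                      # no pid recorded
    kill -0 "$pid" 2>/dev/null || return 0         # already gone
    local comm
    comm=$(ps --no-headers -o comm= "$pid")
    [ "$comm" = sudo ] && return 1                 # never signal the sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                        # reap if it is our child
}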
13:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # nvmf_tcp_fini
13:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
13:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-save
13:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-restore
13:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
13:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
13:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
13:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
13:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
13:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:41:30.925 13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:41:30.925
00:41:30.925 real    0m31.581s
00:41:30.925 user    0m40.490s
00:41:30.925 sys     0m12.800s
00:41:30.925 13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable
00:41:30.925 13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:41:30.925 ************************************
00:41:30.925 END TEST nvmf_zcopy
00:41:30.925 ************************************
00:41:30.925 13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable
13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST nvmf_nmic
************************************
13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
* Looking for test storage...
00:41:30.925 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:41:30.925 13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:41:30.925 13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version
00:41:30.925 13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:41:30.925 13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:41:30.925 13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
[... the scripts/common.sh@333-@368 xtrace that follows shows cmp_versions splitting "1.15" and "2" on IFS=.-: into ver1/ver2 (ver1_l=2, ver2_l=1), converting each field with decimal, and returning 0 because ver1[0]=1 < ver2[0]=2 ...]
00:41:30.926 13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:41:30.926 13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:41:30.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:41:30.926 --rc genhtml_branch_coverage=1
00:41:30.926 --rc genhtml_function_coverage=1
00:41:30.926 --rc genhtml_legend=1
00:41:30.926 --rc geninfo_all_blocks=1
00:41:30.926 --rc geninfo_unexecuted_blocks=1
00:41:30.926 '
00:41:30.926 13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' [same --rc block as above] '
00:41:30.926 13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov [same --rc block as above] '
00:41:30.926 13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov [same --rc block as above] '
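The cmp_versions trace summarized above splits both version strings on '.', '-' and ':' and compares them field by field. A simplified standalone sketch of the same comparison (the function name version_lt is illustrative, and numeric fields are assumed; the real implementation lives in scripts/common.sh):

#!/usr/bin/env bash
# Simplified sketch of the field-by-field version comparison traced above.
version_lt() {                        # returns 0 when $1 < $2
    local -a a b
    IFS=.-: read -ra a <<< "$1"       # e.g. "1.15" -> (1 15)
    IFS=.-: read -ra b <<< "$2"       # e.g. "2"    -> (2)
    local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    local v x y
    for (( v = 0; v < n; v++ )); do
        x=${a[v]:-0} y=${b[v]:-0}     # missing fields compare as 0
        (( x > y )) && return 1
        (( x < y )) && return 0
    done
    return 1                          # equal is not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"  # prints: 1.15 < 2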
13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s
13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420
13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn
13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562
13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy
13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob
13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same golangci/protoc/go triple repeated six more times ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... same value as @2, with /opt/go prepended ...]
13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same value as @3, with /opt/protoc prepended ...]
13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH
13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... same PATH value as @4 ...]
13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0
13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args
13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:30.926 13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:30.926 13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:30.926 13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:30.926 13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:30.926 13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:30.926 13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:30.926 13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:41:30.926 13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:41:30.926 13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:30.926 13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@472 -- # prepare_net_devs 00:41:30.926 13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@434 -- # local -g is_hw=no 00:41:30.926 13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@436 -- # remove_spdk_ns 00:41:30.926 13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:30.926 13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:30.926 13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:30.926 13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:41:30.926 13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:41:30.926 13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:41:30.926 13:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:37.497 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:37.497 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:41:37.497 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:37.497 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:37.497 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:37.497 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:37.497 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:37.497 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:41:37.497 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:37.497 13:03:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:41:37.497 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:41:37.497 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:41:37.497 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:41:37.497 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:41:37.497 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:41:37.497 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:37.497 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:37.497 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:37.497 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:37.497 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:41:37.498 Found 0000:af:00.0 (0x8086 - 0x159b) 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:41:37.498 13:03:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:41:37.498 Found 0000:af:00.1 (0x8086 - 0x159b) 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ up == up ]] 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:41:37.498 Found net devices under 0000:af:00.0: cvl_0_0 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ up == up ]] 00:41:37.498 13:03:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:41:37.498 Found net devices under 0000:af:00.1: cvl_0_1 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # is_hw=yes 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 
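The nvmf_tcp_init steps traced here build SPDK's loopback test topology: the target-side port (cvl_0_0) is moved into its own network namespace so that the initiator in the root namespace and the target genuinely exchange NVMe/TCP traffic over the wire. The remaining steps (bringing up the namespaced links, opening port 4420, and ping sanity checks) follow below. Condensed into a standalone sketch with plain iproute2/iptables, assuming the cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addressing seen in this log rather than a general-purpose setup:

  # Target port lives in its own namespace; initiator port stays in the root ns.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                 # root ns can reach the namespaced target

Once this topology is up, the target is launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt) and listens on 10.0.0.2, which is what the trace below goes on to do.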
00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:37.498 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:37.498 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:41:37.498 00:41:37.498 --- 10.0.0.2 ping statistics --- 00:41:37.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:37.498 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:37.498 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:37.498 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:41:37.498 00:41:37.498 --- 10.0.0.1 ping statistics --- 00:41:37.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:37.498 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # return 0 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@505 -- # nvmfpid=646385 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@506 -- # waitforlisten 646385 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 646385 ']' 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:37.498 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:37.499 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:37.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:37.499 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:37.499 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:37.499 [2024-12-16 13:03:02.772756] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:37.499 [2024-12-16 13:03:02.773655] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:41:37.499 [2024-12-16 13:03:02.773689] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:37.499 [2024-12-16 13:03:02.848207] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:37.499 [2024-12-16 13:03:02.891036] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:37.499 [2024-12-16 13:03:02.891074] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:37.499 [2024-12-16 13:03:02.891082] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:37.499 [2024-12-16 13:03:02.891087] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:37.499 [2024-12-16 13:03:02.891092] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:37.499 [2024-12-16 13:03:02.891158] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:41:37.499 [2024-12-16 13:03:02.891652] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:41:37.499 [2024-12-16 13:03:02.891735] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:41:37.499 [2024-12-16 13:03:02.891736] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:41:37.499 [2024-12-16 13:03:02.967160] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:37.499 [2024-12-16 13:03:02.967547] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:37.499 [2024-12-16 13:03:02.967852] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:41:37.499 [2024-12-16 13:03:02.968279] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:41:37.499 [2024-12-16 13:03:02.968828] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:41:37.499 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:37.499 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:41:37.499 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:41:37.499 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:37.499 13:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:37.499 13:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:37.499 13:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:37.499 13:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:37.499 13:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:37.499 [2024-12-16 13:03:03.036469] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:37.499 13:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:37.499 13:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:37.499 13:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:37.499 13:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:37.499 Malloc0 00:41:37.499 13:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:37.499 13:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:41:37.499 13:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:37.499 13:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:37.499 13:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:37.499 13:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:37.499 13:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:37.499 13:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:37.499 13:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:37.499 13:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:37.499 13:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:37.499 13:03:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:37.499 [2024-12-16 13:03:03.108683] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:37.499 13:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:37.499 13:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:41:37.499 test case1: single bdev can't be used in multiple subsystems 00:41:37.499 13:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:41:37.499 13:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:37.499 13:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:37.499 13:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:37.499 13:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:41:37.499 13:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:37.499 13:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:37.499 13:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:37.499 13:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:41:37.499 13:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:41:37.499 13:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:37.499 13:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:37.499 [2024-12-16 13:03:03.140200] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:41:37.499 [2024-12-16 13:03:03.140223] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:41:37.499 [2024-12-16 13:03:03.140231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:37.499 request: 00:41:37.499 { 00:41:37.499 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:41:37.499 "namespace": { 00:41:37.499 "bdev_name": "Malloc0", 00:41:37.499 "no_auto_visible": false 00:41:37.499 }, 00:41:37.499 "method": "nvmf_subsystem_add_ns", 00:41:37.499 "req_id": 1 00:41:37.499 } 00:41:37.499 Got JSON-RPC error response 00:41:37.499 response: 00:41:37.499 { 00:41:37.499 "code": -32602, 00:41:37.499 "message": "Invalid parameters" 00:41:37.499 } 00:41:37.499 13:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:41:37.499 13:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:41:37.499 13:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:41:37.499 13:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:41:37.499 Adding namespace failed - expected result. 00:41:37.499 13:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:41:37.499 test case2: host connect to nvmf target in multiple paths 00:41:37.499 13:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:41:37.499 13:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:37.499 13:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:37.499 [2024-12-16 13:03:03.152305] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:41:37.499 13:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:37.499 13:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:41:37.499 13:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:41:37.758 13:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:41:37.758 13:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:41:37.759 13:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:41:37.759 13:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:41:37.759 13:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:41:39.670 13:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:41:39.670 13:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:41:39.670 13:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:41:39.670 13:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:41:39.670 13:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:41:39.670 13:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:41:39.670 13:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:41:39.670 [global] 00:41:39.670 thread=1 00:41:39.670 invalidate=1 00:41:39.670 rw=write 00:41:39.670 time_based=1 00:41:39.670 runtime=1 00:41:39.670 ioengine=libaio 00:41:39.670 direct=1 00:41:39.670 bs=4096 00:41:39.670 iodepth=1 
00:41:39.670 norandommap=0 00:41:39.670 numjobs=1 00:41:39.670 00:41:39.670 verify_dump=1 00:41:39.670 verify_backlog=512 00:41:39.670 verify_state_save=0 00:41:39.670 do_verify=1 00:41:39.670 verify=crc32c-intel 00:41:39.670 [job0] 00:41:39.670 filename=/dev/nvme0n1 00:41:39.670 Could not set queue depth (nvme0n1) 00:41:39.929 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:39.929 fio-3.35 00:41:39.929 Starting 1 thread 00:41:41.306 00:41:41.307 job0: (groupid=0, jobs=1): err= 0: pid=647018: Mon Dec 16 13:03:07 2024 00:41:41.307 read: IOPS=2259, BW=9039KiB/s (9256kB/s)(9048KiB/1001msec) 00:41:41.307 slat (nsec): min=7098, max=43242, avg=8309.39, stdev=1838.86 00:41:41.307 clat (usec): min=184, max=40680, avg=241.96, stdev=851.12 00:41:41.307 lat (usec): min=196, max=40690, avg=250.27, stdev=851.16 00:41:41.307 clat percentiles (usec): 00:41:41.307 | 1.00th=[ 194], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 204], 00:41:41.307 | 30.00th=[ 208], 40.00th=[ 210], 50.00th=[ 215], 60.00th=[ 219], 00:41:41.307 | 70.00th=[ 225], 80.00th=[ 258], 90.00th=[ 265], 95.00th=[ 269], 00:41:41.307 | 99.00th=[ 326], 99.50th=[ 388], 99.90th=[ 396], 99.95th=[ 416], 00:41:41.307 | 99.99th=[40633] 00:41:41.307 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:41:41.307 slat (usec): min=10, max=27607, avg=22.78, stdev=545.41 00:41:41.307 clat (usec): min=115, max=326, avg=141.16, stdev= 9.36 00:41:41.307 lat (usec): min=139, max=27897, avg=163.94, stdev=548.43 00:41:41.307 clat percentiles (usec): 00:41:41.307 | 1.00th=[ 131], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 137], 00:41:41.307 | 30.00th=[ 137], 40.00th=[ 139], 50.00th=[ 141], 60.00th=[ 141], 00:41:41.307 | 70.00th=[ 143], 80.00th=[ 145], 90.00th=[ 149], 95.00th=[ 153], 00:41:41.307 | 99.00th=[ 178], 99.50th=[ 190], 99.90th=[ 260], 99.95th=[ 289], 00:41:41.307 | 99.99th=[ 326] 00:41:41.307 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:41:41.307 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:41:41.307 lat (usec) : 250=89.20%, 500=10.78% 00:41:41.307 lat (msec) : 50=0.02% 00:41:41.307 cpu : usr=4.40%, sys=7.30%, ctx=4825, majf=0, minf=1 00:41:41.307 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:41.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:41.307 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:41.307 issued rwts: total=2262,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:41.307 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:41.307 00:41:41.307 Run status group 0 (all jobs): 00:41:41.307 READ: bw=9039KiB/s (9256kB/s), 9039KiB/s-9039KiB/s (9256kB/s-9256kB/s), io=9048KiB (9265kB), run=1001-1001msec 00:41:41.307 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:41:41.307 00:41:41.307 Disk stats (read/write): 00:41:41.307 nvme0n1: ios=2074/2211, merge=0/0, ticks=1470/284, in_queue=1754, util=98.60% 00:41:41.307 13:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:41:41.307 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:41:41.307 13:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:41:41.307 13:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@1219 -- # local i=0 00:41:41.307 13:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:41:41.307 13:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:41.307 13:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:41:41.307 13:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:41.307 13:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:41:41.307 13:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:41:41.307 13:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:41:41.307 13:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # nvmfcleanup 00:41:41.307 13:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:41:41.307 13:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:41.307 13:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:41:41.307 13:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:41.307 13:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:41.307 rmmod nvme_tcp 00:41:41.307 rmmod nvme_fabrics 00:41:41.307 rmmod nvme_keyring 00:41:41.307 13:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:41.307 13:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:41:41.307 13:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:41:41.307 13:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@513 -- # '[' -n 646385 ']' 00:41:41.307 13:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@514 -- # killprocess 646385 00:41:41.307 13:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 646385 ']' 00:41:41.307 13:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 646385 00:41:41.307 13:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:41:41.307 13:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:41.307 13:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 646385 00:41:41.566 13:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:41:41.566 13:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:41:41.566 13:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 646385' 00:41:41.566 killing process with pid 646385 00:41:41.566 13:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 646385 
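At this point the controllers are disconnected, the nvme-tcp/nvme-fabrics modules are unloaded, and nvmftestfini is tearing down the target: killprocess has just issued the kill, and the wait that reaps the pid follows. The helper guards against killing the wrong process by checking the process name first. A condensed sketch of that pattern, using the pid from this run and assuming (as here) that the target was launched by the same shell so wait can reap it:

  pid=646385                                # nvmfpid recorded when nvmf_tgt started
  name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 for an SPDK app
  if [ "$name" != sudo ]; then              # refuse to kill a wrapper sudo process
      kill "$pid"
  fi
  wait "$pid"                               # reap it before restoring iptables and the netns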
00:41:41.566 13:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 646385 00:41:41.566 13:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:41:41.566 13:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:41:41.566 13:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:41:41.566 13:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:41:41.566 13:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-save 00:41:41.566 13:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:41:41.566 13:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-restore 00:41:41.566 13:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:41.566 13:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:41.566 13:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:41.566 13:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:41.566 13:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:44.103 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:44.103 00:41:44.103 real 0m13.139s 00:41:44.103 user 0m24.382s 00:41:44.103 sys 0m6.133s 00:41:44.103 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:44.103 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:44.103 ************************************ 00:41:44.103 END TEST nvmf_nmic 00:41:44.103 ************************************ 00:41:44.103 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:41:44.103 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:41:44.103 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:44.103 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:44.103 ************************************ 00:41:44.103 START TEST nvmf_fio_target 00:41:44.103 ************************************ 00:41:44.103 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:41:44.103 * Looking for test storage... 
00:41:44.104 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:41:44.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:44.104 --rc genhtml_branch_coverage=1 00:41:44.104 --rc genhtml_function_coverage=1 00:41:44.104 --rc genhtml_legend=1 00:41:44.104 --rc geninfo_all_blocks=1 00:41:44.104 --rc geninfo_unexecuted_blocks=1 00:41:44.104 00:41:44.104 ' 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:41:44.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:44.104 --rc genhtml_branch_coverage=1 00:41:44.104 --rc genhtml_function_coverage=1 00:41:44.104 --rc genhtml_legend=1 00:41:44.104 --rc geninfo_all_blocks=1 00:41:44.104 --rc geninfo_unexecuted_blocks=1 00:41:44.104 00:41:44.104 ' 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:41:44.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:44.104 --rc genhtml_branch_coverage=1 00:41:44.104 --rc genhtml_function_coverage=1 00:41:44.104 --rc genhtml_legend=1 00:41:44.104 --rc geninfo_all_blocks=1 00:41:44.104 --rc geninfo_unexecuted_blocks=1 00:41:44.104 00:41:44.104 ' 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:41:44.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:44.104 --rc genhtml_branch_coverage=1 00:41:44.104 --rc genhtml_function_coverage=1 00:41:44.104 --rc genhtml_legend=1 00:41:44.104 --rc geninfo_all_blocks=1 00:41:44.104 --rc geninfo_unexecuted_blocks=1 00:41:44.104 
00:41:44.104 ' 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:44.104 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:41:44.105 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:44.105 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:41:44.105 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:44.105 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:44.105 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:44.105 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:44.105 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:41:44.105 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:44.105 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:44.105 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:44.105 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:44.105 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:44.105 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:44.105 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:44.105 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:44.105 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:41:44.105 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:41:44.105 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:44.105 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:41:44.105 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:41:44.105 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:41:44.105 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:44.105 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:44.105 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:44.105 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:41:44.105 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:41:44.105 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:41:44.105 13:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:50.677 13:03:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@364 -- # 
for pci in "${pci_devs[@]}" 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:41:50.677 Found 0000:af:00.0 (0x8086 - 0x159b) 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:41:50.677 Found 0000:af:00.1 (0x8086 - 0x159b) 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:41:50.677 Found net devices under 0000:af:00.0: cvl_0_0 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:41:50.677 13:03:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:41:50.677 Found net devices under 0000:af:00.1: cvl_0_1 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # is_hw=yes 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:50.677 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:50.678 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:50.678 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:50.678 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:50.678 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:50.678 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:50.678 13:03:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:50.678 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:50.678 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:50.678 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:50.678 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:50.678 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:50.678 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:50.678 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:50.678 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:50.678 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:50.678 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:50.678 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:50.678 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.309 ms 00:41:50.678 00:41:50.678 --- 10.0.0.2 ping statistics --- 00:41:50.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:50.678 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:41:50.678 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:50.678 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:50.678 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:41:50.678 00:41:50.678 --- 10.0.0.1 ping statistics --- 00:41:50.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:50.678 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:41:50.678 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:50.678 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # return 0 00:41:50.678 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:41:50.678 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:50.678 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:41:50.678 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:41:50.678 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:50.678 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:41:50.678 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:41:50.678 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:41:50.678 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:41:50.678 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:50.678 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:50.678 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@505 -- # nvmfpid=650978 00:41:50.678 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@506 -- # waitforlisten 650978 00:41:50.678 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:41:50.678 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 650978 ']' 00:41:50.678 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:50.678 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:50.678 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:50.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
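While the target comes up, it helps to see the topology that the nvmf_tcp_init trace above just built: the target-side port is hidden in its own network namespace so that initiator and target traffic crosses a real TCP path between two physical ports on the same host. A minimal sketch of that wiring, using the interface names this particular rig detected (cvl_0_0 / cvl_0_1 — on other hardware the names will differ):

    #!/usr/bin/env bash
    # Sketch of the wiring performed by nvmf_tcp_init in the trace above.
    set -e
    TARGET_IF=cvl_0_0        # target-side port (from this rig's PCI scan)
    INITIATOR_IF=cvl_0_1     # initiator-side port
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"              # move target port into the namespace
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"       # initiator IP, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"   # target IP, inside namespace
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    # open the NVMe/TCP port (the log's ipts wrapper adds an SPDK_NVMF comment tag)
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                       # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> initiator

The two ping checks above correspond to the 0.309 ms / 0.175 ms round trips recorded in the log.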
00:41:50.678 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:50.678 13:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:50.678 [2024-12-16 13:03:15.817596] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:50.678 [2024-12-16 13:03:15.818551] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:41:50.678 [2024-12-16 13:03:15.818589] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:50.678 [2024-12-16 13:03:15.893564] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:50.678 [2024-12-16 13:03:15.934744] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:50.678 [2024-12-16 13:03:15.934783] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:50.678 [2024-12-16 13:03:15.934789] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:50.678 [2024-12-16 13:03:15.934795] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:50.678 [2024-12-16 13:03:15.934800] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:50.678 [2024-12-16 13:03:15.934860] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:41:50.678 [2024-12-16 13:03:15.934938] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:41:50.678 [2024-12-16 13:03:15.935046] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:41:50.678 [2024-12-16 13:03:15.935048] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:41:50.678 [2024-12-16 13:03:16.010056] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:50.678 [2024-12-16 13:03:16.010531] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:50.678 [2024-12-16 13:03:16.010812] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:41:50.678 [2024-12-16 13:03:16.010951] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:50.678 [2024-12-16 13:03:16.011409] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
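The launch itself, from the nvmfappstart/waitforlisten trace above, reduces to running nvmf_tgt inside that namespace in interrupt mode. A condensed sketch (workspace path shortened):

    # Four cores (-m 0xF), shared-memory id 0 (-i 0), all tracepoint groups
    # (-e 0xFFFF), and --interrupt-mode so the reactors sleep on file
    # descriptors instead of busy-polling; the thread.c notices above confirm
    # each nvmf_tgt poll group and app_thread switched to interrupt mode.
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
    nvmfpid=$!
    # waitforlisten: block until the app accepts connections on
    # /var/tmp/spdk.sock before any rpc.py call is issued.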
00:41:50.678 13:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:50.678 13:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:41:50.678 13:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:41:50.678 13:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:50.678 13:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:50.678 13:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:50.678 13:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:41:50.678 [2024-12-16 13:03:16.243875] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:50.678 13:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:50.678 13:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:41:50.678 13:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:50.678 13:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:41:50.678 13:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:50.938 13:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:41:50.938 13:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:51.197 13:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:41:51.197 13:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:41:51.456 13:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:51.456 13:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:41:51.456 13:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:51.715 13:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:41:51.715 13:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:51.974 13:03:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:41:51.974 13:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:41:52.232 13:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:41:52.232 13:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:41:52.233 13:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:52.492 13:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:41:52.492 13:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:41:52.751 13:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:53.010 [2024-12-16 13:03:18.863800] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:53.010 13:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:41:53.269 13:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:41:53.269 13:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:41:53.528 13:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:41:53.528 13:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:41:53.528 13:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:41:53.528 13:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:41:53.528 13:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:41:53.528 13:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:41:55.431 13:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:41:55.431 13:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o 
NAME,SERIAL 00:41:55.431 13:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:41:55.690 13:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:41:55.690 13:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:41:55.690 13:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:41:55.690 13:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:41:55.690 [global] 00:41:55.690 thread=1 00:41:55.690 invalidate=1 00:41:55.690 rw=write 00:41:55.690 time_based=1 00:41:55.690 runtime=1 00:41:55.690 ioengine=libaio 00:41:55.690 direct=1 00:41:55.690 bs=4096 00:41:55.690 iodepth=1 00:41:55.690 norandommap=0 00:41:55.690 numjobs=1 00:41:55.690 00:41:55.690 verify_dump=1 00:41:55.690 verify_backlog=512 00:41:55.690 verify_state_save=0 00:41:55.690 do_verify=1 00:41:55.690 verify=crc32c-intel 00:41:55.690 [job0] 00:41:55.690 filename=/dev/nvme0n1 00:41:55.690 [job1] 00:41:55.690 filename=/dev/nvme0n2 00:41:55.690 [job2] 00:41:55.690 filename=/dev/nvme0n3 00:41:55.690 [job3] 00:41:55.690 filename=/dev/nvme0n4 00:41:55.690 Could not set queue depth (nvme0n1) 00:41:55.690 Could not set queue depth (nvme0n2) 00:41:55.690 Could not set queue depth (nvme0n3) 00:41:55.690 Could not set queue depth (nvme0n4) 00:41:55.949 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:55.949 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:55.949 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:55.949 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:55.949 fio-3.35 00:41:55.949 Starting 4 threads 00:41:57.327 00:41:57.327 job0: (groupid=0, jobs=1): err= 0: pid=652191: Mon Dec 16 13:03:23 2024 00:41:57.327 read: IOPS=2156, BW=8627KiB/s (8834kB/s)(8636KiB/1001msec) 00:41:57.327 slat (nsec): min=4845, max=26509, avg=6268.43, stdev=822.43 00:41:57.327 clat (usec): min=195, max=589, avg=252.83, stdev=67.67 00:41:57.327 lat (usec): min=202, max=597, avg=259.10, stdev=67.86 00:41:57.327 clat percentiles (usec): 00:41:57.327 | 1.00th=[ 206], 5.00th=[ 212], 10.00th=[ 215], 20.00th=[ 221], 00:41:57.327 | 30.00th=[ 223], 40.00th=[ 227], 50.00th=[ 231], 60.00th=[ 237], 00:41:57.327 | 70.00th=[ 247], 80.00th=[ 260], 90.00th=[ 289], 95.00th=[ 449], 00:41:57.327 | 99.00th=[ 529], 99.50th=[ 545], 99.90th=[ 562], 99.95th=[ 570], 00:41:57.327 | 99.99th=[ 586] 00:41:57.327 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:41:57.327 slat (usec): min=5, max=599, avg= 7.40, stdev=11.75 00:41:57.327 clat (usec): min=118, max=683, avg=161.50, stdev=28.20 00:41:57.327 lat (usec): min=125, max=794, avg=168.90, stdev=31.00 00:41:57.327 clat percentiles (usec): 00:41:57.327 | 1.00th=[ 133], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 147], 00:41:57.327 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 159], 00:41:57.327 | 70.00th=[ 163], 80.00th=[ 169], 90.00th=[ 184], 95.00th=[ 198], 00:41:57.327 | 99.00th=[ 281], 99.50th=[ 314], 
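For orientation while reading the results below: the four devices fio is writing to were provisioned by the rpc.py trace above, which condenses to the following sequence (rpc.py path shortened; serial and NQNs exactly as in the log):

    RPC=scripts/rpc.py   # the log uses the full jenkins workspace path

    $RPC nvmf_create_transport -t tcp -o -u 8192       # TCP transport, 8 KiB in-capsule data
    for i in 0 1 2 3 4 5 6; do
        $RPC bdev_malloc_create 64 512                 # Malloc0..Malloc6: 64 MiB, 512 B blocks
    done
    $RPC bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
    $RPC bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"   # values generated earlier in the test

That yields namespaces Malloc0, Malloc1, raid0 and concat0, which appear on the initiator as /dev/nvme0n1 through /dev/nvme0n4; waitforserial above confirmed this by counting four SPDKISFASTANDAWESOME entries in lsblk before starting fio.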
99.90th=[ 519], 99.95th=[ 553], 00:41:57.327 | 99.99th=[ 685] 00:41:57.327 bw ( KiB/s): min=11432, max=11432, per=32.38%, avg=11432.00, stdev= 0.00, samples=1 00:41:57.327 iops : min= 2858, max= 2858, avg=2858.00, stdev= 0.00, samples=1 00:41:57.327 lat (usec) : 250=87.18%, 500=11.13%, 750=1.70% 00:41:57.327 cpu : usr=1.20%, sys=3.60%, ctx=4723, majf=0, minf=1 00:41:57.327 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:57.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.327 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.327 issued rwts: total=2159,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:57.327 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:57.327 job1: (groupid=0, jobs=1): err= 0: pid=652193: Mon Dec 16 13:03:23 2024 00:41:57.327 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:41:57.327 slat (nsec): min=6809, max=23992, avg=7778.16, stdev=933.70 00:41:57.327 clat (usec): min=188, max=534, avg=269.07, stdev=73.91 00:41:57.327 lat (usec): min=195, max=544, avg=276.84, stdev=73.95 00:41:57.327 clat percentiles (usec): 00:41:57.327 | 1.00th=[ 204], 5.00th=[ 212], 10.00th=[ 217], 20.00th=[ 225], 00:41:57.327 | 30.00th=[ 233], 40.00th=[ 241], 50.00th=[ 249], 60.00th=[ 255], 00:41:57.327 | 70.00th=[ 265], 80.00th=[ 277], 90.00th=[ 343], 95.00th=[ 494], 00:41:57.327 | 99.00th=[ 510], 99.50th=[ 519], 99.90th=[ 523], 99.95th=[ 529], 00:41:57.327 | 99.99th=[ 537] 00:41:57.327 write: IOPS=2212, BW=8851KiB/s (9064kB/s)(8860KiB/1001msec); 0 zone resets 00:41:57.327 slat (nsec): min=9469, max=47244, avg=11196.93, stdev=2101.05 00:41:57.327 clat (usec): min=129, max=1029, avg=178.75, stdev=33.96 00:41:57.327 lat (usec): min=140, max=1042, avg=189.95, stdev=34.26 00:41:57.327 clat percentiles (usec): 00:41:57.327 | 1.00th=[ 141], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 159], 00:41:57.327 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 180], 00:41:57.327 | 70.00th=[ 186], 80.00th=[ 192], 90.00th=[ 206], 95.00th=[ 235], 00:41:57.327 | 99.00th=[ 273], 99.50th=[ 289], 99.90th=[ 553], 99.95th=[ 619], 00:41:57.327 | 99.99th=[ 1029] 00:41:57.327 bw ( KiB/s): min= 8192, max= 8192, per=23.21%, avg=8192.00, stdev= 0.00, samples=1 00:41:57.327 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:41:57.327 lat (usec) : 250=76.21%, 500=21.91%, 750=1.85% 00:41:57.327 lat (msec) : 2=0.02% 00:41:57.327 cpu : usr=3.60%, sys=6.50%, ctx=4264, majf=0, minf=2 00:41:57.327 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:57.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.327 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.327 issued rwts: total=2048,2215,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:57.327 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:57.327 job2: (groupid=0, jobs=1): err= 0: pid=652194: Mon Dec 16 13:03:23 2024 00:41:57.327 read: IOPS=1483, BW=5934KiB/s (6077kB/s)(6160KiB/1038msec) 00:41:57.327 slat (nsec): min=6977, max=26255, avg=7793.37, stdev=1173.08 00:41:57.327 clat (usec): min=204, max=41716, avg=410.09, stdev=2083.44 00:41:57.327 lat (usec): min=212, max=41726, avg=417.88, stdev=2084.11 00:41:57.327 clat percentiles (usec): 00:41:57.328 | 1.00th=[ 219], 5.00th=[ 233], 10.00th=[ 237], 20.00th=[ 245], 00:41:57.328 | 30.00th=[ 253], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 277], 00:41:57.328 | 70.00th=[ 297], 80.00th=[ 347], 
90.00th=[ 486], 95.00th=[ 502], 00:41:57.328 | 99.00th=[ 515], 99.50th=[ 523], 99.90th=[41157], 99.95th=[41681], 00:41:57.328 | 99.99th=[41681] 00:41:57.328 write: IOPS=1973, BW=7892KiB/s (8082kB/s)(8192KiB/1038msec); 0 zone resets 00:41:57.328 slat (nsec): min=9499, max=42633, avg=10826.34, stdev=1492.96 00:41:57.328 clat (usec): min=128, max=330, avg=177.29, stdev=17.20 00:41:57.328 lat (usec): min=139, max=340, avg=188.11, stdev=17.31 00:41:57.328 clat percentiles (usec): 00:41:57.328 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 163], 00:41:57.328 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 180], 00:41:57.328 | 70.00th=[ 184], 80.00th=[ 188], 90.00th=[ 196], 95.00th=[ 210], 00:41:57.328 | 99.00th=[ 237], 99.50th=[ 241], 99.90th=[ 251], 99.95th=[ 265], 00:41:57.328 | 99.99th=[ 330] 00:41:57.328 bw ( KiB/s): min= 8192, max= 8192, per=23.21%, avg=8192.00, stdev= 0.00, samples=2 00:41:57.328 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:41:57.328 lat (usec) : 250=68.62%, 500=29.18%, 750=2.09% 00:41:57.328 lat (msec) : 50=0.11% 00:41:57.328 cpu : usr=1.45%, sys=3.66%, ctx=3588, majf=0, minf=1 00:41:57.328 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:57.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.328 issued rwts: total=1540,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:57.328 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:57.328 job3: (groupid=0, jobs=1): err= 0: pid=652195: Mon Dec 16 13:03:23 2024 00:41:57.328 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:41:57.328 slat (nsec): min=6602, max=15863, avg=7323.45, stdev=575.49 00:41:57.328 clat (usec): min=217, max=463, avg=251.49, stdev=13.76 00:41:57.328 lat (usec): min=225, max=470, avg=258.82, stdev=13.80 00:41:57.328 clat percentiles (usec): 00:41:57.328 | 1.00th=[ 229], 5.00th=[ 233], 10.00th=[ 237], 20.00th=[ 241], 00:41:57.328 | 30.00th=[ 245], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 253], 00:41:57.328 | 70.00th=[ 258], 80.00th=[ 262], 90.00th=[ 269], 95.00th=[ 273], 00:41:57.328 | 99.00th=[ 285], 99.50th=[ 289], 99.90th=[ 396], 99.95th=[ 400], 00:41:57.328 | 99.99th=[ 465] 00:41:57.328 write: IOPS=2335, BW=9343KiB/s (9567kB/s)(9352KiB/1001msec); 0 zone resets 00:41:57.328 slat (nsec): min=4180, max=39626, avg=10353.13, stdev=1279.65 00:41:57.328 clat (usec): min=155, max=841, avg=186.56, stdev=20.63 00:41:57.328 lat (usec): min=166, max=851, avg=196.91, stdev=20.53 00:41:57.328 clat percentiles (usec): 00:41:57.328 | 1.00th=[ 165], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 176], 00:41:57.328 | 30.00th=[ 180], 40.00th=[ 182], 50.00th=[ 184], 60.00th=[ 188], 00:41:57.328 | 70.00th=[ 190], 80.00th=[ 194], 90.00th=[ 200], 95.00th=[ 206], 00:41:57.328 | 99.00th=[ 262], 99.50th=[ 281], 99.90th=[ 314], 99.95th=[ 375], 00:41:57.328 | 99.99th=[ 840] 00:41:57.328 bw ( KiB/s): min= 9272, max= 9272, per=26.26%, avg=9272.00, stdev= 0.00, samples=1 00:41:57.328 iops : min= 2318, max= 2318, avg=2318.00, stdev= 0.00, samples=1 00:41:57.328 lat (usec) : 250=75.58%, 500=24.40%, 1000=0.02% 00:41:57.328 cpu : usr=2.20%, sys=4.10%, ctx=4386, majf=0, minf=2 00:41:57.328 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:57.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:41:57.328 issued rwts: total=2048,2338,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:57.328 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:57.328 00:41:57.328 Run status group 0 (all jobs): 00:41:57.328 READ: bw=29.3MiB/s (30.8MB/s), 5934KiB/s-8627KiB/s (6077kB/s-8834kB/s), io=30.4MiB (31.9MB), run=1001-1038msec 00:41:57.328 WRITE: bw=34.5MiB/s (36.1MB/s), 7892KiB/s-9.99MiB/s (8082kB/s-10.5MB/s), io=35.8MiB (37.5MB), run=1001-1038msec 00:41:57.328 00:41:57.328 Disk stats (read/write): 00:41:57.328 nvme0n1: ios=1840/2048, merge=0/0, ticks=699/328, in_queue=1027, util=99.80% 00:41:57.328 nvme0n2: ios=1536/1715, merge=0/0, ticks=423/291, in_queue=714, util=81.39% 00:41:57.328 nvme0n3: ios=1539/2048, merge=0/0, ticks=582/349, in_queue=931, util=89.70% 00:41:57.328 nvme0n4: ios=1536/1926, merge=0/0, ticks=378/352, in_queue=730, util=88.93% 00:41:57.328 13:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:41:57.328 [global] 00:41:57.328 thread=1 00:41:57.328 invalidate=1 00:41:57.328 rw=randwrite 00:41:57.328 time_based=1 00:41:57.328 runtime=1 00:41:57.328 ioengine=libaio 00:41:57.328 direct=1 00:41:57.328 bs=4096 00:41:57.328 iodepth=1 00:41:57.328 norandommap=0 00:41:57.328 numjobs=1 00:41:57.328 00:41:57.328 verify_dump=1 00:41:57.328 verify_backlog=512 00:41:57.328 verify_state_save=0 00:41:57.328 do_verify=1 00:41:57.328 verify=crc32c-intel 00:41:57.328 [job0] 00:41:57.328 filename=/dev/nvme0n1 00:41:57.328 [job1] 00:41:57.328 filename=/dev/nvme0n2 00:41:57.328 [job2] 00:41:57.328 filename=/dev/nvme0n3 00:41:57.328 [job3] 00:41:57.328 filename=/dev/nvme0n4 00:41:57.328 Could not set queue depth (nvme0n1) 00:41:57.328 Could not set queue depth (nvme0n2) 00:41:57.328 Could not set queue depth (nvme0n3) 00:41:57.328 Could not set queue depth (nvme0n4) 00:41:57.587 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:57.587 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:57.587 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:57.587 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:57.587 fio-3.35 00:41:57.587 Starting 4 threads 00:41:58.992 00:41:58.992 job0: (groupid=0, jobs=1): err= 0: pid=652553: Mon Dec 16 13:03:24 2024 00:41:58.992 read: IOPS=552, BW=2209KiB/s (2263kB/s)(2236KiB/1012msec) 00:41:58.992 slat (nsec): min=6546, max=26019, avg=7810.68, stdev=2744.27 00:41:58.992 clat (usec): min=203, max=41509, avg=1420.65, stdev=6840.93 00:41:58.992 lat (usec): min=210, max=41516, avg=1428.46, stdev=6841.20 00:41:58.992 clat percentiles (usec): 00:41:58.992 | 1.00th=[ 206], 5.00th=[ 231], 10.00th=[ 239], 20.00th=[ 243], 00:41:58.992 | 30.00th=[ 245], 40.00th=[ 247], 50.00th=[ 247], 60.00th=[ 249], 00:41:58.992 | 70.00th=[ 249], 80.00th=[ 251], 90.00th=[ 255], 95.00th=[ 265], 00:41:58.992 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:41:58.992 | 99.99th=[41681] 00:41:58.992 write: IOPS=1011, BW=4047KiB/s (4145kB/s)(4096KiB/1012msec); 0 zone resets 00:41:58.992 slat (nsec): min=9388, max=42606, avg=10480.75, stdev=1794.79 00:41:58.992 clat (usec): min=132, max=453, avg=194.14, stdev=31.39 00:41:58.992 lat (usec): min=142, max=463, avg=204.63, 
stdev=31.74 00:41:58.992 clat percentiles (usec): 00:41:58.992 | 1.00th=[ 139], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 176], 00:41:58.992 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 190], 60.00th=[ 196], 00:41:58.992 | 70.00th=[ 202], 80.00th=[ 219], 90.00th=[ 241], 95.00th=[ 245], 00:41:58.992 | 99.00th=[ 269], 99.50th=[ 277], 99.90th=[ 355], 99.95th=[ 453], 00:41:58.992 | 99.99th=[ 453] 00:41:58.992 bw ( KiB/s): min= 8192, max= 8192, per=51.65%, avg=8192.00, stdev= 0.00, samples=1 00:41:58.992 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:41:58.992 lat (usec) : 250=88.00%, 500=10.93%, 750=0.06% 00:41:58.992 lat (msec) : 50=1.01% 00:41:58.992 cpu : usr=0.99%, sys=1.29%, ctx=1585, majf=0, minf=1 00:41:58.992 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:58.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.992 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.992 issued rwts: total=559,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:58.992 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:58.992 job1: (groupid=0, jobs=1): err= 0: pid=652554: Mon Dec 16 13:03:24 2024 00:41:58.992 read: IOPS=21, BW=87.0KiB/s (89.1kB/s)(88.0KiB/1011msec) 00:41:58.992 slat (nsec): min=10621, max=25050, avg=22781.18, stdev=2843.88 00:41:58.992 clat (usec): min=40850, max=41090, avg=40970.70, stdev=68.58 00:41:58.992 lat (usec): min=40873, max=41101, avg=40993.48, stdev=67.62 00:41:58.992 clat percentiles (usec): 00:41:58.992 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:41:58.992 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:58.992 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:58.992 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:41:58.992 | 99.99th=[41157] 00:41:58.992 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:41:58.992 slat (nsec): min=9611, max=37032, avg=12015.37, stdev=2364.60 00:41:58.992 clat (usec): min=143, max=545, avg=197.03, stdev=31.64 00:41:58.992 lat (usec): min=156, max=582, avg=209.05, stdev=32.42 00:41:58.992 clat percentiles (usec): 00:41:58.992 | 1.00th=[ 159], 5.00th=[ 169], 10.00th=[ 176], 20.00th=[ 180], 00:41:58.992 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 188], 60.00th=[ 194], 00:41:58.992 | 70.00th=[ 206], 80.00th=[ 217], 90.00th=[ 227], 95.00th=[ 239], 00:41:58.992 | 99.00th=[ 285], 99.50th=[ 351], 99.90th=[ 545], 99.95th=[ 545], 00:41:58.992 | 99.99th=[ 545] 00:41:58.992 bw ( KiB/s): min= 4096, max= 4096, per=25.83%, avg=4096.00, stdev= 0.00, samples=1 00:41:58.992 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:41:58.992 lat (usec) : 250=92.88%, 500=2.81%, 750=0.19% 00:41:58.992 lat (msec) : 50=4.12% 00:41:58.992 cpu : usr=0.59%, sys=0.79%, ctx=535, majf=0, minf=1 00:41:58.992 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:58.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.992 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.992 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:58.992 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:58.992 job2: (groupid=0, jobs=1): err= 0: pid=652555: Mon Dec 16 13:03:24 2024 00:41:58.992 read: IOPS=1494, BW=5979KiB/s (6122kB/s)(6176KiB/1033msec) 00:41:58.992 slat (nsec): min=5090, max=25269, avg=7328.62, 
stdev=1291.13 00:41:58.992 clat (usec): min=191, max=42106, avg=437.75, stdev=2954.93 00:41:58.992 lat (usec): min=199, max=42115, avg=445.08, stdev=2955.89 00:41:58.992 clat percentiles (usec): 00:41:58.992 | 1.00th=[ 196], 5.00th=[ 200], 10.00th=[ 202], 20.00th=[ 204], 00:41:58.992 | 30.00th=[ 206], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 229], 00:41:58.992 | 70.00th=[ 245], 80.00th=[ 247], 90.00th=[ 251], 95.00th=[ 255], 00:41:58.992 | 99.00th=[ 347], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:41:58.992 | 99.99th=[42206] 00:41:58.992 write: IOPS=1982, BW=7930KiB/s (8121kB/s)(8192KiB/1033msec); 0 zone resets 00:41:58.992 slat (nsec): min=6869, max=50770, avg=9853.35, stdev=2189.00 00:41:58.992 clat (usec): min=126, max=436, avg=155.16, stdev=22.21 00:41:58.992 lat (usec): min=136, max=444, avg=165.01, stdev=21.97 00:41:58.992 clat percentiles (usec): 00:41:58.992 | 1.00th=[ 131], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 139], 00:41:58.992 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 147], 60.00th=[ 155], 00:41:58.992 | 70.00th=[ 163], 80.00th=[ 176], 90.00th=[ 188], 95.00th=[ 194], 00:41:58.992 | 99.00th=[ 208], 99.50th=[ 215], 99.90th=[ 302], 99.95th=[ 367], 00:41:58.992 | 99.99th=[ 437] 00:41:58.992 bw ( KiB/s): min= 4688, max=11696, per=51.65%, avg=8192.00, stdev=4955.40, samples=2 00:41:58.992 iops : min= 1172, max= 2924, avg=2048.00, stdev=1238.85, samples=2 00:41:58.992 lat (usec) : 250=93.90%, 500=5.85%, 750=0.03% 00:41:58.992 lat (msec) : 50=0.22% 00:41:58.992 cpu : usr=1.65%, sys=3.10%, ctx=3592, majf=0, minf=2 00:41:58.993 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:58.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.993 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.993 issued rwts: total=1544,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:58.993 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:58.993 job3: (groupid=0, jobs=1): err= 0: pid=652556: Mon Dec 16 13:03:24 2024 00:41:58.993 read: IOPS=24, BW=98.9KiB/s (101kB/s)(100KiB/1011msec) 00:41:58.993 slat (nsec): min=10671, max=27664, avg=16516.60, stdev=5276.93 00:41:58.993 clat (usec): min=294, max=41096, avg=36071.15, stdev=13466.04 00:41:58.993 lat (usec): min=318, max=41111, avg=36087.66, stdev=13464.24 00:41:58.993 clat percentiles (usec): 00:41:58.993 | 1.00th=[ 293], 5.00th=[ 338], 10.00th=[ 396], 20.00th=[40633], 00:41:58.993 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:58.993 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:58.993 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:41:58.993 | 99.99th=[41157] 00:41:58.993 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:41:58.993 slat (nsec): min=9995, max=39622, avg=15881.84, stdev=4994.21 00:41:58.993 clat (usec): min=145, max=478, avg=191.72, stdev=31.77 00:41:58.993 lat (usec): min=156, max=492, avg=207.60, stdev=30.95 00:41:58.993 clat percentiles (usec): 00:41:58.993 | 1.00th=[ 153], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 169], 00:41:58.993 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 188], 00:41:58.993 | 70.00th=[ 200], 80.00th=[ 215], 90.00th=[ 229], 95.00th=[ 241], 00:41:58.993 | 99.00th=[ 289], 99.50th=[ 343], 99.90th=[ 478], 99.95th=[ 478], 00:41:58.993 | 99.99th=[ 478] 00:41:58.993 bw ( KiB/s): min= 4096, max= 4096, per=25.83%, avg=4096.00, stdev= 0.00, samples=1 00:41:58.993 iops : min= 1024, max= 
1024, avg=1024.00, stdev= 0.00, samples=1 00:41:58.993 lat (usec) : 250=92.36%, 500=3.54% 00:41:58.993 lat (msec) : 50=4.10% 00:41:58.993 cpu : usr=0.20%, sys=0.89%, ctx=538, majf=0, minf=1 00:41:58.993 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:58.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.993 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.993 issued rwts: total=25,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:58.993 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:58.993 00:41:58.993 Run status group 0 (all jobs): 00:41:58.993 READ: bw=8325KiB/s (8525kB/s), 87.0KiB/s-5979KiB/s (89.1kB/s-6122kB/s), io=8600KiB (8806kB), run=1011-1033msec 00:41:58.993 WRITE: bw=15.5MiB/s (16.2MB/s), 2026KiB/s-7930KiB/s (2074kB/s-8121kB/s), io=16.0MiB (16.8MB), run=1011-1033msec 00:41:58.993 00:41:58.993 Disk stats (read/write): 00:41:58.993 nvme0n1: ios=572/1024, merge=0/0, ticks=1524/195, in_queue=1719, util=89.48% 00:41:58.993 nvme0n2: ios=53/512, merge=0/0, ticks=999/93, in_queue=1092, util=96.95% 00:41:58.993 nvme0n3: ios=1596/2048, merge=0/0, ticks=635/314, in_queue=949, util=94.46% 00:41:58.993 nvme0n4: ios=83/512, merge=0/0, ticks=1338/93, in_queue=1431, util=98.63% 00:41:58.993 13:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:41:58.993 [global] 00:41:58.993 thread=1 00:41:58.993 invalidate=1 00:41:58.993 rw=write 00:41:58.993 time_based=1 00:41:58.993 runtime=1 00:41:58.993 ioengine=libaio 00:41:58.993 direct=1 00:41:58.993 bs=4096 00:41:58.993 iodepth=128 00:41:58.993 norandommap=0 00:41:58.993 numjobs=1 00:41:58.993 00:41:58.993 verify_dump=1 00:41:58.993 verify_backlog=512 00:41:58.993 verify_state_save=0 00:41:58.993 do_verify=1 00:41:58.993 verify=crc32c-intel 00:41:58.993 [job0] 00:41:58.993 filename=/dev/nvme0n1 00:41:58.993 [job1] 00:41:58.993 filename=/dev/nvme0n2 00:41:58.993 [job2] 00:41:58.993 filename=/dev/nvme0n3 00:41:58.993 [job3] 00:41:58.993 filename=/dev/nvme0n4 00:41:58.993 Could not set queue depth (nvme0n1) 00:41:58.993 Could not set queue depth (nvme0n2) 00:41:58.993 Could not set queue depth (nvme0n3) 00:41:58.993 Could not set queue depth (nvme0n4) 00:41:59.259 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:59.259 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:59.259 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:59.259 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:59.259 fio-3.35 00:41:59.259 Starting 4 threads 00:42:00.632 00:42:00.632 job0: (groupid=0, jobs=1): err= 0: pid=652921: Mon Dec 16 13:03:26 2024 00:42:00.632 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:42:00.632 slat (nsec): min=1068, max=12650k, avg=123487.85, stdev=793494.66 00:42:00.632 clat (usec): min=5005, max=34081, avg=16136.79, stdev=5765.17 00:42:00.632 lat (usec): min=5010, max=34093, avg=16260.28, stdev=5827.01 00:42:00.632 clat percentiles (usec): 00:42:00.632 | 1.00th=[ 5604], 5.00th=[ 7046], 10.00th=[ 9503], 20.00th=[11076], 00:42:00.632 | 30.00th=[12125], 40.00th=[13960], 50.00th=[15926], 60.00th=[17695], 00:42:00.632 | 70.00th=[20055], 
80.00th=[20841], 90.00th=[22938], 95.00th=[25560], 00:42:00.632 | 99.00th=[31327], 99.50th=[31589], 99.90th=[31851], 99.95th=[31851], 00:42:00.632 | 99.99th=[34341] 00:42:00.632 write: IOPS=3699, BW=14.4MiB/s (15.2MB/s)(14.5MiB/1004msec); 0 zone resets 00:42:00.632 slat (nsec): min=1973, max=47640k, avg=133905.70, stdev=1085622.61 00:42:00.632 clat (usec): min=1536, max=57992, avg=16503.76, stdev=8409.14 00:42:00.632 lat (usec): min=1545, max=70250, avg=16637.67, stdev=8511.83 00:42:00.632 clat percentiles (usec): 00:42:00.632 | 1.00th=[ 4883], 5.00th=[ 6390], 10.00th=[ 8029], 20.00th=[ 9503], 00:42:00.632 | 30.00th=[10421], 40.00th=[12387], 50.00th=[14353], 60.00th=[17433], 00:42:00.632 | 70.00th=[19006], 80.00th=[21627], 90.00th=[29492], 95.00th=[35914], 00:42:00.632 | 99.00th=[40109], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:42:00.632 | 99.99th=[57934] 00:42:00.632 bw ( KiB/s): min=12560, max=16176, per=19.58%, avg=14368.00, stdev=2556.90, samples=2 00:42:00.632 iops : min= 3140, max= 4044, avg=3592.00, stdev=639.22, samples=2 00:42:00.632 lat (msec) : 2=0.03%, 4=0.12%, 10=19.05%, 20=51.58%, 50=29.21% 00:42:00.632 lat (msec) : 100=0.01% 00:42:00.632 cpu : usr=2.89%, sys=4.29%, ctx=339, majf=0, minf=1 00:42:00.632 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:42:00.632 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:00.632 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:00.632 issued rwts: total=3584,3714,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:00.632 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:00.632 job1: (groupid=0, jobs=1): err= 0: pid=652922: Mon Dec 16 13:03:26 2024 00:42:00.632 read: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec) 00:42:00.632 slat (nsec): min=1351, max=44149k, avg=86672.12, stdev=718577.69 00:42:00.632 clat (usec): min=4229, max=58614, avg=11242.34, stdev=6577.50 00:42:00.632 lat (usec): min=4239, max=58622, avg=11329.01, stdev=6612.04 00:42:00.632 clat percentiles (usec): 00:42:00.632 | 1.00th=[ 5276], 5.00th=[ 7767], 10.00th=[ 8291], 20.00th=[ 9503], 00:42:00.632 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10290], 60.00th=[10421], 00:42:00.632 | 70.00th=[10814], 80.00th=[11338], 90.00th=[12387], 95.00th=[13829], 00:42:00.632 | 99.00th=[52691], 99.50th=[56886], 99.90th=[58459], 99.95th=[58459], 00:42:00.632 | 99.99th=[58459] 00:42:00.632 write: IOPS=5977, BW=23.3MiB/s (24.5MB/s)(23.4MiB/1004msec); 0 zone resets 00:42:00.632 slat (usec): min=2, max=7712, avg=77.56, stdev=400.25 00:42:00.632 clat (usec): min=401, max=58580, avg=10673.27, stdev=2108.63 00:42:00.632 lat (usec): min=406, max=59149, avg=10750.83, stdev=2133.45 00:42:00.632 clat percentiles (usec): 00:42:00.632 | 1.00th=[ 5276], 5.00th=[ 7701], 10.00th=[ 8455], 20.00th=[ 9503], 00:42:00.632 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10552], 60.00th=[10814], 00:42:00.632 | 70.00th=[11469], 80.00th=[12125], 90.00th=[12518], 95.00th=[13435], 00:42:00.632 | 99.00th=[16319], 99.50th=[16909], 99.90th=[17957], 99.95th=[18220], 00:42:00.632 | 99.99th=[58459] 00:42:00.632 bw ( KiB/s): min=22408, max=24576, per=32.01%, avg=23492.00, stdev=1533.01, samples=2 00:42:00.632 iops : min= 5602, max= 6144, avg=5873.00, stdev=383.25, samples=2 00:42:00.632 lat (usec) : 500=0.05% 00:42:00.632 lat (msec) : 2=0.08%, 4=0.28%, 10=30.40%, 20=68.10%, 100=1.09% 00:42:00.632 cpu : usr=3.39%, sys=4.29%, ctx=748, majf=0, minf=2 00:42:00.632 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 
32=0.3%, >=64=99.5% 00:42:00.632 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:00.632 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:00.632 issued rwts: total=5632,6001,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:00.632 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:00.632 job2: (groupid=0, jobs=1): err= 0: pid=652923: Mon Dec 16 13:03:26 2024 00:42:00.632 read: IOPS=3615, BW=14.1MiB/s (14.8MB/s)(14.2MiB/1003msec) 00:42:00.632 slat (nsec): min=1476, max=17224k, avg=130577.00, stdev=835023.03 00:42:00.632 clat (usec): min=1013, max=71206, avg=17214.14, stdev=10445.95 00:42:00.632 lat (usec): min=4621, max=72187, avg=17344.71, stdev=10497.64 00:42:00.632 clat percentiles (usec): 00:42:00.632 | 1.00th=[ 8094], 5.00th=[10290], 10.00th=[10683], 20.00th=[11469], 00:42:00.632 | 30.00th=[11731], 40.00th=[11994], 50.00th=[13173], 60.00th=[13698], 00:42:00.632 | 70.00th=[15139], 80.00th=[21890], 90.00th=[32113], 95.00th=[41681], 00:42:00.632 | 99.00th=[57934], 99.50th=[63701], 99.90th=[70779], 99.95th=[70779], 00:42:00.632 | 99.99th=[70779] 00:42:00.632 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:42:00.632 slat (usec): min=2, max=16166, avg=123.24, stdev=759.38 00:42:00.632 clat (usec): min=6448, max=72905, avg=15718.27, stdev=10161.87 00:42:00.632 lat (usec): min=6459, max=72908, avg=15841.51, stdev=10232.11 00:42:00.632 clat percentiles (usec): 00:42:00.632 | 1.00th=[ 9634], 5.00th=[11076], 10.00th=[11338], 20.00th=[11469], 00:42:00.632 | 30.00th=[11600], 40.00th=[11863], 50.00th=[11994], 60.00th=[12649], 00:42:00.632 | 70.00th=[13698], 80.00th=[14877], 90.00th=[28967], 95.00th=[36963], 00:42:00.632 | 99.00th=[70779], 99.50th=[71828], 99.90th=[72877], 99.95th=[72877], 00:42:00.632 | 99.99th=[72877] 00:42:00.632 bw ( KiB/s): min=12288, max=19792, per=21.86%, avg=16040.00, stdev=5306.13, samples=2 00:42:00.632 iops : min= 3072, max= 4948, avg=4010.00, stdev=1326.53, samples=2 00:42:00.632 lat (msec) : 2=0.01%, 10=2.36%, 20=81.00%, 50=14.88%, 100=1.75% 00:42:00.632 cpu : usr=3.49%, sys=4.49%, ctx=386, majf=0, minf=1 00:42:00.632 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:42:00.632 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:00.632 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:00.633 issued rwts: total=3626,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:00.633 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:00.633 job3: (groupid=0, jobs=1): err= 0: pid=652924: Mon Dec 16 13:03:26 2024 00:42:00.633 read: IOPS=4476, BW=17.5MiB/s (18.3MB/s)(17.5MiB/1003msec) 00:42:00.633 slat (nsec): min=1177, max=9688.7k, avg=104982.88, stdev=651253.22 00:42:00.633 clat (usec): min=2527, max=30401, avg=13882.53, stdev=4408.67 00:42:00.633 lat (usec): min=2532, max=30426, avg=13987.51, stdev=4458.67 00:42:00.633 clat percentiles (usec): 00:42:00.633 | 1.00th=[ 4883], 5.00th=[ 8291], 10.00th=[ 8848], 20.00th=[10290], 00:42:00.633 | 30.00th=[11207], 40.00th=[12125], 50.00th=[13173], 60.00th=[14484], 00:42:00.633 | 70.00th=[15795], 80.00th=[17433], 90.00th=[21365], 95.00th=[22414], 00:42:00.633 | 99.00th=[25297], 99.50th=[26084], 99.90th=[26870], 99.95th=[27132], 00:42:00.633 | 99.99th=[30278] 00:42:00.633 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:42:00.633 slat (usec): min=2, max=9672, avg=106.02, stdev=634.25 00:42:00.633 clat (usec): min=3442, max=34715, 
avg=14045.21, stdev=5149.80 00:42:00.633 lat (usec): min=3457, max=34721, avg=14151.23, stdev=5197.66 00:42:00.633 clat percentiles (usec): 00:42:00.633 | 1.00th=[ 6652], 5.00th=[ 8225], 10.00th=[ 9110], 20.00th=[10290], 00:42:00.633 | 30.00th=[11076], 40.00th=[11469], 50.00th=[11731], 60.00th=[13042], 00:42:00.633 | 70.00th=[16319], 80.00th=[19530], 90.00th=[20579], 95.00th=[23200], 00:42:00.633 | 99.00th=[28967], 99.50th=[30278], 99.90th=[34866], 99.95th=[34866], 00:42:00.633 | 99.99th=[34866] 00:42:00.633 bw ( KiB/s): min=16384, max=20480, per=25.12%, avg=18432.00, stdev=2896.31, samples=2 00:42:00.633 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:42:00.633 lat (msec) : 4=0.24%, 10=17.81%, 20=67.86%, 50=14.09% 00:42:00.633 cpu : usr=3.59%, sys=4.29%, ctx=447, majf=0, minf=1 00:42:00.633 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:42:00.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:00.633 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:00.633 issued rwts: total=4490,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:00.633 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:00.633 00:42:00.633 Run status group 0 (all jobs): 00:42:00.633 READ: bw=67.4MiB/s (70.7MB/s), 13.9MiB/s-21.9MiB/s (14.6MB/s-23.0MB/s), io=67.7MiB (71.0MB), run=1003-1004msec 00:42:00.633 WRITE: bw=71.7MiB/s (75.1MB/s), 14.4MiB/s-23.3MiB/s (15.2MB/s-24.5MB/s), io=71.9MiB (75.4MB), run=1003-1004msec 00:42:00.633 00:42:00.633 Disk stats (read/write): 00:42:00.633 nvme0n1: ios=3094/3079, merge=0/0, ticks=33406/35779, in_queue=69185, util=96.09% 00:42:00.633 nvme0n2: ios=4640/5120, merge=0/0, ticks=23560/23755, in_queue=47315, util=86.15% 00:42:00.633 nvme0n3: ios=3074/3072, merge=0/0, ticks=17541/16873, in_queue=34414, util=89.74% 00:42:00.633 nvme0n4: ios=3823/4096, merge=0/0, ticks=34531/33585, in_queue=68116, util=96.30% 00:42:00.633 13:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:42:00.633 [global] 00:42:00.633 thread=1 00:42:00.633 invalidate=1 00:42:00.633 rw=randwrite 00:42:00.633 time_based=1 00:42:00.633 runtime=1 00:42:00.633 ioengine=libaio 00:42:00.633 direct=1 00:42:00.633 bs=4096 00:42:00.633 iodepth=128 00:42:00.633 norandommap=0 00:42:00.633 numjobs=1 00:42:00.633 00:42:00.633 verify_dump=1 00:42:00.633 verify_backlog=512 00:42:00.633 verify_state_save=0 00:42:00.633 do_verify=1 00:42:00.633 verify=crc32c-intel 00:42:00.633 [job0] 00:42:00.633 filename=/dev/nvme0n1 00:42:00.633 [job1] 00:42:00.633 filename=/dev/nvme0n2 00:42:00.633 [job2] 00:42:00.633 filename=/dev/nvme0n3 00:42:00.633 [job3] 00:42:00.633 filename=/dev/nvme0n4 00:42:00.633 Could not set queue depth (nvme0n1) 00:42:00.633 Could not set queue depth (nvme0n2) 00:42:00.633 Could not set queue depth (nvme0n3) 00:42:00.633 Could not set queue depth (nvme0n4) 00:42:00.633 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:00.633 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:00.633 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:00.633 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:00.633 
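The job file echoed above is what scripts/fio-wrapper generates for "-t randwrite -r 1": a one-second, time-based 4 KiB random-write pass at queue depth 128 against each of the four NVMe-oF namespaces, with CRC32C data verification. For reference, one of the four jobs maps to roughly the following standalone invocation; every option value is taken from the job file in the log, while collapsing it to a single job against /dev/nvme0n1 is an editorial simplification, not how the wrapper actually runs it:

    # Sketch of one equivalent standalone fio job (assumes /dev/nvme0n1 is an
    # attached NVMe-oF namespace, as in the log above; fio 3.x option syntax).
    fio --name=job0 --filename=/dev/nvme0n1 \
        --thread=1 --invalidate=1 --numjobs=1 \
        --rw=randwrite --time_based=1 --runtime=1 \
        --ioengine=libaio --direct=1 --bs=4096 --iodepth=128 \
        --verify=crc32c-intel --do_verify=1 --verify_dump=1 \
        --verify_backlog=512 --verify_state_save=0

With runtime=1 each job runs for only a second, which is why the per-job bandwidth figures in the results that follow are derived from so few samples (samples=2 in the bw lines).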
fio-3.35 00:42:00.633 Starting 4 threads 00:42:02.005 00:42:02.005 job0: (groupid=0, jobs=1): err= 0: pid=653282: Mon Dec 16 13:03:27 2024 00:42:02.005 read: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec) 00:42:02.005 slat (nsec): min=1191, max=21078k, avg=101471.45, stdev=902001.00 00:42:02.005 clat (usec): min=1213, max=81404, avg=13893.06, stdev=10820.49 00:42:02.005 lat (usec): min=1218, max=81413, avg=13994.53, stdev=10902.19 00:42:02.005 clat percentiles (usec): 00:42:02.006 | 1.00th=[ 1614], 5.00th=[ 2278], 10.00th=[ 3752], 20.00th=[ 7504], 00:42:02.006 | 30.00th=[ 7767], 40.00th=[ 9110], 50.00th=[11076], 60.00th=[15008], 00:42:02.006 | 70.00th=[16057], 80.00th=[18482], 90.00th=[23987], 95.00th=[29754], 00:42:02.006 | 99.00th=[70779], 99.50th=[80217], 99.90th=[81265], 99.95th=[81265], 00:42:02.006 | 99.99th=[81265] 00:42:02.006 write: IOPS=4824, BW=18.8MiB/s (19.8MB/s)(19.0MiB/1008msec); 0 zone resets 00:42:02.006 slat (nsec): min=1874, max=23196k, avg=85323.02, stdev=812947.57 00:42:02.006 clat (usec): min=414, max=81367, avg=13164.29, stdev=7440.95 00:42:02.006 lat (usec): min=436, max=81369, avg=13249.61, stdev=7487.85 00:42:02.006 clat percentiles (usec): 00:42:02.006 | 1.00th=[ 2442], 5.00th=[ 5211], 10.00th=[ 5932], 20.00th=[ 7701], 00:42:02.006 | 30.00th=[ 8094], 40.00th=[ 9372], 50.00th=[12125], 60.00th=[14091], 00:42:02.006 | 70.00th=[15664], 80.00th=[16450], 90.00th=[21627], 95.00th=[27657], 00:42:02.006 | 99.00th=[35390], 99.50th=[46400], 99.90th=[62129], 99.95th=[62129], 00:42:02.006 | 99.99th=[81265] 00:42:02.006 bw ( KiB/s): min=15312, max=22576, per=30.68%, avg=18944.00, stdev=5136.42, samples=2 00:42:02.006 iops : min= 3828, max= 5644, avg=4736.00, stdev=1284.11, samples=2 00:42:02.006 lat (usec) : 500=0.02%, 750=0.04%, 1000=0.08% 00:42:02.006 lat (msec) : 2=2.35%, 4=4.81%, 10=34.82%, 20=43.11%, 50=13.66% 00:42:02.006 lat (msec) : 100=1.09% 00:42:02.006 cpu : usr=2.88%, sys=5.56%, ctx=391, majf=0, minf=1 00:42:02.006 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:42:02.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:02.006 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:02.006 issued rwts: total=4608,4863,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:02.006 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:02.006 job1: (groupid=0, jobs=1): err= 0: pid=653283: Mon Dec 16 13:03:27 2024 00:42:02.006 read: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec) 00:42:02.006 slat (nsec): min=1034, max=26232k, avg=92936.76, stdev=752359.87 00:42:02.006 clat (usec): min=2699, max=32257, avg=10929.43, stdev=4334.11 00:42:02.006 lat (usec): min=2707, max=32262, avg=11022.36, stdev=4399.35 00:42:02.006 clat percentiles (usec): 00:42:02.006 | 1.00th=[ 5080], 5.00th=[ 6194], 10.00th=[ 7504], 20.00th=[ 7832], 00:42:02.006 | 30.00th=[ 8029], 40.00th=[ 8848], 50.00th=[ 9241], 60.00th=[10028], 00:42:02.006 | 70.00th=[12387], 80.00th=[14877], 90.00th=[16188], 95.00th=[16909], 00:42:02.006 | 99.00th=[27657], 99.50th=[30540], 99.90th=[32113], 99.95th=[32375], 00:42:02.006 | 99.99th=[32375] 00:42:02.006 write: IOPS=4256, BW=16.6MiB/s (17.4MB/s)(16.8MiB/1009msec); 0 zone resets 00:42:02.006 slat (nsec): min=1817, max=34060k, avg=135942.78, stdev=1353982.37 00:42:02.006 clat (msec): min=2, max=100, avg=18.35, stdev=21.95 00:42:02.006 lat (msec): min=2, max=100, avg=18.48, stdev=22.10 00:42:02.006 clat percentiles (msec): 00:42:02.006 | 1.00th=[ 4], 5.00th=[ 6], 10.00th=[ 7], 
20.00th=[ 8], 00:42:02.006 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 12], 00:42:02.006 | 70.00th=[ 16], 80.00th=[ 18], 90.00th=[ 51], 95.00th=[ 81], 00:42:02.006 | 99.00th=[ 100], 99.50th=[ 101], 99.90th=[ 102], 99.95th=[ 102], 00:42:02.006 | 99.99th=[ 102] 00:42:02.006 bw ( KiB/s): min=10224, max=23112, per=27.00%, avg=16668.00, stdev=9113.19, samples=2 00:42:02.006 iops : min= 2556, max= 5778, avg=4167.00, stdev=2278.30, samples=2 00:42:02.006 lat (msec) : 4=1.10%, 10=56.39%, 20=31.88%, 50=5.32%, 100=4.95% 00:42:02.006 lat (msec) : 250=0.37% 00:42:02.006 cpu : usr=2.78%, sys=4.46%, ctx=449, majf=0, minf=1 00:42:02.006 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:42:02.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:02.006 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:02.006 issued rwts: total=4096,4295,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:02.006 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:02.006 job2: (groupid=0, jobs=1): err= 0: pid=653285: Mon Dec 16 13:03:27 2024 00:42:02.006 read: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec) 00:42:02.006 slat (nsec): min=1175, max=30260k, avg=105450.71, stdev=955274.61 00:42:02.006 clat (usec): min=892, max=65823, avg=14329.90, stdev=7192.85 00:42:02.006 lat (usec): min=909, max=65829, avg=14435.35, stdev=7253.63 00:42:02.006 clat percentiles (usec): 00:42:02.006 | 1.00th=[ 2540], 5.00th=[ 6915], 10.00th=[ 8094], 20.00th=[ 9241], 00:42:02.006 | 30.00th=[ 9765], 40.00th=[11076], 50.00th=[12911], 60.00th=[14353], 00:42:02.006 | 70.00th=[15664], 80.00th=[17957], 90.00th=[24249], 95.00th=[30802], 00:42:02.006 | 99.00th=[39584], 99.50th=[40109], 99.90th=[40109], 99.95th=[51643], 00:42:02.006 | 99.99th=[65799] 00:42:02.006 write: IOPS=4731, BW=18.5MiB/s (19.4MB/s)(18.6MiB/1008msec); 0 zone resets 00:42:02.006 slat (usec): min=2, max=16690, avg=84.79, stdev=726.79 00:42:02.006 clat (usec): min=644, max=38494, avg=12674.28, stdev=6025.38 00:42:02.006 lat (usec): min=649, max=42684, avg=12759.07, stdev=6057.48 00:42:02.006 clat percentiles (usec): 00:42:02.006 | 1.00th=[ 2999], 5.00th=[ 5997], 10.00th=[ 6849], 20.00th=[ 8094], 00:42:02.006 | 30.00th=[ 8979], 40.00th=[ 9503], 50.00th=[11207], 60.00th=[12649], 00:42:02.006 | 70.00th=[13960], 80.00th=[16909], 90.00th=[21365], 95.00th=[24773], 00:42:02.006 | 99.00th=[30278], 99.50th=[30278], 99.90th=[30540], 99.95th=[31065], 00:42:02.006 | 99.99th=[38536] 00:42:02.006 bw ( KiB/s): min=16688, max=20480, per=30.10%, avg=18584.00, stdev=2681.35, samples=2 00:42:02.006 iops : min= 4172, max= 5120, avg=4646.00, stdev=670.34, samples=2 00:42:02.006 lat (usec) : 750=0.07%, 1000=0.01% 00:42:02.006 lat (msec) : 2=0.26%, 4=2.68%, 10=36.16%, 20=44.26%, 50=16.53% 00:42:02.006 lat (msec) : 100=0.03% 00:42:02.006 cpu : usr=3.97%, sys=6.95%, ctx=258, majf=0, minf=1 00:42:02.006 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:42:02.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:02.006 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:02.006 issued rwts: total=4608,4769,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:02.006 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:02.006 job3: (groupid=0, jobs=1): err= 0: pid=653291: Mon Dec 16 13:03:27 2024 00:42:02.006 read: IOPS=1514, BW=6059KiB/s (6205kB/s)(6144KiB/1014msec) 00:42:02.006 slat (usec): min=3, max=23376, avg=264.22, stdev=1789.22 
00:42:02.006 clat (msec): min=7, max=174, avg=29.16, stdev=24.47 00:42:02.006 lat (msec): min=7, max=174, avg=29.43, stdev=24.77 00:42:02.006 clat percentiles (msec): 00:42:02.006 | 1.00th=[ 8], 5.00th=[ 14], 10.00th=[ 15], 20.00th=[ 19], 00:42:02.006 | 30.00th=[ 22], 40.00th=[ 22], 50.00th=[ 25], 60.00th=[ 25], 00:42:02.006 | 70.00th=[ 26], 80.00th=[ 33], 90.00th=[ 39], 95.00th=[ 79], 00:42:02.006 | 99.00th=[ 157], 99.50th=[ 165], 99.90th=[ 176], 99.95th=[ 176], 00:42:02.006 | 99.99th=[ 176] 00:42:02.006 write: IOPS=1701, BW=6805KiB/s (6968kB/s)(6900KiB/1014msec); 0 zone resets 00:42:02.006 slat (usec): min=4, max=17466, avg=336.99, stdev=1663.97 00:42:02.006 clat (usec): min=1666, max=174933, avg=48584.52, stdev=43815.31 00:42:02.006 lat (usec): min=1677, max=174956, avg=48921.52, stdev=44054.92 00:42:02.006 clat percentiles (msec): 00:42:02.006 | 1.00th=[ 9], 5.00th=[ 9], 10.00th=[ 12], 20.00th=[ 16], 00:42:02.006 | 30.00th=[ 18], 40.00th=[ 19], 50.00th=[ 25], 60.00th=[ 35], 00:42:02.006 | 70.00th=[ 72], 80.00th=[ 96], 90.00th=[ 113], 95.00th=[ 128], 00:42:02.006 | 99.00th=[ 171], 99.50th=[ 176], 99.90th=[ 176], 99.95th=[ 176], 00:42:02.006 | 99.99th=[ 176] 00:42:02.006 bw ( KiB/s): min= 3128, max= 9648, per=10.35%, avg=6388.00, stdev=4610.34, samples=2 00:42:02.006 iops : min= 782, max= 2412, avg=1597.00, stdev=1152.58, samples=2 00:42:02.006 lat (msec) : 2=0.06%, 10=5.03%, 20=27.35%, 50=46.27%, 100=10.40% 00:42:02.006 lat (msec) : 250=10.89% 00:42:02.006 cpu : usr=1.48%, sys=3.06%, ctx=131, majf=0, minf=1 00:42:02.006 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:42:02.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:02.006 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:02.006 issued rwts: total=1536,1725,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:02.006 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:02.006 00:42:02.006 Run status group 0 (all jobs): 00:42:02.006 READ: bw=57.2MiB/s (60.0MB/s), 6059KiB/s-17.9MiB/s (6205kB/s-18.7MB/s), io=58.0MiB (60.8MB), run=1008-1014msec 00:42:02.006 WRITE: bw=60.3MiB/s (63.2MB/s), 6805KiB/s-18.8MiB/s (6968kB/s-19.8MB/s), io=61.1MiB (64.1MB), run=1008-1014msec 00:42:02.006 00:42:02.006 Disk stats (read/write): 00:42:02.006 nvme0n1: ios=3634/3902, merge=0/0, ticks=48993/46600, in_queue=95593, util=80.46% 00:42:02.006 nvme0n2: ios=2591/2967, merge=0/0, ticks=30528/36873, in_queue=67401, util=96.49% 00:42:02.006 nvme0n3: ios=3747/4096, merge=0/0, ticks=48347/42301, in_queue=90648, util=100.00% 00:42:02.006 nvme0n4: ios=1024/1495, merge=0/0, ticks=22695/63605, in_queue=86300, util=88.98% 00:42:02.006 13:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:42:02.006 13:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=653514 00:42:02.006 13:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:42:02.006 13:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:42:02.006 [global] 00:42:02.006 thread=1 00:42:02.006 invalidate=1 00:42:02.006 rw=read 00:42:02.006 time_based=1 00:42:02.006 runtime=10 00:42:02.006 ioengine=libaio 00:42:02.006 direct=1 00:42:02.006 bs=4096 00:42:02.006 iodepth=1 00:42:02.006 norandommap=1 00:42:02.006 numjobs=1 00:42:02.006 00:42:02.006 [job0] 
00:42:02.006 filename=/dev/nvme0n1 00:42:02.006 [job1] 00:42:02.006 filename=/dev/nvme0n2 00:42:02.006 [job2] 00:42:02.006 filename=/dev/nvme0n3 00:42:02.006 [job3] 00:42:02.006 filename=/dev/nvme0n4 00:42:02.006 Could not set queue depth (nvme0n1) 00:42:02.006 Could not set queue depth (nvme0n2) 00:42:02.006 Could not set queue depth (nvme0n3) 00:42:02.006 Could not set queue depth (nvme0n4) 00:42:02.264 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:02.264 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:02.264 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:02.264 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:02.264 fio-3.35 00:42:02.264 Starting 4 threads 00:42:05.543 13:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:42:05.543 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=33009664, buflen=4096 00:42:05.543 fio: pid=653655, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:42:05.543 13:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:42:05.543 13:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:05.543 13:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:42:05.543 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=14700544, buflen=4096 00:42:05.543 fio: pid=653654, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:42:05.543 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=18341888, buflen=4096 00:42:05.543 fio: pid=653652, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:42:05.543 13:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:05.543 13:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:42:05.802 13:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:05.802 13:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:42:05.802 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=425984, buflen=4096 00:42:05.802 fio: pid=653653, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:42:05.802 00:42:05.802 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=653652: Mon Dec 16 13:03:31 2024 00:42:05.802 read: IOPS=1449, BW=5799KiB/s (5938kB/s)(17.5MiB/3089msec) 00:42:05.802 slat (usec): min=2, max=8779, avg= 
9.46, stdev=131.09 00:42:05.802 clat (usec): min=199, max=42001, avg=674.43, stdev=4134.88 00:42:05.802 lat (usec): min=207, max=49941, avg=683.89, stdev=4157.51 00:42:05.802 clat percentiles (usec): 00:42:05.802 | 1.00th=[ 208], 5.00th=[ 212], 10.00th=[ 217], 20.00th=[ 225], 00:42:05.802 | 30.00th=[ 237], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 249], 00:42:05.802 | 70.00th=[ 251], 80.00th=[ 255], 90.00th=[ 265], 95.00th=[ 326], 00:42:05.802 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[42206], 00:42:05.802 | 99.99th=[42206] 00:42:05.802 bw ( KiB/s): min= 96, max=15128, per=36.50%, avg=7142.40, stdev=6418.94, samples=5 00:42:05.802 iops : min= 24, max= 3782, avg=1785.60, stdev=1604.73, samples=5 00:42:05.802 lat (usec) : 250=63.61%, 500=35.19%, 750=0.04%, 1000=0.02% 00:42:05.802 lat (msec) : 2=0.02%, 4=0.02%, 10=0.02%, 50=1.05% 00:42:05.802 cpu : usr=0.32%, sys=1.42%, ctx=4484, majf=0, minf=1 00:42:05.802 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:05.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:05.802 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:05.802 issued rwts: total=4479,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:05.802 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:05.802 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=653653: Mon Dec 16 13:03:31 2024 00:42:05.802 read: IOPS=31, BW=125KiB/s (128kB/s)(416KiB/3318msec) 00:42:05.802 slat (usec): min=8, max=12430, avg=135.89, stdev=1211.43 00:42:05.802 clat (usec): min=202, max=42274, avg=31557.59, stdev=17238.77 00:42:05.802 lat (usec): min=213, max=53702, avg=31694.14, stdev=17347.62 00:42:05.802 clat percentiles (usec): 00:42:05.802 | 1.00th=[ 208], 5.00th=[ 223], 10.00th=[ 229], 20.00th=[ 258], 00:42:05.802 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:42:05.802 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:42:05.802 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:42:05.802 | 99.99th=[42206] 00:42:05.802 bw ( KiB/s): min= 96, max= 184, per=0.62%, avg=121.83, stdev=35.97, samples=6 00:42:05.802 iops : min= 24, max= 46, avg=30.33, stdev= 8.89, samples=6 00:42:05.802 lat (usec) : 250=19.05%, 500=3.81% 00:42:05.802 lat (msec) : 50=76.19% 00:42:05.802 cpu : usr=0.12%, sys=0.00%, ctx=110, majf=0, minf=2 00:42:05.802 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:05.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:05.802 complete : 0=0.9%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:05.802 issued rwts: total=105,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:05.802 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:05.802 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=653654: Mon Dec 16 13:03:31 2024 00:42:05.802 read: IOPS=1230, BW=4921KiB/s (5040kB/s)(14.0MiB/2917msec) 00:42:05.802 slat (usec): min=2, max=11367, avg=11.00, stdev=189.61 00:42:05.802 clat (usec): min=183, max=41973, avg=794.31, stdev=4632.84 00:42:05.802 lat (usec): min=191, max=52674, avg=805.31, stdev=4665.38 00:42:05.802 clat percentiles (usec): 00:42:05.802 | 1.00th=[ 210], 5.00th=[ 221], 10.00th=[ 225], 20.00th=[ 229], 00:42:05.802 | 30.00th=[ 231], 40.00th=[ 235], 50.00th=[ 241], 60.00th=[ 249], 00:42:05.802 | 70.00th=[ 265], 80.00th=[ 285], 
90.00th=[ 330], 95.00th=[ 351], 00:42:05.802 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:42:05.802 | 99.99th=[42206] 00:42:05.802 bw ( KiB/s): min= 96, max=13120, per=29.15%, avg=5704.00, stdev=5513.15, samples=5 00:42:05.802 iops : min= 24, max= 3280, avg=1426.00, stdev=1378.29, samples=5 00:42:05.802 lat (usec) : 250=60.28%, 500=38.11%, 750=0.11%, 1000=0.06% 00:42:05.802 lat (msec) : 2=0.06%, 4=0.03%, 20=0.03%, 50=1.31% 00:42:05.802 cpu : usr=0.51%, sys=1.03%, ctx=3593, majf=0, minf=2 00:42:05.802 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:05.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:05.802 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:05.802 issued rwts: total=3590,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:05.802 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:05.802 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=653655: Mon Dec 16 13:03:31 2024 00:42:05.802 read: IOPS=2971, BW=11.6MiB/s (12.2MB/s)(31.5MiB/2712msec) 00:42:05.802 slat (nsec): min=5587, max=32548, avg=7427.98, stdev=1347.02 00:42:05.802 clat (usec): min=191, max=41536, avg=324.75, stdev=1673.20 00:42:05.802 lat (usec): min=197, max=41560, avg=332.18, stdev=1673.75 00:42:05.802 clat percentiles (usec): 00:42:05.802 | 1.00th=[ 215], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 235], 00:42:05.802 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 249], 60.00th=[ 251], 00:42:05.802 | 70.00th=[ 253], 80.00th=[ 258], 90.00th=[ 285], 95.00th=[ 334], 00:42:05.802 | 99.00th=[ 412], 99.50th=[ 437], 99.90th=[41157], 99.95th=[41157], 00:42:05.802 | 99.99th=[41681] 00:42:05.802 bw ( KiB/s): min= 888, max=15760, per=60.08%, avg=11755.20, stdev=6222.70, samples=5 00:42:05.802 iops : min= 222, max= 3940, avg=2938.80, stdev=1555.68, samples=5 00:42:05.802 lat (usec) : 250=57.05%, 500=42.68%, 750=0.04% 00:42:05.802 lat (msec) : 2=0.02%, 4=0.02%, 50=0.17% 00:42:05.802 cpu : usr=0.77%, sys=2.77%, ctx=8060, majf=0, minf=1 00:42:05.802 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:05.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:05.802 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:05.802 issued rwts: total=8060,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:05.802 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:05.802 00:42:05.802 Run status group 0 (all jobs): 00:42:05.802 READ: bw=19.1MiB/s (20.0MB/s), 125KiB/s-11.6MiB/s (128kB/s-12.2MB/s), io=63.4MiB (66.5MB), run=2712-3318msec 00:42:05.802 00:42:05.802 Disk stats (read/write): 00:42:05.802 nvme0n1: ios=4514/0, merge=0/0, ticks=3914/0, in_queue=3914, util=98.60% 00:42:05.802 nvme0n2: ios=137/0, merge=0/0, ticks=4156/0, in_queue=4156, util=98.61% 00:42:05.802 nvme0n3: ios=3633/0, merge=0/0, ticks=2890/0, in_queue=2890, util=98.65% 00:42:05.802 nvme0n4: ios=7717/0, merge=0/0, ticks=2483/0, in_queue=2483, util=96.44% 00:42:06.060 13:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:06.060 13:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:42:06.318 13:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 
-- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:06.318 13:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:42:06.575 13:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:06.575 13:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:42:06.575 13:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:06.575 13:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:42:06.833 13:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:42:06.833 13:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 653514 00:42:06.833 13:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:42:06.833 13:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:42:07.091 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:42:07.091 13:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:42:07.091 13:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:42:07.091 13:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:42:07.091 13:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:07.091 13:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:42:07.091 13:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:07.091 13:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:42:07.091 13:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:42:07.091 13:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:42:07.091 nvmf hotplug test: fio failed as expected 00:42:07.091 13:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:07.091 13:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:42:07.091 13:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:42:07.091 13:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:42:07.091 
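With the 10-second fio read job still in flight, the bdev_raid_delete and bdev_malloc_delete RPCs above hot-remove the backing bdevs, fio reports "Operation not supported" on each namespace as expected, and the harness then disconnects the initiator and waits for the namespaces to disappear before cleaning up the verify-state files. The disconnect check works by grepping lsblk output for the subsystem serial; a minimal standalone sketch of the same pattern is shown below, where the retry bound and sleep interval are illustrative assumptions rather than values taken from the common.sh helper:

    # Sketch of the disconnect-then-verify pattern (assumes nvme-cli is
    # installed and the controller was connected as nqn.2016-06.io.spdk:cnode1,
    # with serial SPDKISFASTANDAWESOME, per the log above).
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    for i in $(seq 1 20); do        # retry bound: illustrative assumption
        lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME || break
        sleep 1
    done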
13:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:42:07.091 13:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:42:07.091 13:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:42:07.091 13:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:42:07.091 13:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:07.091 13:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:42:07.091 13:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:07.091 13:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:07.091 rmmod nvme_tcp 00:42:07.349 rmmod nvme_fabrics 00:42:07.349 rmmod nvme_keyring 00:42:07.349 13:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:07.349 13:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:42:07.349 13:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:42:07.349 13:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@513 -- # '[' -n 650978 ']' 00:42:07.349 13:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@514 -- # killprocess 650978 00:42:07.349 13:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 650978 ']' 00:42:07.349 13:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 650978 00:42:07.350 13:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:42:07.350 13:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:07.350 13:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 650978 00:42:07.350 13:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:07.350 13:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:07.350 13:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 650978' 00:42:07.350 killing process with pid 650978 00:42:07.350 13:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 650978 00:42:07.350 13:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 650978 00:42:07.609 13:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:42:07.609 13:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:42:07.609 13:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:42:07.609 13:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # 
iptr 00:42:07.609 13:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-save 00:42:07.609 13:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:42:07.609 13:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-restore 00:42:07.609 13:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:07.609 13:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:07.609 13:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:07.609 13:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:07.609 13:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:09.515 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:09.515 00:42:09.515 real 0m25.798s 00:42:09.515 user 1m31.200s 00:42:09.515 sys 0m11.406s 00:42:09.515 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:09.515 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:42:09.515 ************************************ 00:42:09.515 END TEST nvmf_fio_target 00:42:09.515 ************************************ 00:42:09.515 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:42:09.515 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:42:09.515 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:09.515 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:09.515 ************************************ 00:42:09.515 START TEST nvmf_bdevio 00:42:09.515 ************************************ 00:42:09.515 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:42:09.774 * Looking for test storage... 
00:42:09.774 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:09.774 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:42:09.774 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:42:09.774 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:42:09.774 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:42:09.774 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:09.774 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:09.774 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:09.774 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:42:09.774 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:42:09.774 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:42:09.774 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:42:09.774 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:42:09.774 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:42:09.774 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:42:09.774 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:09.774 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:42:09.774 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:42:09.774 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:09.774 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:09.774 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:42:09.774 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:42:09.774 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:09.774 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:42:09.774 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:42:09.774 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:42:09.774 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:42:09.774 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:09.774 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:42:09.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:09.775 --rc genhtml_branch_coverage=1 00:42:09.775 --rc genhtml_function_coverage=1 00:42:09.775 --rc genhtml_legend=1 00:42:09.775 --rc geninfo_all_blocks=1 00:42:09.775 --rc geninfo_unexecuted_blocks=1 00:42:09.775 00:42:09.775 ' 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:42:09.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:09.775 --rc genhtml_branch_coverage=1 00:42:09.775 --rc genhtml_function_coverage=1 00:42:09.775 --rc genhtml_legend=1 00:42:09.775 --rc geninfo_all_blocks=1 00:42:09.775 --rc geninfo_unexecuted_blocks=1 00:42:09.775 00:42:09.775 ' 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:42:09.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:09.775 --rc genhtml_branch_coverage=1 00:42:09.775 --rc genhtml_function_coverage=1 00:42:09.775 --rc genhtml_legend=1 00:42:09.775 --rc geninfo_all_blocks=1 00:42:09.775 --rc geninfo_unexecuted_blocks=1 00:42:09.775 00:42:09.775 ' 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:42:09.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:09.775 --rc genhtml_branch_coverage=1 00:42:09.775 --rc genhtml_function_coverage=1 00:42:09.775 --rc genhtml_legend=1 00:42:09.775 --rc geninfo_all_blocks=1 00:42:09.775 --rc geninfo_unexecuted_blocks=1 00:42:09.775 00:42:09.775 ' 00:42:09.775 13:03:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:09.775 13:03:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@472 -- # prepare_net_devs 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@434 -- # local -g is_hw=no 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@436 -- # remove_spdk_ns 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:42:09.775 13:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:16.345 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:16.345 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:42:16.345 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:16.345 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:16.345 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:16.345 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:16.345 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:16.345 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:42:16.345 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:42:16.345 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:42:16.345 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:42:16.345 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:42:16.345 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:42:16.345 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:42:16.345 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:42:16.345 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:16.345 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:16.345 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:16.345 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:16.345 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:16.345 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:16.345 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:16.345 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:16.345 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:16.345 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:16.345 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:16.345 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:42:16.345 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:42:16.345 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:42:16.345 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:42:16.345 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:42:16.345 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:42:16.345 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:42:16.345 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:42:16.345 Found 0000:af:00.0 (0x8086 - 0x159b) 00:42:16.345 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:42:16.345 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:42:16.345 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:42:16.346 Found 0000:af:00.1 (0x8086 - 0x159b) 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ up == up ]] 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:42:16.346 Found net devices under 0000:af:00.0: cvl_0_0 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:16.346 
13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ up == up ]] 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:42:16.346 Found net devices under 0000:af:00.1: cvl_0_1 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # is_hw=yes 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:16.346 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:16.346 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.318 ms 00:42:16.346 00:42:16.346 --- 10.0.0.2 ping statistics --- 00:42:16.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:16.346 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:16.346 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:16.346 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.240 ms 00:42:16.346 00:42:16.346 --- 10.0.0.1 ping statistics --- 00:42:16.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:16.346 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # return 0 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@505 -- # 
nvmfpid=657804 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@506 -- # waitforlisten 657804 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 657804 ']' 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:16.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:16.346 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:16.346 [2024-12-16 13:03:41.711080] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:16.346 [2024-12-16 13:03:41.712043] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:42:16.346 [2024-12-16 13:03:41.712082] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:16.346 [2024-12-16 13:03:41.785681] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:16.347 [2024-12-16 13:03:41.826307] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:16.347 [2024-12-16 13:03:41.826347] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:16.347 [2024-12-16 13:03:41.826354] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:16.347 [2024-12-16 13:03:41.826360] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:16.347 [2024-12-16 13:03:41.826365] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:16.347 [2024-12-16 13:03:41.826479] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:42:16.347 [2024-12-16 13:03:41.826586] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:42:16.347 [2024-12-16 13:03:41.826611] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:42:16.347 [2024-12-16 13:03:41.826612] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:42:16.347 [2024-12-16 13:03:41.899625] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:42:16.347 [2024-12-16 13:03:41.900081] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
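The nvmftestinit trace above reduces to a small piece of network plumbing: the target-side port (cvl_0_0) is moved into a private namespace, both ends are addressed on 10.0.0.0/24, NVMe/TCP traffic to port 4420 is admitted, and reachability is checked in both directions before the target starts. A standalone sketch of the same setup, using the interface names and addresses from this run:

  ip netns add cvl_0_0_ns_spdk                 # private namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator address, host side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP
  ping -c 1 10.0.0.2                                                  # host -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> host

The harness's ipts wrapper additionally tags its iptables rule with an '-m comment' marker (SPDK_NVMF:...) so cleanup can find it later. With the data path verified, nvmf_tgt is launched inside the namespace with --interrupt-mode and core mask 0x78 (cores 3-6), which matches the four reactor notices above.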
00:42:16.347 [2024-12-16 13:03:41.900283] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:42:16.347 [2024-12-16 13:03:41.900664] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:16.347 [2024-12-16 13:03:41.901212] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:42:16.347 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:16.347 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:42:16.347 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:42:16.347 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:16.347 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:16.347 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:16.347 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:16.347 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:16.347 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:16.347 [2024-12-16 13:03:41.979434] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:16.347 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:16.347 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:42:16.347 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:16.347 13:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:16.347 Malloc0 00:42:16.347 13:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:16.347 13:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:42:16.347 13:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:16.347 13:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:16.347 13:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:16.347 13:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:42:16.347 13:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:16.347 13:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:16.347 13:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:42:16.347 13:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:16.347 13:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:16.347 13:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:16.347 [2024-12-16 13:03:42.051684] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:16.347 13:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:16.347 13:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:42:16.347 13:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:42:16.347 13:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@556 -- # config=() 00:42:16.347 13:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@556 -- # local subsystem config 00:42:16.347 13:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:42:16.347 13:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:42:16.347 { 00:42:16.347 "params": { 00:42:16.347 "name": "Nvme$subsystem", 00:42:16.347 "trtype": "$TEST_TRANSPORT", 00:42:16.347 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:16.347 "adrfam": "ipv4", 00:42:16.347 "trsvcid": "$NVMF_PORT", 00:42:16.347 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:16.347 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:16.347 "hdgst": ${hdgst:-false}, 00:42:16.347 "ddgst": ${ddgst:-false} 00:42:16.347 }, 00:42:16.347 "method": "bdev_nvme_attach_controller" 00:42:16.347 } 00:42:16.347 EOF 00:42:16.347 )") 00:42:16.347 13:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@578 -- # cat 00:42:16.347 13:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # jq . 00:42:16.347 13:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@581 -- # IFS=, 00:42:16.347 13:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:42:16.347 "params": { 00:42:16.347 "name": "Nvme1", 00:42:16.347 "trtype": "tcp", 00:42:16.347 "traddr": "10.0.0.2", 00:42:16.347 "adrfam": "ipv4", 00:42:16.347 "trsvcid": "4420", 00:42:16.347 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:16.347 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:16.347 "hdgst": false, 00:42:16.347 "ddgst": false 00:42:16.347 }, 00:42:16.347 "method": "bdev_nvme_attach_controller" 00:42:16.347 }' 00:42:16.347 [2024-12-16 13:03:42.105936] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
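The rpc_cmd sequence above builds the target in four steps: create the TCP transport, back it with a 64 MiB malloc bdev of 512-byte blocks, expose that bdev through subsystem cnode1, and listen on 10.0.0.2:4420. rpc_cmd is the harness wrapper around SPDK's rpc.py; the equivalent direct calls against this target would look like the following sketch, with $SPDK standing for the checkout used in this run:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC nvmf_create_transport -t tcp -o -u 8192      # flags exactly as traced above
  $RPC bdev_malloc_create 64 512 -b Malloc0         # 64 MiB RAM-backed bdev, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The bdevio binary is then pointed at that listener via gen_nvmf_target_json, which writes the bdev_nvme_attach_controller config shown above onto /dev/fd/62.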
00:42:16.347 [2024-12-16 13:03:42.105988] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid657876 ] 00:42:16.347 [2024-12-16 13:03:42.175921] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:42:16.347 [2024-12-16 13:03:42.217320] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:42:16.347 [2024-12-16 13:03:42.217426] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:42:16.347 [2024-12-16 13:03:42.217426] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:42:16.347 I/O targets: 00:42:16.347 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:42:16.347 00:42:16.347 00:42:16.347 CUnit - A unit testing framework for C - Version 2.1-3 00:42:16.347 http://cunit.sourceforge.net/ 00:42:16.347 00:42:16.347 00:42:16.347 Suite: bdevio tests on: Nvme1n1 00:42:16.604 Test: blockdev write read block ...passed 00:42:16.604 Test: blockdev write zeroes read block ...passed 00:42:16.604 Test: blockdev write zeroes read no split ...passed 00:42:16.604 Test: blockdev write zeroes read split ...passed 00:42:16.604 Test: blockdev write zeroes read split partial ...passed 00:42:16.604 Test: blockdev reset ...[2024-12-16 13:03:42.509527] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:16.604 [2024-12-16 13:03:42.509588] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f9c90 (9): Bad file descriptor 00:42:16.604 [2024-12-16 13:03:42.602134] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:42:16.604 passed 00:42:16.604 Test: blockdev write read 8 blocks ...passed 00:42:16.604 Test: blockdev write read size > 128k ...passed 00:42:16.604 Test: blockdev write read invalid size ...passed 00:42:16.862 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:42:16.862 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:42:16.862 Test: blockdev write read max offset ...passed 00:42:16.862 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:42:16.862 Test: blockdev writev readv 8 blocks ...passed 00:42:16.862 Test: blockdev writev readv 30 x 1block ...passed 00:42:16.862 Test: blockdev writev readv block ...passed 00:42:16.862 Test: blockdev writev readv size > 128k ...passed 00:42:16.862 Test: blockdev writev readv size > 128k in two iovs ...passed 00:42:16.862 Test: blockdev comparev and writev ...[2024-12-16 13:03:42.812046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:16.862 [2024-12-16 13:03:42.812077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:16.862 [2024-12-16 13:03:42.812091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:16.862 [2024-12-16 13:03:42.812098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:16.862 [2024-12-16 13:03:42.812415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:16.862 [2024-12-16 13:03:42.812428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:42:16.862 [2024-12-16 13:03:42.812440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:16.862 [2024-12-16 13:03:42.812447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:42:16.862 [2024-12-16 13:03:42.812736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:16.862 [2024-12-16 13:03:42.812747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:42:16.862 [2024-12-16 13:03:42.812759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:16.862 [2024-12-16 13:03:42.812765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:42:16.862 [2024-12-16 13:03:42.813058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:16.862 [2024-12-16 13:03:42.813068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:42:16.862 [2024-12-16 13:03:42.813079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:16.862 [2024-12-16 13:03:42.813087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:42:16.862 passed 00:42:16.862 Test: blockdev nvme passthru rw ...passed 00:42:16.862 Test: blockdev nvme passthru vendor specific ...[2024-12-16 13:03:42.895533] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:42:16.862 [2024-12-16 13:03:42.895549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:42:16.862 [2024-12-16 13:03:42.895663] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:42:16.862 [2024-12-16 13:03:42.895672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:42:16.862 [2024-12-16 13:03:42.895780] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:42:16.862 [2024-12-16 13:03:42.895790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:42:16.862 [2024-12-16 13:03:42.895897] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:42:16.862 [2024-12-16 13:03:42.895905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:42:16.862 passed 00:42:16.862 Test: blockdev nvme admin passthru ...passed 00:42:17.121 Test: blockdev copy ...passed 00:42:17.121 00:42:17.121 Run Summary: Type Total Ran Passed Failed Inactive 00:42:17.121 suites 1 1 n/a 0 0 00:42:17.121 tests 23 23 23 0 0 00:42:17.121 asserts 152 152 152 0 n/a 00:42:17.121 00:42:17.121 Elapsed time = 1.106 seconds 00:42:17.121 13:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:17.121 13:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:17.121 13:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:17.121 13:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:17.121 13:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:42:17.121 13:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:42:17.121 13:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # nvmfcleanup 00:42:17.121 13:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:42:17.121 13:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:17.121 13:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:42:17.121 13:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:17.121 13:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:17.121 rmmod nvme_tcp 00:42:17.121 rmmod nvme_fabrics 00:42:17.121 rmmod nvme_keyring 00:42:17.121 13:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
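Teardown mirrors setup: the subsystem is deleted, nvmftestfini unloads the nvme-tcp/nvme-fabrics/nvme-keyring modules (the rmmod lines above), and the lines that follow kill the target by its saved pid, strip the SPDK_NVMF-tagged iptables rules, and tear the namespace back down. Condensed into a sketch, reusing $RPC from the earlier one:

  $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                                        # 657804 in this run
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only harness-tagged rules
  ip netns delete cvl_0_0_ns_spdk                        # roughly what _remove_spdk_ns amounts to here
  ip -4 addr flush cvl_0_1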
00:42:17.121 13:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:42:17.121 13:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:42:17.121 13:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@513 -- # '[' -n 657804 ']' 00:42:17.121 13:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@514 -- # killprocess 657804 00:42:17.121 13:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 657804 ']' 00:42:17.121 13:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 657804 00:42:17.121 13:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:42:17.380 13:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:17.380 13:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 657804 00:42:17.380 13:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:42:17.380 13:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:42:17.380 13:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 657804' 00:42:17.380 killing process with pid 657804 00:42:17.380 13:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 657804 00:42:17.380 13:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 657804 00:42:17.380 13:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:42:17.380 13:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:42:17.380 13:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:42:17.380 13:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:42:17.380 13:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-save 00:42:17.380 13:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:42:17.380 13:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-restore 00:42:17.639 13:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:17.639 13:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:17.639 13:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:17.639 13:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:17.639 13:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:19.545 13:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:19.545 00:42:19.545 real 0m9.954s 00:42:19.545 user 0m8.645s 
00:42:19.545 sys 0m5.133s 00:42:19.545 13:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:19.545 13:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:19.545 ************************************ 00:42:19.545 END TEST nvmf_bdevio 00:42:19.545 ************************************ 00:42:19.545 13:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:42:19.545 00:42:19.545 real 4m29.809s 00:42:19.545 user 9m1.458s 00:42:19.545 sys 1m52.238s 00:42:19.545 13:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:19.545 13:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:19.545 ************************************ 00:42:19.545 END TEST nvmf_target_core_interrupt_mode 00:42:19.545 ************************************ 00:42:19.545 13:03:45 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:42:19.545 13:03:45 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:42:19.545 13:03:45 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:19.545 13:03:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:19.545 ************************************ 00:42:19.545 START TEST nvmf_interrupt 00:42:19.545 ************************************ 00:42:19.545 13:03:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:42:19.804 * Looking for test storage... 
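Each suite runs through the harness's run_test wrapper, which prints the START/END TEST banners and the real/user/sys timing seen above; nvmf_bdevio finished in just under 10 seconds of wall time. A minimal wrapper with the same observable behavior (a sketch, not the harness's exact implementation):

  run_test() {
      local name=$1; shift
      printf '************************************\nSTART TEST %s\n************************************\n' "$name"
      time "$@"                # the timed body produces the real/user/sys lines
      local rc=$?
      printf '************************************\nEND TEST %s\n************************************\n' "$name"
      return $rc
  }
  run_test nvmf_interrupt ./test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode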
00:42:19.804 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lcov --version 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:42:19.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:19.804 --rc genhtml_branch_coverage=1 00:42:19.804 --rc genhtml_function_coverage=1 00:42:19.804 --rc genhtml_legend=1 00:42:19.804 --rc geninfo_all_blocks=1 00:42:19.804 --rc geninfo_unexecuted_blocks=1 00:42:19.804 00:42:19.804 ' 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:42:19.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:19.804 --rc genhtml_branch_coverage=1 00:42:19.804 --rc genhtml_function_coverage=1 00:42:19.804 --rc genhtml_legend=1 00:42:19.804 --rc geninfo_all_blocks=1 00:42:19.804 --rc geninfo_unexecuted_blocks=1 00:42:19.804 00:42:19.804 ' 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:42:19.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:19.804 --rc genhtml_branch_coverage=1 00:42:19.804 --rc genhtml_function_coverage=1 00:42:19.804 --rc genhtml_legend=1 00:42:19.804 --rc geninfo_all_blocks=1 00:42:19.804 --rc geninfo_unexecuted_blocks=1 00:42:19.804 00:42:19.804 ' 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:42:19.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:19.804 --rc genhtml_branch_coverage=1 00:42:19.804 --rc genhtml_function_coverage=1 00:42:19.804 --rc genhtml_legend=1 00:42:19.804 --rc geninfo_all_blocks=1 00:42:19.804 --rc geninfo_unexecuted_blocks=1 00:42:19.804 00:42:19.804 ' 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:19.804 13:03:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@472 -- # prepare_net_devs 00:42:19.805 13:03:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@434 -- # local -g is_hw=no 00:42:19.805 13:03:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@436 -- # remove_spdk_ns 00:42:19.805 13:03:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:19.805 13:03:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:42:19.805 13:03:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:19.805 13:03:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:42:19.805 13:03:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:42:19.805 13:03:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:42:19.805 13:03:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:26.380 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:26.380 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:42:26.380 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:26.380 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:26.380 13:03:51 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:26.380 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:26.380 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:26.380 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:42:26.380 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:26.380 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:42:26.380 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:42:26.380 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:42:26.380 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:42:26.380 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:42:26.380 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:42:26.380 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:26.380 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:26.380 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:26.380 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:26.380 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:26.380 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:26.380 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:26.380 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:26.380 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:26.380 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:26.380 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:26.380 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:42:26.380 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:42:26.380 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:42:26.380 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:42:26.380 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:42:26.380 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:42:26.380 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:42:26.380 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:42:26.380 Found 0000:af:00.0 (0x8086 - 0x159b) 00:42:26.380 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:42:26.380 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:42:26.380 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:26.380 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:26.380 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:42:26.380 13:03:51 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:42:26.380 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:42:26.380 Found 0000:af:00.1 (0x8086 - 0x159b) 00:42:26.380 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:42:26.380 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ up == up ]] 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:42:26.381 Found net devices under 0000:af:00.0: cvl_0_0 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ up == up ]] 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:42:26.381 Found net devices under 0000:af:00.1: cvl_0_1 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # is_hw=yes 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:26.381 13:03:51 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:26.381 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:26.381 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.390 ms 00:42:26.381 00:42:26.381 --- 10.0.0.2 ping statistics --- 00:42:26.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:26.381 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:26.381 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:26.381 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:42:26.381 00:42:26.381 --- 10.0.0.1 ping statistics --- 00:42:26.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:26.381 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # return 0 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@505 -- # nvmfpid=661530 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@506 -- # waitforlisten 661530 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@831 -- # '[' -z 661530 ']' 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:26.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:26.381 [2024-12-16 13:03:51.775314] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:26.381 [2024-12-16 13:03:51.776298] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:42:26.381 [2024-12-16 13:03:51.776338] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:26.381 [2024-12-16 13:03:51.850558] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:42:26.381 [2024-12-16 13:03:51.891244] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
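For reference, the network bring-up just traced (nvmf_tcp_init in nvmf/common.sh) reduces to the sketch below: the target-side NIC port is moved into its own network namespace so target and initiator can share one host, and an iptables rule opens the NVMe/TCP listener port. Interface names, addresses, and the port are the ones from this run; treat them as placeholders for other hardware.

TARGET_IF=cvl_0_0; INIT_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INIT_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"               # target port into its own namespace
ip addr add 10.0.0.1/24 dev "$INIT_IF"             # initiator IP stays on the host stack
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INIT_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP listener port
ping -c 1 10.0.0.2                                 # host -> namespace
ip netns exec "$NS" ping -c 1 10.0.0.1             # namespace -> host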
00:42:26.381 [2024-12-16 13:03:51.891282] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:26.381 [2024-12-16 13:03:51.891289] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:26.381 [2024-12-16 13:03:51.891298] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:26.381 [2024-12-16 13:03:51.891303] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:26.381 [2024-12-16 13:03:51.891355] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:42:26.381 [2024-12-16 13:03:51.891356] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:42:26.381 [2024-12-16 13:03:51.952513] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:26.381 [2024-12-16 13:03:51.952788] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:42:26.381 [2024-12-16 13:03:51.953140] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # return 0 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:26.381 13:03:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:26.381 13:03:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:26.381 13:03:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:42:26.381 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:42:26.381 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:42:26.381 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:42:26.381 5000+0 records in 00:42:26.381 5000+0 records out 00:42:26.381 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0155207 s, 660 MB/s 00:42:26.381 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:42:26.381 13:03:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:26.381 13:03:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:26.381 AIO0 00:42:26.381 13:03:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:26.381 13:03:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:42:26.381 13:03:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:26.381 13:03:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:26.381 [2024-12-16 13:03:52.076198] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:26.382 13:03:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:26.382 13:03:52 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:42:26.382 13:03:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:26.382 13:03:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:26.382 13:03:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:26.382 13:03:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:42:26.382 13:03:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:26.382 13:03:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:26.382 13:03:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:26.382 13:03:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:26.382 13:03:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:26.382 13:03:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:26.382 [2024-12-16 13:03:52.124551] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:26.382 13:03:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:26.382 13:03:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:42:26.382 13:03:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 661530 0 00:42:26.382 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 661530 0 idle 00:42:26.382 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=661530 00:42:26.382 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:42:26.382 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:26.382 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:26.382 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:26.382 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:26.382 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:26.382 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:26.382 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:26.382 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:26.382 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 661530 -w 256 00:42:26.382 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:26.382 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 661530 root 20 0 128.2g 46080 33792 S 6.2 0.0 0:00.24 reactor_0' 00:42:26.382 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 661530 root 20 0 128.2g 46080 33792 S 6.2 0.0 0:00.24 reactor_0 00:42:26.382 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:26.382 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:26.382 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.2 00:42:26.382 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=6 00:42:26.382 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:26.382 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:26.382 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:26.382 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:26.382 13:03:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:42:26.382 13:03:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 661530 1 00:42:26.382 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 661530 1 idle 00:42:26.382 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=661530 00:42:26.382 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:42:26.382 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:26.382 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:26.382 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:26.382 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:26.382 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:26.382 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:26.382 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:26.382 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:26.382 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 661530 -w 256 00:42:26.382 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:42:26.641 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 661534 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1' 00:42:26.641 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 661534 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1 00:42:26.641 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:26.641 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:26.641 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:26.641 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:26.641 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:26.641 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:26.641 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:26.641 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:26.641 13:03:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:42:26.641 13:03:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=661694 00:42:26.641 13:03:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:42:26.641 13:03:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
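Stripped of the xtrace noise, the target-side setup and the load phase above are a short RPC sequence followed by a perf run. A minimal sketch, with $SPDK standing in for the Jenkins workspace checkout and /tmp/aiofile for the test's aiofile path (both placeholders); all flags are the ones traced in this run:

SPDK=/path/to/spdk                                  # placeholder for the checkout
dd if=/dev/zero of=/tmp/aiofile bs=2048 count=5000  # 10 MB backing file for the AIO bdev
$SPDK/scripts/rpc.py bdev_aio_create /tmp/aiofile AIO0 2048
$SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -q 256
$SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# 256 QD, 4 KiB, random 30/70 read/write for 10 s on cores 2-3 (-c 0xC);
# backgrounded, since the test runs its busy checks while the load is applied
$SPDK/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' &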
00:42:26.641 13:03:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:42:26.641 13:03:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 661530 0 00:42:26.641 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 661530 0 busy 00:42:26.641 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=661530 00:42:26.641 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:42:26.641 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:42:26.641 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:42:26.641 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:26.641 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:42:26.641 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:26.641 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:26.641 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:26.641 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 661530 -w 256 00:42:26.641 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:26.641 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 661530 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:00.25 reactor_0' 00:42:26.641 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 661530 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:00.25 reactor_0 00:42:26.641 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:26.641 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:26.641 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:26.641 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:26.641 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:42:26.641 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:42:26.641 13:03:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:42:28.015 13:03:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:42:28.015 13:03:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:28.015 13:03:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 661530 -w 256 00:42:28.015 13:03:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:28.015 13:03:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 661530 root 20 0 128.2g 46848 33792 R 99.9 0.1 0:02.56 reactor_0' 00:42:28.015 13:03:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 661530 root 20 0 128.2g 46848 33792 R 99.9 0.1 0:02.56 reactor_0 00:42:28.015 13:03:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:28.015 13:03:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:28.015 13:03:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:42:28.015 13:03:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:42:28.015 13:03:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:42:28.015 13:03:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < 
busy_threshold )) 00:42:28.015 13:03:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:42:28.015 13:03:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:28.015 13:03:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:42:28.015 13:03:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:42:28.015 13:03:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 661530 1 00:42:28.015 13:03:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 661530 1 busy 00:42:28.015 13:03:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=661530 00:42:28.015 13:03:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:42:28.015 13:03:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:42:28.015 13:03:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:42:28.015 13:03:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:28.015 13:03:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:42:28.015 13:03:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:28.015 13:03:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:28.015 13:03:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:28.015 13:03:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 661530 -w 256 00:42:28.015 13:03:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:42:28.015 13:03:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 661534 root 20 0 128.2g 46848 33792 R 99.9 0.1 0:01.35 reactor_1' 00:42:28.015 13:03:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 661534 root 20 0 128.2g 46848 33792 R 99.9 0.1 0:01.35 reactor_1 00:42:28.015 13:03:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:28.015 13:03:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:28.015 13:03:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:42:28.015 13:03:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:42:28.015 13:03:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:42:28.015 13:03:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:42:28.015 13:03:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:42:28.015 13:03:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:28.015 13:03:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 661694 00:42:37.982 Initializing NVMe Controllers 00:42:37.982 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:42:37.982 Controller IO queue size 256, less than required. 00:42:37.982 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:42:37.982 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:42:37.982 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:42:37.982 Initialization complete. Launching workers. 
00:42:37.982 ======================================================== 00:42:37.983 Latency(us) 00:42:37.983 Device Information : IOPS MiB/s Average min max 00:42:37.983 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16822.35 65.71 15226.03 3014.04 29492.05 00:42:37.983 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16291.75 63.64 15718.37 7658.57 27440.30 00:42:37.983 ======================================================== 00:42:37.983 Total : 33114.10 129.35 15468.26 3014.04 29492.05 00:42:37.983 00:42:37.983 [2024-12-16 13:04:02.747511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de1a80 is same with the state(6) to be set 00:42:37.983 13:04:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:42:37.983 13:04:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 661530 0 00:42:37.983 13:04:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 661530 0 idle 00:42:37.983 13:04:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=661530 00:42:37.983 13:04:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:42:37.983 13:04:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:37.983 13:04:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:37.983 13:04:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:37.983 13:04:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:37.983 13:04:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:37.983 13:04:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:37.983 13:04:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:37.983 13:04:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:37.983 13:04:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 661530 -w 256 00:42:37.983 13:04:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:37.983 13:04:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 661530 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:20.23 reactor_0' 00:42:37.983 13:04:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 661530 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:20.23 reactor_0 00:42:37.983 13:04:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:37.983 13:04:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:37.983 13:04:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:37.983 13:04:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:37.983 13:04:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:37.983 13:04:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:37.983 13:04:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:37.983 13:04:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:37.983 13:04:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:42:37.983 13:04:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 661530 1 00:42:37.983 13:04:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 661530 1 idle 
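The reactor_is_busy_or_idle helper that recurs throughout this trace is a plain top(1) probe with thresholds: an idle assertion requires the reactor_<idx> thread's %CPU to stay at or below 30, a busy assertion requires it to clear 30, retrying once per second otherwise. A standalone sketch of the idle check, with the pid and index taken from this run:

pid=661530         # nvmf_tgt pid in this run; substitute your own
idx=1              # reactor index to probe
idle_threshold=30  # percent CPU above which the reactor is not idle
row=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx")
cpu=$(echo "$row" | sed -e 's/^\s*//g' | awk '{print $9}')   # %CPU column
cpu=${cpu%%.*}     # truncate 99.9 -> 99, exactly as the helper does
if (( cpu > idle_threshold )); then
    echo "reactor_$idx is busy (${cpu}%)"
else
    echo "reactor_$idx is idle (${cpu}%)"
fi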
00:42:37.983 13:04:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=661530 00:42:37.983 13:04:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:42:37.983 13:04:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:37.983 13:04:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:37.983 13:04:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:37.983 13:04:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:37.983 13:04:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:37.983 13:04:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:37.983 13:04:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:37.983 13:04:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:37.983 13:04:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 661530 -w 256 00:42:37.983 13:04:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:42:37.983 13:04:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 661534 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:10.00 reactor_1' 00:42:37.983 13:04:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 661534 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:10.00 reactor_1 00:42:37.983 13:04:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:37.983 13:04:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:37.983 13:04:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:37.983 13:04:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:37.983 13:04:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:37.983 13:04:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:37.983 13:04:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:37.983 13:04:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:37.983 13:04:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:42:37.983 13:04:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:42:37.983 13:04:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1198 -- # local i=0 00:42:37.983 13:04:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:42:37.983 13:04:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:42:37.983 13:04:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1205 -- # sleep 2 00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 
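With the reactors idle again, the test attaches the kernel initiator. The connect-and-waitforserial pattern traced here is roughly the following; the hostnqn below is freshly generated rather than the cached per-run value the harness uses (the harness also passes a matching --hostid):

HOSTNQN=$(nvme gen-hostnqn)
nvme connect --hostnqn="$HOSTNQN" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
# poll (up to 15 tries, 2 s apart) until a block device carrying the
# subsystem's serial number shows up
for i in $(seq 1 15); do
    n=$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)
    [ "$n" -ge 1 ] && break
    sleep 2
done
# ... exercise the device ...
nvme disconnect -n nqn.2016-06.io.spdk:cnode1      # traced further below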
00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # return 0 00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 661530 0 00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 661530 0 idle 00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=661530 00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 661530 -w 256 00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 661530 root 20 0 128.2g 72192 33792 S 0.0 0.1 0:20.46 reactor_0' 00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 661530 root 20 0 128.2g 72192 33792 S 0.0 0.1 0:20.46 reactor_0 00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 661530 1 00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 661530 1 idle 00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=661530 00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 661530 -w 256 00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 661534 root 20 0 128.2g 72192 33792 S 0.0 0.1 0:10.10 reactor_1' 00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 661534 root 20 0 128.2g 72192 33792 S 0.0 0.1 0:10.10 reactor_1 00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:39.889 13:04:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:42:40.148 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:42:40.148 13:04:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:42:40.148 13:04:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1219 -- # local i=0 00:42:40.148 13:04:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:42:40.148 13:04:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:40.148 13:04:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:42:40.148 13:04:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:40.148 13:04:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # return 0 00:42:40.148 13:04:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:42:40.148 13:04:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:42:40.148 13:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # nvmfcleanup 00:42:40.148 13:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:42:40.148 13:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:40.148 13:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:42:40.148 13:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:40.148 13:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:40.148 rmmod nvme_tcp 00:42:40.148 rmmod nvme_fabrics 00:42:40.148 rmmod nvme_keyring 00:42:40.148 13:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:40.148 13:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:42:40.148 13:04:06 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:42:40.148 13:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@513 -- # '[' -n 661530 ']' 00:42:40.148 13:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@514 -- # killprocess 661530 00:42:40.149 13:04:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@950 -- # '[' -z 661530 ']' 00:42:40.149 13:04:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # kill -0 661530 00:42:40.149 13:04:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # uname 00:42:40.149 13:04:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:40.149 13:04:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 661530 00:42:40.149 13:04:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:40.149 13:04:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:40.149 13:04:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 661530' 00:42:40.149 killing process with pid 661530 00:42:40.149 13:04:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@969 -- # kill 661530 00:42:40.149 13:04:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@974 -- # wait 661530 00:42:40.408 13:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:42:40.408 13:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:42:40.408 13:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:42:40.408 13:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:42:40.408 13:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@787 -- # iptables-save 00:42:40.408 13:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:42:40.408 13:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@787 -- # iptables-restore 00:42:40.408 13:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:40.408 13:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:40.408 13:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:40.408 13:04:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:42:40.408 13:04:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:42.945 13:04:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:42.945 00:42:42.945 real 0m22.844s 00:42:42.945 user 0m39.721s 00:42:42.945 sys 0m8.355s 00:42:42.945 13:04:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:42.945 13:04:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:42.945 ************************************ 00:42:42.945 END TEST nvmf_interrupt 00:42:42.945 ************************************ 00:42:42.945 00:42:42.945 real 35m17.389s 00:42:42.945 user 86m24.176s 00:42:42.945 sys 10m19.806s 00:42:42.945 13:04:08 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:42.945 13:04:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:42.945 ************************************ 00:42:42.945 END TEST nvmf_tcp 00:42:42.945 ************************************ 00:42:42.945 13:04:08 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:42:42.945 13:04:08 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:42:42.945 13:04:08 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:42:42.945 13:04:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:42.945 13:04:08 -- common/autotest_common.sh@10 -- # set +x 00:42:42.945 ************************************ 00:42:42.945 START TEST spdkcli_nvmf_tcp 00:42:42.945 ************************************ 00:42:42.945 13:04:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:42:42.945 * Looking for test storage... 00:42:42.945 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:42:42.945 13:04:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:42:42.945 13:04:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:42:42.945 13:04:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:42:42.945 13:04:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:42:42.945 13:04:08 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:42.945 13:04:08 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:42.945 13:04:08 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:42.945 13:04:08 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:42:42.945 13:04:08 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:42:42.945 13:04:08 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:42:42.945 13:04:08 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:42:42.945 13:04:08 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:42:42.945 13:04:08 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:42:42.945 13:04:08 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:42:42.945 13:04:08 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:42.945 13:04:08 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:42:42.945 13:04:08 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:42:42.945 13:04:08 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:42.945 13:04:08 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:42.945 13:04:08 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:42:42.945 13:04:08 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:42:42.945 13:04:08 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:42.945 13:04:08 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:42:42.945 13:04:08 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:42:42.945 13:04:08 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:42:42.945 13:04:08 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:42:42.945 13:04:08 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:42.945 13:04:08 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:42:42.945 13:04:08 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:42:42.945 13:04:08 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:42.945 13:04:08 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:42.945 13:04:08 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:42:42.945 13:04:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:42.945 13:04:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:42:42.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:42.945 --rc genhtml_branch_coverage=1 00:42:42.945 --rc genhtml_function_coverage=1 00:42:42.945 --rc genhtml_legend=1 00:42:42.945 --rc geninfo_all_blocks=1 00:42:42.945 --rc geninfo_unexecuted_blocks=1 00:42:42.945 00:42:42.945 ' 00:42:42.945 13:04:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:42:42.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:42.945 --rc genhtml_branch_coverage=1 00:42:42.945 --rc genhtml_function_coverage=1 00:42:42.945 --rc genhtml_legend=1 00:42:42.945 --rc geninfo_all_blocks=1 00:42:42.945 --rc geninfo_unexecuted_blocks=1 00:42:42.945 00:42:42.945 ' 00:42:42.945 13:04:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:42:42.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:42.945 --rc genhtml_branch_coverage=1 00:42:42.945 --rc genhtml_function_coverage=1 00:42:42.945 --rc genhtml_legend=1 00:42:42.945 --rc geninfo_all_blocks=1 00:42:42.945 --rc geninfo_unexecuted_blocks=1 00:42:42.945 00:42:42.945 ' 00:42:42.945 13:04:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:42:42.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:42.945 --rc genhtml_branch_coverage=1 00:42:42.945 --rc genhtml_function_coverage=1 00:42:42.945 --rc genhtml_legend=1 00:42:42.945 --rc geninfo_all_blocks=1 00:42:42.946 --rc geninfo_unexecuted_blocks=1 00:42:42.946 00:42:42.946 ' 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:42:42.946 
13:04:08 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:42:42.946 13:04:08 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:42.946 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=664366 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 664366 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 664366 ']' 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:42.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:42.946 [2024-12-16 13:04:08.775262] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
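The spdkcli test brings up its own target (nvmf_tgt -m 0x3 -p 0) and blocks in waitforlisten until the RPC socket answers. Outside the harness that wait can be approximated by polling a standard RPC; $SPDK is again a placeholder for the checkout:

$SPDK/build/bin/nvmf_tgt -m 0x3 -p 0 &
tgt_pid=$!
# poll the UNIX-domain RPC socket until the app accepts commands
until $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
done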
00:42:42.946 [2024-12-16 13:04:08.775314] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid664366 ] 00:42:42.946 [2024-12-16 13:04:08.842111] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:42:42.946 [2024-12-16 13:04:08.882267] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:42:42.946 [2024-12-16 13:04:08.882269] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:42.946 13:04:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:42.946 13:04:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:42:42.946 13:04:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:42:42.946 13:04:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:42:42.946 13:04:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:42.946 13:04:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:42.946 13:04:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:42:42.946 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:42:42.946 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:42:42.946 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:42:42.946 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:42:42.946 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:42:42.946 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:42:42.946 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:42:42.946 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:42:42.946 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:42:42.946 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:42:42.946 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:42:42.946 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:42:42.946 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:42:42.946 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:42:42.946 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:42:42.946 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:42:42.946 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:42:42.946 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:42:42.946 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:42:42.946 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:42:42.946 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:42:42.946 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:42:42.946 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:42:42.946 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:42:42.946 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:42:42.946 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:42:42.946 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:42:42.946 ' 00:42:46.234 [2024-12-16 13:04:11.741001] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:47.170 [2024-12-16 13:04:13.081437] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:42:49.703 [2024-12-16 13:04:15.569055] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:42:52.236 [2024-12-16 13:04:17.727777] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:42:53.613 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:42:53.613 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:42:53.613 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:42:53.613 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:42:53.613 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:42:53.613 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:42:53.613 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:42:53.613 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:42:53.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:42:53.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:42:53.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:42:53.613 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:53.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:42:53.613 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:42:53.613 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:53.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:42:53.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:42:53.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:42:53.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:42:53.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:53.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:42:53.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:42:53.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:42:53.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:42:53.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:53.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:42:53.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:42:53.613 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:42:53.613 13:04:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:42:53.613 13:04:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:53.613 13:04:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:53.613 13:04:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:42:53.613 13:04:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:53.613 13:04:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:53.613 13:04:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:42:53.613 13:04:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:42:53.883 13:04:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:42:54.191 13:04:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:42:54.191 13:04:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:42:54.191 13:04:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:54.191 13:04:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:54.191 
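For reference: the check_match step above captures the live configuration tree with spdkcli.py's one-shot 'll /nvmf' listing and compares it against a stored template using SPDK's match utility. A minimal sketch of the same verification run by hand, assuming an SPDK checkout at $SPDK_DIR and a running target on the default RPC socket; judging from the invocation in this run, match derives the candidate file name by stripping the trailing .match:
# dump the current /nvmf subtree next to its template
$SPDK_DIR/scripts/spdkcli.py ll /nvmf > $SPDK_DIR/test/spdkcli/match_files/spdkcli_nvmf.test
# compare spdkcli_nvmf.test against spdkcli_nvmf.test.match; a mismatch fails the check
$SPDK_DIR/test/app/match/match $SPDK_DIR/test/spdkcli/match_files/spdkcli_nvmf.test.match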
13:04:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:42:54.191 13:04:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:54.191 13:04:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:54.191 13:04:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:42:54.191 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:42:54.191 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:42:54.191 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:42:54.191 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:42:54.191 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:42:54.191 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:42:54.191 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:42:54.191 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:42:54.191 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:42:54.191 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:42:54.191 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:42:54.191 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:42:54.191 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:42:54.191 ' 00:42:59.535 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:42:59.535 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:42:59.535 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:42:59.535 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:42:59.535 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:42:59.535 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:42:59.535 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:42:59.535 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:42:59.535 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:42:59.535 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:42:59.535 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:42:59.535 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:42:59.535 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:42:59.535 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:42:59.794 13:04:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:42:59.794 13:04:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:59.794 13:04:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:59.794 
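Each spdkcli path above is a thin front end for a JSON-RPC method on the target, so the same teardown can be driven directly with rpc.py. A rough equivalence sketch under that assumption, against the default /var/tmp/spdk.sock socket (of these methods, nvmf_delete_subsystem also appears verbatim later in this run); note the ordering the job enforces: namespaces, hosts, and listeners come off before the subsystem is deleted, and the backing bdevs go last:
# detach namespace 1, drop one listener, remove the subsystem, then free the bdev
$SPDK_DIR/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2014-08.org.spdk:cnode1 1
$SPDK_DIR/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2014-08.org.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4262
$SPDK_DIR/scripts/rpc.py nvmf_delete_subsystem nqn.2014-08.org.spdk:cnode1
$SPDK_DIR/scripts/rpc.py bdev_malloc_delete Malloc1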
13:04:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 664366 00:42:59.794 13:04:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 664366 ']' 00:42:59.794 13:04:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 664366 00:42:59.794 13:04:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:42:59.794 13:04:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:59.794 13:04:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 664366 00:42:59.794 13:04:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:59.794 13:04:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:59.794 13:04:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 664366' 00:42:59.794 killing process with pid 664366 00:42:59.794 13:04:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 664366 00:42:59.794 13:04:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 664366 00:43:00.053 13:04:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:43:00.053 13:04:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:43:00.053 13:04:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 664366 ']' 00:43:00.053 13:04:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 664366 00:43:00.053 13:04:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 664366 ']' 00:43:00.053 13:04:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 664366 00:43:00.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (664366) - No such process 00:43:00.053 13:04:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 664366 is not found' 00:43:00.053 Process with pid 664366 is not found 00:43:00.053 13:04:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:43:00.053 13:04:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:43:00.053 13:04:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:43:00.053 00:43:00.053 real 0m17.401s 00:43:00.053 user 0m38.284s 00:43:00.053 sys 0m0.861s 00:43:00.053 13:04:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:00.053 13:04:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:00.053 ************************************ 00:43:00.053 END TEST spdkcli_nvmf_tcp 00:43:00.053 ************************************ 00:43:00.053 13:04:25 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:43:00.053 13:04:25 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:43:00.053 13:04:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:00.053 13:04:25 -- common/autotest_common.sh@10 -- # set +x 00:43:00.053 ************************************ 00:43:00.053 START TEST nvmf_identify_passthru 00:43:00.053 ************************************ 00:43:00.053 13:04:25 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:43:00.053 * Looking for test storage... 
00:43:00.053 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:00.053 13:04:26 nvmf_identify_passthru -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:43:00.053 13:04:26 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lcov --version 00:43:00.053 13:04:26 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:43:00.053 13:04:26 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:43:00.053 13:04:26 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:00.053 13:04:26 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:00.053 13:04:26 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:00.053 13:04:26 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:43:00.053 13:04:26 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:43:00.053 13:04:26 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:43:00.053 13:04:26 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:43:00.053 13:04:26 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:43:00.053 13:04:26 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:43:00.053 13:04:26 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:43:00.053 13:04:26 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:00.053 13:04:26 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:43:00.053 13:04:26 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:43:00.053 13:04:26 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:00.053 13:04:26 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:00.053 13:04:26 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:43:00.053 13:04:26 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:43:00.053 13:04:26 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:00.313 13:04:26 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:43:00.313 13:04:26 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:43:00.313 13:04:26 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:43:00.313 13:04:26 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:43:00.313 13:04:26 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:00.313 13:04:26 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:43:00.313 13:04:26 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:43:00.313 13:04:26 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:00.313 13:04:26 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:00.313 13:04:26 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:43:00.313 13:04:26 nvmf_identify_passthru -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:00.313 13:04:26 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:43:00.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:00.313 --rc genhtml_branch_coverage=1 00:43:00.313 --rc genhtml_function_coverage=1 00:43:00.313 --rc genhtml_legend=1 00:43:00.313 --rc geninfo_all_blocks=1 00:43:00.313 --rc geninfo_unexecuted_blocks=1 00:43:00.313 00:43:00.313 ' 00:43:00.313 13:04:26 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:43:00.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:00.313 --rc genhtml_branch_coverage=1 00:43:00.313 --rc genhtml_function_coverage=1 00:43:00.313 --rc genhtml_legend=1 00:43:00.313 --rc geninfo_all_blocks=1 00:43:00.313 --rc geninfo_unexecuted_blocks=1 00:43:00.313 00:43:00.313 ' 00:43:00.313 13:04:26 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:43:00.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:00.313 --rc genhtml_branch_coverage=1 00:43:00.313 --rc genhtml_function_coverage=1 00:43:00.313 --rc genhtml_legend=1 00:43:00.313 --rc geninfo_all_blocks=1 00:43:00.313 --rc geninfo_unexecuted_blocks=1 00:43:00.313 00:43:00.313 ' 00:43:00.313 13:04:26 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:43:00.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:00.313 --rc genhtml_branch_coverage=1 00:43:00.313 --rc genhtml_function_coverage=1 00:43:00.313 --rc genhtml_legend=1 00:43:00.313 --rc geninfo_all_blocks=1 00:43:00.313 --rc geninfo_unexecuted_blocks=1 00:43:00.313 00:43:00.313 ' 00:43:00.313 13:04:26 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:00.313 13:04:26 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:43:00.313 13:04:26 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:00.313 13:04:26 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:00.313 13:04:26 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:00.313 13:04:26 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:43:00.313 13:04:26 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:00.313 13:04:26 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:00.313 13:04:26 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:00.313 13:04:26 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:00.314 13:04:26 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:00.314 13:04:26 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:00.314 13:04:26 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:43:00.314 13:04:26 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:43:00.314 13:04:26 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:00.314 13:04:26 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:00.314 13:04:26 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:00.314 13:04:26 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:00.314 13:04:26 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:00.314 13:04:26 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:43:00.314 13:04:26 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:00.314 13:04:26 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:00.314 13:04:26 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:00.314 13:04:26 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:00.314 13:04:26 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:00.314 13:04:26 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:00.314 13:04:26 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:43:00.314 13:04:26 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:00.314 13:04:26 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:43:00.314 13:04:26 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:00.314 13:04:26 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:00.314 13:04:26 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:00.314 13:04:26 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:00.314 13:04:26 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:00.314 13:04:26 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:00.314 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:00.314 13:04:26 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:00.314 13:04:26 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:00.314 13:04:26 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:00.314 13:04:26 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:00.314 13:04:26 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:43:00.314 13:04:26 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:00.314 13:04:26 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:00.314 13:04:26 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:00.314 13:04:26 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:00.314 13:04:26 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:00.314 13:04:26 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:00.314 13:04:26 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:43:00.314 13:04:26 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:00.314 13:04:26 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:43:00.314 13:04:26 nvmf_identify_passthru -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:43:00.314 13:04:26 nvmf_identify_passthru -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:00.314 13:04:26 nvmf_identify_passthru -- nvmf/common.sh@472 -- # prepare_net_devs 00:43:00.314 13:04:26 nvmf_identify_passthru -- nvmf/common.sh@434 -- # local -g is_hw=no 00:43:00.314 13:04:26 nvmf_identify_passthru -- nvmf/common.sh@436 -- # remove_spdk_ns 00:43:00.314 13:04:26 nvmf_identify_passthru -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:00.314 13:04:26 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:00.314 13:04:26 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:00.314 13:04:26 nvmf_identify_passthru -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:43:00.314 13:04:26 nvmf_identify_passthru -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:43:00.314 13:04:26 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:43:00.314 13:04:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:06.887 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:06.887 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:43:06.887 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:06.887 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:06.887 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:06.887 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:06.887 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:06.887 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:43:06.887 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:43:06.888 13:04:31 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:43:06.888 Found 0000:af:00.0 (0x8086 - 0x159b) 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:43:06.888 Found 0000:af:00.1 (0x8086 - 0x159b) 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:43:06.888 
13:04:31 nvmf_identify_passthru -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ up == up ]] 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:43:06.888 Found net devices under 0000:af:00.0: cvl_0_0 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ up == up ]] 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:43:06.888 Found net devices under 0000:af:00.1: cvl_0_1 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@438 -- # is_hw=yes 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:06.888 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:06.888 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.312 ms 00:43:06.888 00:43:06.888 --- 10.0.0.2 ping statistics --- 00:43:06.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:06.888 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:06.888 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:06.888 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:43:06.888 00:43:06.888 --- 10.0.0.1 ping statistics --- 00:43:06.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:06.888 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@446 -- # return 0 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:43:06.888 13:04:31 nvmf_identify_passthru -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:43:06.888 13:04:32 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:43:06.888 13:04:32 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:06.888 13:04:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:06.888 13:04:32 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:43:06.888 13:04:32 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:43:06.888 13:04:32 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:43:06.888 13:04:32 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:43:06.888 13:04:32 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:43:06.888 13:04:32 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:43:06.888 13:04:32 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:43:06.888 13:04:32 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:43:06.888 13:04:32 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:43:06.888 13:04:32 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:43:06.888 13:04:32 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:43:06.888 13:04:32 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:43:06.888 13:04:32 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:5e:00.0 00:43:06.888 13:04:32 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:43:06.889 13:04:32 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:43:06.889 13:04:32 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:43:06.889 13:04:32 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:43:06.889 13:04:32 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:43:11.079 13:04:36 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=BTLJ807001JM1P0FGN 00:43:11.079 13:04:36 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:43:11.079 13:04:36 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:43:11.079 13:04:36 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:43:14.369 13:04:40 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:43:14.369 13:04:40 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:43:14.369 13:04:40 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:14.369 13:04:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:14.369 13:04:40 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:43:14.369 13:04:40 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:14.369 13:04:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:14.369 13:04:40 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=671269 00:43:14.369 13:04:40 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:43:14.369 13:04:40 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:43:14.369 13:04:40 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 671269 00:43:14.369 13:04:40 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 671269 ']' 00:43:14.369 13:04:40 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:14.369 13:04:40 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:14.369 13:04:40 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:14.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:14.627 13:04:40 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:14.627 13:04:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:14.627 [2024-12-16 13:04:40.482652] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:43:14.627 [2024-12-16 13:04:40.482703] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:14.627 [2024-12-16 13:04:40.558776] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:14.627 [2024-12-16 13:04:40.601261] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:14.627 [2024-12-16 13:04:40.601305] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:43:14.627 [2024-12-16 13:04:40.601311] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:14.627 [2024-12-16 13:04:40.601318] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:14.627 [2024-12-16 13:04:40.601322] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:14.627 [2024-12-16 13:04:40.601364] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:43:14.627 [2024-12-16 13:04:40.601471] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:43:14.627 [2024-12-16 13:04:40.601488] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:43:14.627 [2024-12-16 13:04:40.601494] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:43:14.627 13:04:40 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:14.627 13:04:40 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:43:14.627 13:04:40 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:43:14.627 13:04:40 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:14.627 13:04:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:14.627 INFO: Log level set to 20 00:43:14.627 INFO: Requests: 00:43:14.627 { 00:43:14.627 "jsonrpc": "2.0", 00:43:14.627 "method": "nvmf_set_config", 00:43:14.627 "id": 1, 00:43:14.627 "params": { 00:43:14.627 "admin_cmd_passthru": { 00:43:14.627 "identify_ctrlr": true 00:43:14.627 } 00:43:14.627 } 00:43:14.627 } 00:43:14.627 00:43:14.627 INFO: response: 00:43:14.627 { 00:43:14.627 "jsonrpc": "2.0", 00:43:14.627 "id": 1, 00:43:14.627 "result": true 00:43:14.627 } 00:43:14.627 00:43:14.627 13:04:40 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:14.627 13:04:40 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:43:14.627 13:04:40 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:14.627 13:04:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:14.627 INFO: Setting log level to 20 00:43:14.627 INFO: Setting log level to 20 00:43:14.628 INFO: Log level set to 20 00:43:14.628 INFO: Log level set to 20 00:43:14.628 INFO: Requests: 00:43:14.628 { 00:43:14.628 "jsonrpc": "2.0", 00:43:14.628 "method": "framework_start_init", 00:43:14.628 "id": 1 00:43:14.628 } 00:43:14.628 00:43:14.628 INFO: Requests: 00:43:14.628 { 00:43:14.628 "jsonrpc": "2.0", 00:43:14.628 "method": "framework_start_init", 00:43:14.628 "id": 1 00:43:14.628 } 00:43:14.628 00:43:14.886 [2024-12-16 13:04:40.739471] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:43:14.886 INFO: response: 00:43:14.886 { 00:43:14.886 "jsonrpc": "2.0", 00:43:14.886 "id": 1, 00:43:14.886 "result": true 00:43:14.886 } 00:43:14.886 00:43:14.886 INFO: response: 00:43:14.886 { 00:43:14.886 "jsonrpc": "2.0", 00:43:14.886 "id": 1, 00:43:14.886 "result": true 00:43:14.886 } 00:43:14.886 00:43:14.886 13:04:40 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:14.886 13:04:40 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:14.886 13:04:40 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:14.886 13:04:40 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:43:14.886 INFO: Setting log level to 40 00:43:14.886 INFO: Setting log level to 40 00:43:14.886 INFO: Setting log level to 40 00:43:14.886 [2024-12-16 13:04:40.753002] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:14.886 13:04:40 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:14.886 13:04:40 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:43:14.886 13:04:40 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:14.886 13:04:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:14.886 13:04:40 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:43:14.886 13:04:40 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:14.886 13:04:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:18.167 Nvme0n1 00:43:18.167 13:04:43 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:18.167 13:04:43 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:43:18.167 13:04:43 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:18.167 13:04:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:18.167 13:04:43 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:18.167 13:04:43 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:43:18.167 13:04:43 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:18.167 13:04:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:18.167 13:04:43 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:18.167 13:04:43 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:18.167 13:04:43 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:18.167 13:04:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:18.167 [2024-12-16 13:04:43.659502] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:18.167 13:04:43 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:18.167 13:04:43 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:43:18.167 13:04:43 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:18.167 13:04:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:18.167 [ 00:43:18.167 { 00:43:18.167 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:43:18.167 "subtype": "Discovery", 00:43:18.167 "listen_addresses": [], 00:43:18.167 "allow_any_host": true, 00:43:18.167 "hosts": [] 00:43:18.167 }, 00:43:18.167 { 00:43:18.167 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:43:18.167 "subtype": "NVMe", 00:43:18.167 "listen_addresses": [ 00:43:18.167 { 00:43:18.167 "trtype": "TCP", 00:43:18.167 "adrfam": "IPv4", 00:43:18.167 "traddr": "10.0.0.2", 00:43:18.167 "trsvcid": "4420" 00:43:18.167 } 00:43:18.167 ], 00:43:18.167 "allow_any_host": true, 00:43:18.167 "hosts": [], 00:43:18.167 "serial_number": 
"SPDK00000000000001", 00:43:18.167 "model_number": "SPDK bdev Controller", 00:43:18.167 "max_namespaces": 1, 00:43:18.167 "min_cntlid": 1, 00:43:18.167 "max_cntlid": 65519, 00:43:18.167 "namespaces": [ 00:43:18.167 { 00:43:18.167 "nsid": 1, 00:43:18.167 "bdev_name": "Nvme0n1", 00:43:18.167 "name": "Nvme0n1", 00:43:18.167 "nguid": "2967F6B2C9774B3BAE117B043FE1D4B3", 00:43:18.167 "uuid": "2967f6b2-c977-4b3b-ae11-7b043fe1d4b3" 00:43:18.167 } 00:43:18.167 ] 00:43:18.167 } 00:43:18.167 ] 00:43:18.167 13:04:43 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:18.167 13:04:43 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:43:18.167 13:04:43 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:43:18.167 13:04:43 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:43:18.167 13:04:43 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ807001JM1P0FGN 00:43:18.167 13:04:43 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:43:18.167 13:04:43 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:43:18.167 13:04:43 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:43:18.167 13:04:44 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:43:18.167 13:04:44 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ807001JM1P0FGN '!=' BTLJ807001JM1P0FGN ']' 00:43:18.167 13:04:44 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:43:18.167 13:04:44 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:18.167 13:04:44 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:18.167 13:04:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:18.167 13:04:44 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:18.167 13:04:44 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:43:18.167 13:04:44 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:43:18.167 13:04:44 nvmf_identify_passthru -- nvmf/common.sh@512 -- # nvmfcleanup 00:43:18.167 13:04:44 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:43:18.167 13:04:44 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:18.167 13:04:44 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:43:18.168 13:04:44 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:18.168 13:04:44 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:18.168 rmmod nvme_tcp 00:43:18.168 rmmod nvme_fabrics 00:43:18.168 rmmod nvme_keyring 00:43:18.168 13:04:44 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:18.168 13:04:44 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:43:18.168 13:04:44 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:43:18.168 13:04:44 nvmf_identify_passthru -- nvmf/common.sh@513 -- # 
'[' -n 671269 ']' 00:43:18.168 13:04:44 nvmf_identify_passthru -- nvmf/common.sh@514 -- # killprocess 671269 00:43:18.168 13:04:44 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 671269 ']' 00:43:18.168 13:04:44 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 671269 00:43:18.168 13:04:44 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:43:18.168 13:04:44 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:18.168 13:04:44 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 671269 00:43:18.168 13:04:44 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:43:18.168 13:04:44 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:43:18.168 13:04:44 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 671269' 00:43:18.168 killing process with pid 671269 00:43:18.168 13:04:44 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 671269 00:43:18.168 13:04:44 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 671269 00:43:20.067 13:04:45 nvmf_identify_passthru -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:43:20.067 13:04:45 nvmf_identify_passthru -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:43:20.067 13:04:45 nvmf_identify_passthru -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:43:20.067 13:04:45 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:43:20.067 13:04:45 nvmf_identify_passthru -- nvmf/common.sh@787 -- # iptables-save 00:43:20.067 13:04:45 nvmf_identify_passthru -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:43:20.067 13:04:45 nvmf_identify_passthru -- nvmf/common.sh@787 -- # iptables-restore 00:43:20.067 13:04:45 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:20.067 13:04:45 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:20.067 13:04:45 nvmf_identify_passthru -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:20.067 13:04:45 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:20.067 13:04:45 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:21.973 13:04:47 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:21.973 00:43:21.973 real 0m21.773s 00:43:21.973 user 0m27.687s 00:43:21.973 sys 0m5.271s 00:43:21.973 13:04:47 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:21.973 13:04:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:21.973 ************************************ 00:43:21.973 END TEST nvmf_identify_passthru 00:43:21.973 ************************************ 00:43:21.973 13:04:47 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:43:21.973 13:04:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:43:21.973 13:04:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:21.973 13:04:47 -- common/autotest_common.sh@10 -- # set +x 00:43:21.973 ************************************ 00:43:21.973 START TEST nvmf_dif 00:43:21.973 ************************************ 00:43:21.973 13:04:47 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:43:21.973 * Looking for test storage... 
00:43:21.973 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:21.973 13:04:47 nvmf_dif -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:43:21.973 13:04:47 nvmf_dif -- common/autotest_common.sh@1681 -- # lcov --version 00:43:21.973 13:04:47 nvmf_dif -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:43:21.974 13:04:47 nvmf_dif -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:43:21.974 13:04:47 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:21.974 13:04:47 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:21.974 13:04:47 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:21.974 13:04:47 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:43:21.974 13:04:47 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:43:21.974 13:04:47 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:43:21.974 13:04:47 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:43:21.974 13:04:47 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:43:21.974 13:04:47 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:43:21.974 13:04:47 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:43:21.974 13:04:47 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:21.974 13:04:47 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:43:21.974 13:04:47 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:43:21.974 13:04:47 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:21.974 13:04:47 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:43:21.974 13:04:47 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:43:21.974 13:04:47 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:43:21.974 13:04:47 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:21.974 13:04:47 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:43:21.974 13:04:47 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:43:21.974 13:04:47 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:43:21.974 13:04:47 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:43:21.974 13:04:47 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:21.974 13:04:47 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:43:21.974 13:04:47 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:43:21.974 13:04:47 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:21.974 13:04:47 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:21.974 13:04:47 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:43:21.974 13:04:47 nvmf_dif -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:21.974 13:04:47 nvmf_dif -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:43:21.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:21.974 --rc genhtml_branch_coverage=1 00:43:21.974 --rc genhtml_function_coverage=1 00:43:21.974 --rc genhtml_legend=1 00:43:21.974 --rc geninfo_all_blocks=1 00:43:21.974 --rc geninfo_unexecuted_blocks=1 00:43:21.974 00:43:21.974 ' 00:43:21.974 13:04:47 nvmf_dif -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:43:21.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:21.974 --rc genhtml_branch_coverage=1 00:43:21.974 --rc genhtml_function_coverage=1 00:43:21.974 --rc genhtml_legend=1 00:43:21.974 --rc geninfo_all_blocks=1 00:43:21.974 --rc geninfo_unexecuted_blocks=1 00:43:21.974 00:43:21.974 ' 00:43:21.974 13:04:47 nvmf_dif -- common/autotest_common.sh@1695 -- # 
export 'LCOV=lcov 00:43:21.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:21.974 --rc genhtml_branch_coverage=1 00:43:21.974 --rc genhtml_function_coverage=1 00:43:21.974 --rc genhtml_legend=1 00:43:21.974 --rc geninfo_all_blocks=1 00:43:21.974 --rc geninfo_unexecuted_blocks=1 00:43:21.974 00:43:21.974 ' 00:43:21.974 13:04:47 nvmf_dif -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:43:21.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:21.974 --rc genhtml_branch_coverage=1 00:43:21.974 --rc genhtml_function_coverage=1 00:43:21.974 --rc genhtml_legend=1 00:43:21.974 --rc geninfo_all_blocks=1 00:43:21.974 --rc geninfo_unexecuted_blocks=1 00:43:21.974 00:43:21.974 ' 00:43:21.974 13:04:47 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:21.974 13:04:47 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:43:21.974 13:04:47 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:21.974 13:04:47 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:21.974 13:04:47 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:21.974 13:04:47 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:21.974 13:04:47 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:21.974 13:04:47 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:21.974 13:04:47 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:21.974 13:04:47 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:21.974 13:04:47 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:21.974 13:04:47 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:21.974 13:04:47 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:43:21.974 13:04:47 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:43:21.974 13:04:47 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:21.974 13:04:47 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:21.974 13:04:47 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:21.974 13:04:47 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:21.974 13:04:47 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:21.974 13:04:47 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:43:21.974 13:04:47 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:21.974 13:04:47 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:21.974 13:04:47 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:21.974 13:04:47 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:21.974 13:04:47 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:21.974 13:04:47 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:21.974 13:04:47 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:43:21.974 13:04:47 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:21.974 13:04:47 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:43:21.974 13:04:47 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:21.974 13:04:47 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:21.974 13:04:47 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:21.974 13:04:47 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:21.974 13:04:47 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:21.974 13:04:47 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:21.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:21.974 13:04:47 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:21.974 13:04:47 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:21.974 13:04:47 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:21.974 13:04:47 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:43:21.974 13:04:47 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:43:21.974 13:04:47 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:43:21.974 13:04:47 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:43:21.974 13:04:47 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:43:21.974 13:04:47 nvmf_dif -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:43:21.974 13:04:47 nvmf_dif -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:21.974 13:04:47 nvmf_dif -- nvmf/common.sh@472 -- # prepare_net_devs 00:43:21.974 13:04:47 nvmf_dif -- nvmf/common.sh@434 -- # local -g is_hw=no 00:43:21.974 13:04:47 nvmf_dif -- nvmf/common.sh@436 -- # remove_spdk_ns 00:43:21.974 13:04:47 nvmf_dif -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:21.974 13:04:47 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:21.974 13:04:47 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:21.974 13:04:47 nvmf_dif -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:43:21.974 13:04:47 nvmf_dif -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:43:21.974 13:04:47 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:43:21.974 13:04:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:43:28.547 Found 0000:af:00.0 (0x8086 - 0x159b) 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:43:28.547 13:04:53 nvmf_dif 
-- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:43:28.547 Found 0000:af:00.1 (0x8086 - 0x159b) 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@414 -- # [[ up == up ]] 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:43:28.547 Found net devices under 0000:af:00.0: cvl_0_0 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@414 -- # [[ up == up ]] 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:43:28.547 Found net devices under 0000:af:00.1: cvl_0_1 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@438 -- # is_hw=yes 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:28.547 13:04:53 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:28.548 13:04:53 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:28.548 
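For reference, the nvmf_tcp_init step traced in the next few entries is what lets one host act as both NVMe/TCP initiator and target over physical NICs: the target-side port is moved into its own network namespace so the two ends get independent IP stacks. A condensed sketch of that wiring, using the interface names cvl_0_0/cvl_0_1 and addresses reported in this run (the log additionally flushes stale addresses first and tags the iptables rule with an SPDK_NVMF comment; both are omitted here for brevity):

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                           # target-side port leaves the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator IP, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP, inside the namespace
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP listener port
ping -c 1 10.0.0.2                                        # root namespace -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                    # target namespace -> initiator

Every later target-side command (nvmf_tgt and its listeners) is then prefixed with "ip netns exec $NS", which is exactly what the NVMF_TARGET_NS_CMD assignment above records.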
13:04:53 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:28.548 13:04:53 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:28.548 13:04:53 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:28.548 13:04:53 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:28.548 13:04:53 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:28.548 13:04:53 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:28.548 13:04:53 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:28.548 13:04:53 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:28.548 13:04:53 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:28.548 13:04:53 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:28.548 13:04:53 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:28.548 13:04:53 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:28.548 13:04:53 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:28.548 13:04:53 nvmf_dif -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:28.548 13:04:53 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:28.548 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:28.548 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.315 ms 00:43:28.548 00:43:28.548 --- 10.0.0.2 ping statistics --- 00:43:28.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:28.548 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:43:28.548 13:04:53 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:28.548 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:28.548 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:43:28.548 00:43:28.548 --- 10.0.0.1 ping statistics --- 00:43:28.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:28.548 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:43:28.548 13:04:53 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:28.548 13:04:53 nvmf_dif -- nvmf/common.sh@446 -- # return 0 00:43:28.548 13:04:53 nvmf_dif -- nvmf/common.sh@474 -- # '[' iso == iso ']' 00:43:28.548 13:04:53 nvmf_dif -- nvmf/common.sh@475 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:43:30.453 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:43:30.712 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:43:30.712 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:43:30.712 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:43:30.712 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:43:30.712 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:43:30.712 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:43:30.712 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:43:30.712 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:43:30.712 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:43:30.712 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:43:30.712 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:43:30.712 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:43:30.712 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:43:30.712 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:43:30.712 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:43:30.712 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:43:30.712 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:43:30.712 13:04:56 nvmf_dif -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:30.712 13:04:56 nvmf_dif -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:43:30.712 13:04:56 nvmf_dif -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:43:30.712 13:04:56 nvmf_dif -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:30.712 13:04:56 nvmf_dif -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:43:30.712 13:04:56 nvmf_dif -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:43:30.971 13:04:56 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:43:30.971 13:04:56 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:43:30.971 13:04:56 nvmf_dif -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:43:30.971 13:04:56 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:30.971 13:04:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:30.971 13:04:56 nvmf_dif -- nvmf/common.sh@505 -- # nvmfpid=676768 00:43:30.971 13:04:56 nvmf_dif -- nvmf/common.sh@506 -- # waitforlisten 676768 00:43:30.971 13:04:56 nvmf_dif -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:43:30.971 13:04:56 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 676768 ']' 00:43:30.971 13:04:56 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:30.971 13:04:56 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:30.971 13:04:56 nvmf_dif -- common/autotest_common.sh@838 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:30.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:30.971 13:04:56 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:30.971 13:04:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:30.971 [2024-12-16 13:04:56.857624] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:43:30.971 [2024-12-16 13:04:56.857667] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:30.971 [2024-12-16 13:04:56.926647] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:30.971 [2024-12-16 13:04:56.965478] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:30.971 [2024-12-16 13:04:56.965518] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:30.971 [2024-12-16 13:04:56.965528] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:30.971 [2024-12-16 13:04:56.965534] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:30.971 [2024-12-16 13:04:56.965540] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:30.971 [2024-12-16 13:04:56.965557] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:43:31.231 13:04:57 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:31.231 13:04:57 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:43:31.231 13:04:57 nvmf_dif -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:43:31.231 13:04:57 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:31.231 13:04:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:31.231 13:04:57 nvmf_dif -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:31.231 13:04:57 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:43:31.231 13:04:57 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:43:31.231 13:04:57 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:31.231 13:04:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:31.231 [2024-12-16 13:04:57.093935] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:31.231 13:04:57 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:31.231 13:04:57 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:43:31.231 13:04:57 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:43:31.231 13:04:57 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:31.231 13:04:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:31.231 ************************************ 00:43:31.231 START TEST fio_dif_1_default 00:43:31.231 ************************************ 00:43:31.231 13:04:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:43:31.231 13:04:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:43:31.231 13:04:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:43:31.231 13:04:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in 
"$@" 00:43:31.231 13:04:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:43:31.231 13:04:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:43:31.231 13:04:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:43:31.231 13:04:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:31.231 13:04:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:31.231 bdev_null0 00:43:31.231 13:04:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:31.231 13:04:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:31.231 13:04:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:31.231 13:04:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:31.231 13:04:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:31.231 13:04:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:31.231 13:04:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:31.231 13:04:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:31.231 13:04:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:31.231 13:04:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:31.231 13:04:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:31.231 13:04:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:31.231 [2024-12-16 13:04:57.138179] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:31.231 13:04:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:31.231 13:04:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:43:31.231 13:04:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:43:31.231 13:04:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:43:31.231 13:04:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # config=() 00:43:31.231 13:04:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:31.231 13:04:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # local subsystem config 00:43:31.231 13:04:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:43:31.231 13:04:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:31.231 13:04:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:43:31.231 13:04:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:43:31.231 { 00:43:31.231 "params": { 00:43:31.231 "name": "Nvme$subsystem", 00:43:31.231 "trtype": "$TEST_TRANSPORT", 00:43:31.231 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:31.231 "adrfam": "ipv4", 00:43:31.231 "trsvcid": "$NVMF_PORT", 00:43:31.231 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:43:31.231 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:31.231 "hdgst": ${hdgst:-false}, 00:43:31.231 "ddgst": ${ddgst:-false} 00:43:31.231 }, 00:43:31.231 "method": "bdev_nvme_attach_controller" 00:43:31.231 } 00:43:31.231 EOF 00:43:31.231 )") 00:43:31.231 13:04:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:43:31.232 13:04:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:43:31.232 13:04:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:31.232 13:04:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:43:31.232 13:04:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:43:31.232 13:04:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:31.232 13:04:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:43:31.232 13:04:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:43:31.232 13:04:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:43:31.232 13:04:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@578 -- # cat 00:43:31.232 13:04:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:43:31.232 13:04:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:31.232 13:04:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:43:31.232 13:04:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:43:31.232 13:04:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:43:31.232 13:04:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # jq . 
00:43:31.232 13:04:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@581 -- # IFS=, 00:43:31.232 13:04:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:43:31.232 "params": { 00:43:31.232 "name": "Nvme0", 00:43:31.232 "trtype": "tcp", 00:43:31.232 "traddr": "10.0.0.2", 00:43:31.232 "adrfam": "ipv4", 00:43:31.232 "trsvcid": "4420", 00:43:31.232 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:31.232 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:31.232 "hdgst": false, 00:43:31.232 "ddgst": false 00:43:31.232 }, 00:43:31.232 "method": "bdev_nvme_attach_controller" 00:43:31.232 }' 00:43:31.232 13:04:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:43:31.232 13:04:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:43:31.232 13:04:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:43:31.232 13:04:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:31.232 13:04:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:43:31.232 13:04:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:43:31.232 13:04:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:43:31.232 13:04:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:43:31.232 13:04:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:31.232 13:04:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:31.490 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:43:31.490 fio-3.35 00:43:31.490 Starting 1 thread 00:43:43.698 00:43:43.699 filename0: (groupid=0, jobs=1): err= 0: pid=677017: Mon Dec 16 13:05:07 2024 00:43:43.699 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10012msec) 00:43:43.699 slat (nsec): min=5783, max=25833, avg=6087.08, stdev=817.20 00:43:43.699 clat (usec): min=40808, max=45694, avg=41014.46, stdev=319.81 00:43:43.699 lat (usec): min=40814, max=45720, avg=41020.54, stdev=320.28 00:43:43.699 clat percentiles (usec): 00:43:43.699 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:43:43.699 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:43:43.699 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:43:43.699 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45876], 99.95th=[45876], 00:43:43.699 | 99.99th=[45876] 00:43:43.699 bw ( KiB/s): min= 384, max= 416, per=99.50%, avg=388.80, stdev=11.72, samples=20 00:43:43.699 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:43:43.699 lat (msec) : 50=100.00% 00:43:43.699 cpu : usr=92.20%, sys=7.55%, ctx=16, majf=0, minf=0 00:43:43.699 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:43.699 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:43.699 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:43.699 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:43.699 latency : target=0, window=0, percentile=100.00%, depth=4 00:43:43.699 00:43:43.699 Run status group 0 (all jobs): 
00:43:43.699 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10012-10012msec 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:43.699 00:43:43.699 real 0m11.060s 00:43:43.699 user 0m15.847s 00:43:43.699 sys 0m1.035s 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:43.699 ************************************ 00:43:43.699 END TEST fio_dif_1_default 00:43:43.699 ************************************ 00:43:43.699 13:05:08 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:43:43.699 13:05:08 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:43:43.699 13:05:08 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:43.699 13:05:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:43.699 ************************************ 00:43:43.699 START TEST fio_dif_1_multi_subsystems 00:43:43.699 ************************************ 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:43.699 bdev_null0 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:43.699 [2024-12-16 13:05:08.245415] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:43.699 bdev_null1 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # config=() 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # local subsystem config 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:43:43.699 { 00:43:43.699 "params": { 00:43:43.699 "name": "Nvme$subsystem", 00:43:43.699 "trtype": "$TEST_TRANSPORT", 00:43:43.699 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:43.699 "adrfam": "ipv4", 00:43:43.699 "trsvcid": "$NVMF_PORT", 00:43:43.699 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:43.699 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:43.699 "hdgst": ${hdgst:-false}, 00:43:43.699 "ddgst": ${ddgst:-false} 00:43:43.699 }, 00:43:43.699 "method": "bdev_nvme_attach_controller" 00:43:43.699 } 00:43:43.699 EOF 00:43:43.699 )") 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # cat 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:43:43.699 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:43:43.700 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:43.700 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:43:43.700 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:43:43.700 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:43:43.700 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:43:43.700 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:43:43.700 { 00:43:43.700 "params": { 00:43:43.700 "name": "Nvme$subsystem", 00:43:43.700 "trtype": "$TEST_TRANSPORT", 00:43:43.700 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:43.700 "adrfam": "ipv4", 00:43:43.700 "trsvcid": "$NVMF_PORT", 00:43:43.700 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:43.700 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:43.700 "hdgst": ${hdgst:-false}, 00:43:43.700 "ddgst": ${ddgst:-false} 00:43:43.700 }, 00:43:43.700 "method": "bdev_nvme_attach_controller" 00:43:43.700 } 00:43:43.700 EOF 00:43:43.700 )") 00:43:43.700 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:43:43.700 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:43:43.700 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # cat 00:43:43.700 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # jq . 00:43:43.700 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@581 -- # IFS=, 00:43:43.700 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:43:43.700 "params": { 00:43:43.700 "name": "Nvme0", 00:43:43.700 "trtype": "tcp", 00:43:43.700 "traddr": "10.0.0.2", 00:43:43.700 "adrfam": "ipv4", 00:43:43.700 "trsvcid": "4420", 00:43:43.700 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:43.700 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:43.700 "hdgst": false, 00:43:43.700 "ddgst": false 00:43:43.700 }, 00:43:43.700 "method": "bdev_nvme_attach_controller" 00:43:43.700 },{ 00:43:43.700 "params": { 00:43:43.700 "name": "Nvme1", 00:43:43.700 "trtype": "tcp", 00:43:43.700 "traddr": "10.0.0.2", 00:43:43.700 "adrfam": "ipv4", 00:43:43.700 "trsvcid": "4420", 00:43:43.700 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:43.700 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:43.700 "hdgst": false, 00:43:43.700 "ddgst": false 00:43:43.700 }, 00:43:43.700 "method": "bdev_nvme_attach_controller" 00:43:43.700 }' 00:43:43.700 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:43:43.700 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:43:43.700 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:43:43.700 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:43.700 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:43:43.700 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:43:43.700 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:43:43.700 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:43:43.700 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:43.700 13:05:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:43.700 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:43:43.700 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:43:43.700 fio-3.35 00:43:43.700 Starting 2 threads 00:43:53.687 00:43:53.687 filename0: (groupid=0, jobs=1): err= 0: pid=678924: Mon Dec 16 13:05:19 2024 00:43:53.687 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10008msec) 00:43:53.687 slat (nsec): min=5929, max=28674, avg=7768.89, stdev=2621.79 00:43:53.687 clat (usec): min=40800, max=42016, avg=40991.99, stdev=122.34 00:43:53.687 lat (usec): min=40806, max=42028, avg=40999.75, stdev=122.85 00:43:53.687 clat percentiles (usec): 00:43:53.687 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:43:53.687 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:43:53.687 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:43:53.687 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:43:53.687 | 99.99th=[42206] 00:43:53.687 bw ( KiB/s): min= 384, max= 416, per=40.38%, avg=388.80, stdev=11.72, samples=20 00:43:53.687 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:43:53.687 lat (msec) : 50=100.00% 00:43:53.687 cpu : usr=96.41%, sys=3.35%, ctx=8, majf=0, minf=1 00:43:53.687 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:53.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:53.687 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:53.687 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:53.687 latency : target=0, window=0, percentile=100.00%, depth=4 00:43:53.687 filename1: (groupid=0, jobs=1): err= 0: pid=678925: Mon Dec 16 13:05:19 2024 00:43:53.687 read: IOPS=142, BW=571KiB/s (585kB/s)(5712KiB/10006msec) 00:43:53.687 slat (nsec): min=5926, max=29493, avg=7306.72, stdev=2195.65 00:43:53.687 clat (usec): min=394, max=42490, avg=28005.84, stdev=19037.53 00:43:53.687 lat (usec): min=401, max=42497, avg=28013.14, stdev=19037.27 00:43:53.687 clat percentiles (usec): 00:43:53.687 | 1.00th=[ 461], 5.00th=[ 490], 10.00th=[ 578], 20.00th=[ 611], 00:43:53.687 | 30.00th=[ 627], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:43:53.687 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:43:53.687 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:43:53.687 | 99.99th=[42730] 00:43:53.687 bw ( KiB/s): min= 384, max= 768, per=59.32%, avg=570.95, stdev=187.89, samples=19 00:43:53.687 iops : min= 96, max= 192, avg=142.74, stdev=46.97, samples=19 00:43:53.687 lat (usec) : 500=6.93%, 750=25.56% 00:43:53.687 lat (msec) : 50=67.51% 00:43:53.687 cpu : usr=96.85%, sys=2.91%, ctx=13, majf=0, minf=0 00:43:53.687 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:53.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:53.687 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:53.687 
issued rwts: total=1428,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:53.687 latency : target=0, window=0, percentile=100.00%, depth=4 00:43:53.687 00:43:53.687 Run status group 0 (all jobs): 00:43:53.687 READ: bw=961KiB/s (984kB/s), 390KiB/s-571KiB/s (399kB/s-585kB/s), io=9616KiB (9847kB), run=10006-10008msec 00:43:53.687 13:05:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:43:53.687 13:05:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:43:53.687 13:05:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:43:53.687 13:05:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:53.687 13:05:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:43:53.687 13:05:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:53.687 13:05:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:53.687 13:05:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:53.687 13:05:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:53.687 13:05:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:53.687 13:05:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:53.687 13:05:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:53.687 13:05:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:53.687 13:05:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:43:53.687 13:05:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:43:53.687 13:05:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:43:53.687 13:05:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:53.687 13:05:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:53.687 13:05:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:53.687 13:05:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:53.687 13:05:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:43:53.687 13:05:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:53.687 13:05:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:53.687 13:05:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:53.687 00:43:53.687 real 0m11.358s 00:43:53.687 user 0m26.493s 00:43:53.687 sys 0m0.933s 00:43:53.687 13:05:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:53.687 13:05:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:53.687 ************************************ 00:43:53.687 END TEST fio_dif_1_multi_subsystems 00:43:53.687 ************************************ 00:43:53.687 13:05:19 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:43:53.687 13:05:19 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 
-le 1 ']' 00:43:53.687 13:05:19 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:53.687 13:05:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:53.687 ************************************ 00:43:53.687 START TEST fio_dif_rand_params 00:43:53.687 ************************************ 00:43:53.687 13:05:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:43:53.687 13:05:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:43:53.687 13:05:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:43:53.687 13:05:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:43:53.687 13:05:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:43:53.687 13:05:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:43:53.687 13:05:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:43:53.687 13:05:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:43:53.687 13:05:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:43:53.687 13:05:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:43:53.687 13:05:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:53.687 13:05:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:43:53.687 13:05:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:43:53.687 13:05:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:43:53.687 13:05:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:53.687 13:05:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:53.687 bdev_null0 00:43:53.687 13:05:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:53.687 13:05:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:53.687 13:05:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:53.687 13:05:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:53.687 13:05:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:53.687 13:05:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:53.687 13:05:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:53.687 13:05:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:53.687 13:05:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:53.687 13:05:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:53.687 13:05:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:53.687 13:05:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:53.687 [2024-12-16 13:05:19.648151] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:53.687 13:05:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:53.687 13:05:19 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:43:53.687 13:05:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:43:53.687 13:05:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:43:53.687 13:05:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:43:53.687 13:05:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:53.687 13:05:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:43:53.687 13:05:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:53.687 13:05:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:43:53.687 13:05:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:43:53.687 13:05:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:43:53.687 { 00:43:53.687 "params": { 00:43:53.687 "name": "Nvme$subsystem", 00:43:53.687 "trtype": "$TEST_TRANSPORT", 00:43:53.687 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:53.687 "adrfam": "ipv4", 00:43:53.687 "trsvcid": "$NVMF_PORT", 00:43:53.687 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:53.687 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:53.687 "hdgst": ${hdgst:-false}, 00:43:53.687 "ddgst": ${ddgst:-false} 00:43:53.688 }, 00:43:53.688 "method": "bdev_nvme_attach_controller" 00:43:53.688 } 00:43:53.688 EOF 00:43:53.688 )") 00:43:53.688 13:05:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:43:53.688 13:05:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:43:53.688 13:05:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:53.688 13:05:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:43:53.688 13:05:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:43:53.688 13:05:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:53.688 13:05:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:43:53.688 13:05:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:43:53.688 13:05:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:43:53.688 13:05:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:43:53.688 13:05:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:43:53.688 13:05:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:53.688 13:05:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:53.688 13:05:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:43:53.688 13:05:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:43:53.688 13:05:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 
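The xtrace above shows gen_nvmf_target_json assembling one JSON fragment per subsystem from a heredoc template and piping the joined result through jq before fio reads it as --spdk_json_conf. The fragments appear to be entries for the "bdev" subsystem's config array of an ordinary SPDK JSON configuration file, so the same attachment can be reproduced by hand. A minimal sketch for the single-subsystem case traced here, assuming only what the log shows (the 10.0.0.2:4420 endpoint and the cnode0/host0 NQNs); the output path is illustrative:

    # Sketch: hand-written equivalent of the generated fio/SPDK JSON config.
    # Endpoint and NQNs mirror the log; /tmp/bdev.json is an assumed filename.
    cat > /tmp/bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF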
00:43:53.688 13:05:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:43:53.688 13:05:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:43:53.688 "params": { 00:43:53.688 "name": "Nvme0", 00:43:53.688 "trtype": "tcp", 00:43:53.688 "traddr": "10.0.0.2", 00:43:53.688 "adrfam": "ipv4", 00:43:53.688 "trsvcid": "4420", 00:43:53.688 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:53.688 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:53.688 "hdgst": false, 00:43:53.688 "ddgst": false 00:43:53.688 }, 00:43:53.688 "method": "bdev_nvme_attach_controller" 00:43:53.688 }' 00:43:53.688 13:05:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:43:53.688 13:05:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:43:53.688 13:05:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:43:53.688 13:05:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:53.688 13:05:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:43:53.688 13:05:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:43:53.688 13:05:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:43:53.688 13:05:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:43:53.688 13:05:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:53.688 13:05:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:54.254 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:43:54.254 ... 
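fio is handed two pipes here: /dev/fd/62 carries the JSON configuration shown above, while /dev/fd/61 carries the job file produced by gen_fio_conf, which this log does not echo. From the job banner above (randread, 128KiB blocks, iodepth=3) and the parameters set earlier in the trace (numjobs=3, runtime=5), a roughly equivalent standalone invocation would look like the following sketch; the job-file contents and path are illustrative, thread=1 is required by the spdk_bdev engine, and the plugin path is the one resolved via LD_PRELOAD in the log:

    # Sketch: standalone reproduction of the traced fio run against bdev Nvme0n1
    # (the namespace bdev created by the attach config above).
    cat > /tmp/dif.fio <<'EOF'
    [global]
    thread=1
    direct=1
    time_based=1
    runtime=5
    rw=randread
    bs=128k
    iodepth=3
    numjobs=3

    [filename0]
    filename=Nvme0n1
    EOF

    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/bdev.json /tmp/dif.fio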
00:43:54.254 fio-3.35 00:43:54.254 Starting 3 threads 00:43:59.528 00:43:59.528 filename0: (groupid=0, jobs=1): err= 0: pid=680827: Mon Dec 16 13:05:25 2024 00:43:59.528 read: IOPS=337, BW=42.2MiB/s (44.3MB/s)(213MiB/5047msec) 00:43:59.528 slat (nsec): min=6083, max=31127, avg=10686.81, stdev=1976.36 00:43:59.528 clat (usec): min=3204, max=51787, avg=8842.72, stdev=4698.27 00:43:59.528 lat (usec): min=3210, max=51796, avg=8853.41, stdev=4698.29 00:43:59.528 clat percentiles (usec): 00:43:59.528 | 1.00th=[ 5407], 5.00th=[ 6259], 10.00th=[ 6915], 20.00th=[ 7504], 00:43:59.528 | 30.00th=[ 7832], 40.00th=[ 8094], 50.00th=[ 8356], 60.00th=[ 8586], 00:43:59.528 | 70.00th=[ 8979], 80.00th=[ 9241], 90.00th=[ 9765], 95.00th=[10159], 00:43:59.528 | 99.00th=[45876], 99.50th=[48497], 99.90th=[51643], 99.95th=[51643], 00:43:59.528 | 99.99th=[51643] 00:43:59.528 bw ( KiB/s): min=32768, max=47616, per=35.56%, avg=43571.20, stdev=4418.92, samples=10 00:43:59.528 iops : min= 256, max= 372, avg=340.40, stdev=34.52, samples=10 00:43:59.528 lat (msec) : 4=0.47%, 10=92.96%, 20=5.22%, 50=1.17%, 100=0.18% 00:43:59.528 cpu : usr=94.00%, sys=5.71%, ctx=10, majf=0, minf=2 00:43:59.528 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:59.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:59.528 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:59.528 issued rwts: total=1705,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:59.528 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:59.528 filename0: (groupid=0, jobs=1): err= 0: pid=680828: Mon Dec 16 13:05:25 2024 00:43:59.528 read: IOPS=317, BW=39.7MiB/s (41.6MB/s)(200MiB/5044msec) 00:43:59.528 slat (nsec): min=6134, max=25125, avg=11451.57, stdev=2128.08 00:43:59.528 clat (usec): min=3310, max=49264, avg=9417.31, stdev=4593.13 00:43:59.528 lat (usec): min=3317, max=49276, avg=9428.76, stdev=4593.33 00:43:59.528 clat percentiles (usec): 00:43:59.528 | 1.00th=[ 3654], 5.00th=[ 6063], 10.00th=[ 6849], 20.00th=[ 7963], 00:43:59.528 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9503], 00:43:59.528 | 70.00th=[ 9765], 80.00th=[10159], 90.00th=[10683], 95.00th=[11076], 00:43:59.528 | 99.00th=[46924], 99.50th=[47973], 99.90th=[49021], 99.95th=[49021], 00:43:59.528 | 99.99th=[49021] 00:43:59.528 bw ( KiB/s): min=34048, max=43776, per=33.39%, avg=40908.80, stdev=3057.27, samples=10 00:43:59.528 iops : min= 266, max= 342, avg=319.60, stdev=23.88, samples=10 00:43:59.528 lat (msec) : 4=1.38%, 10=73.94%, 20=23.44%, 50=1.25% 00:43:59.528 cpu : usr=94.80%, sys=4.90%, ctx=7, majf=0, minf=10 00:43:59.529 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:59.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:59.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:59.529 issued rwts: total=1600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:59.529 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:59.529 filename0: (groupid=0, jobs=1): err= 0: pid=680829: Mon Dec 16 13:05:25 2024 00:43:59.529 read: IOPS=302, BW=37.8MiB/s (39.7MB/s)(191MiB/5044msec) 00:43:59.529 slat (nsec): min=6153, max=29335, avg=11279.60, stdev=2045.73 00:43:59.529 clat (usec): min=3085, max=50905, avg=9873.95, stdev=4971.52 00:43:59.529 lat (usec): min=3092, max=50918, avg=9885.23, stdev=4971.68 00:43:59.529 clat percentiles (usec): 00:43:59.529 | 1.00th=[ 3949], 5.00th=[ 6259], 10.00th=[ 7373], 20.00th=[ 
8291], 00:43:59.529 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[ 9896], 00:43:59.529 | 70.00th=[10159], 80.00th=[10552], 90.00th=[11076], 95.00th=[11600], 00:43:59.529 | 99.00th=[46924], 99.50th=[49546], 99.90th=[50070], 99.95th=[51119], 00:43:59.529 | 99.99th=[51119] 00:43:59.529 bw ( KiB/s): min=33792, max=42240, per=31.84%, avg=39014.40, stdev=3124.18, samples=10 00:43:59.529 iops : min= 264, max= 330, avg=304.80, stdev=24.41, samples=10 00:43:59.529 lat (msec) : 4=1.11%, 10=62.58%, 20=34.80%, 50=1.25%, 100=0.26% 00:43:59.529 cpu : usr=94.27%, sys=5.41%, ctx=8, majf=0, minf=9 00:43:59.529 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:59.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:59.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:59.529 issued rwts: total=1526,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:59.529 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:59.529 00:43:59.529 Run status group 0 (all jobs): 00:43:59.529 READ: bw=120MiB/s (125MB/s), 37.8MiB/s-42.2MiB/s (39.7MB/s-44.3MB/s), io=604MiB (633MB), run=5044-5047msec 00:43:59.788 13:05:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:43:59.788 13:05:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:43:59.788 13:05:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:59.788 13:05:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:59.788 13:05:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:43:59.788 13:05:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:59.788 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:59.788 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:59.788 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:59.788 13:05:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:59.788 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:59.788 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:59.788 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:59.788 13:05:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:43:59.788 13:05:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:43:59.788 13:05:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:43:59.788 13:05:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:43:59.788 13:05:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:43:59.788 13:05:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:43:59.788 13:05:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:43:59.788 13:05:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:43:59.788 13:05:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:59.788 13:05:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:43:59.788 13:05:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:43:59.788 13:05:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # 
rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:43:59.788 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:59.788 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:59.788 bdev_null0 00:43:59.788 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:59.788 13:05:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:59.788 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:00.047 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:00.047 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:00.047 13:05:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:44:00.047 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:00.047 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:00.047 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:00.048 [2024-12-16 13:05:25.871760] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:00.048 bdev_null1 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:00.048 bdev_null2 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:00.048 13:05:25 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:44:00.048 { 00:44:00.048 "params": { 00:44:00.048 "name": "Nvme$subsystem", 00:44:00.048 "trtype": "$TEST_TRANSPORT", 00:44:00.048 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:00.048 "adrfam": "ipv4", 00:44:00.048 "trsvcid": "$NVMF_PORT", 00:44:00.048 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:00.048 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:00.048 "hdgst": ${hdgst:-false}, 00:44:00.048 "ddgst": ${ddgst:-false} 00:44:00.048 }, 00:44:00.048 "method": "bdev_nvme_attach_controller" 00:44:00.048 } 00:44:00.048 EOF 00:44:00.048 )") 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:44:00.048 { 00:44:00.048 "params": { 00:44:00.048 "name": "Nvme$subsystem", 00:44:00.048 "trtype": "$TEST_TRANSPORT", 00:44:00.048 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:00.048 "adrfam": "ipv4", 00:44:00.048 "trsvcid": "$NVMF_PORT", 00:44:00.048 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:00.048 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:00.048 "hdgst": ${hdgst:-false}, 00:44:00.048 "ddgst": ${ddgst:-false} 00:44:00.048 }, 00:44:00.048 "method": "bdev_nvme_attach_controller" 00:44:00.048 } 00:44:00.048 EOF 00:44:00.048 )") 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:44:00.048 13:05:25 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:44:00.048 { 00:44:00.048 "params": { 00:44:00.048 "name": "Nvme$subsystem", 00:44:00.048 "trtype": "$TEST_TRANSPORT", 00:44:00.048 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:00.048 "adrfam": "ipv4", 00:44:00.048 "trsvcid": "$NVMF_PORT", 00:44:00.048 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:00.048 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:00.048 "hdgst": ${hdgst:-false}, 00:44:00.048 "ddgst": ${ddgst:-false} 00:44:00.048 }, 00:44:00.048 "method": "bdev_nvme_attach_controller" 00:44:00.048 } 00:44:00.048 EOF 00:44:00.048 )") 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:44:00.048 13:05:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:44:00.048 "params": { 00:44:00.048 "name": "Nvme0", 00:44:00.048 "trtype": "tcp", 00:44:00.048 "traddr": "10.0.0.2", 00:44:00.048 "adrfam": "ipv4", 00:44:00.048 "trsvcid": "4420", 00:44:00.048 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:00.048 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:00.048 "hdgst": false, 00:44:00.048 "ddgst": false 00:44:00.048 }, 00:44:00.048 "method": "bdev_nvme_attach_controller" 00:44:00.048 },{ 00:44:00.048 "params": { 00:44:00.048 "name": "Nvme1", 00:44:00.048 "trtype": "tcp", 00:44:00.048 "traddr": "10.0.0.2", 00:44:00.048 "adrfam": "ipv4", 00:44:00.048 "trsvcid": "4420", 00:44:00.048 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:00.048 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:00.048 "hdgst": false, 00:44:00.048 "ddgst": false 00:44:00.048 }, 00:44:00.048 "method": "bdev_nvme_attach_controller" 00:44:00.048 },{ 00:44:00.048 "params": { 00:44:00.048 "name": "Nvme2", 00:44:00.048 "trtype": "tcp", 00:44:00.048 "traddr": "10.0.0.2", 00:44:00.048 "adrfam": "ipv4", 00:44:00.049 "trsvcid": "4420", 00:44:00.049 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:44:00.049 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:44:00.049 "hdgst": false, 00:44:00.049 "ddgst": false 00:44:00.049 }, 00:44:00.049 "method": "bdev_nvme_attach_controller" 00:44:00.049 }' 00:44:00.049 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:44:00.049 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:44:00.049 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:44:00.049 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:00.049 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:44:00.049 13:05:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:44:00.049 13:05:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:44:00.049 
13:05:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:44:00.049 13:05:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:44:00.049 13:05:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:00.307 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:44:00.307 ... 00:44:00.307 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:44:00.307 ... 00:44:00.307 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:44:00.307 ... 00:44:00.307 fio-3.35 00:44:00.307 Starting 24 threads 00:44:12.507 00:44:12.507 filename0: (groupid=0, jobs=1): err= 0: pid=681851: Mon Dec 16 13:05:37 2024 00:44:12.507 read: IOPS=70, BW=283KiB/s (289kB/s)(2872KiB/10159msec) 00:44:12.507 slat (nsec): min=7274, max=31047, avg=9813.72, stdev=2866.83 00:44:12.507 clat (msec): min=41, max=264, avg=226.04, stdev=49.41 00:44:12.507 lat (msec): min=41, max=264, avg=226.05, stdev=49.41 00:44:12.507 clat percentiles (msec): 00:44:12.507 | 1.00th=[ 43], 5.00th=[ 102], 10.00th=[ 188], 20.00th=[ 205], 00:44:12.507 | 30.00th=[ 218], 40.00th=[ 236], 50.00th=[ 249], 60.00th=[ 253], 00:44:12.507 | 70.00th=[ 257], 80.00th=[ 257], 90.00th=[ 259], 95.00th=[ 259], 00:44:12.507 | 99.00th=[ 264], 99.50th=[ 264], 99.90th=[ 264], 99.95th=[ 264], 00:44:12.507 | 99.99th=[ 264] 00:44:12.507 bw ( KiB/s): min= 240, max= 512, per=4.72%, avg=280.80, stdev=64.52, samples=20 00:44:12.507 iops : min= 60, max= 128, avg=70.20, stdev=16.13, samples=20 00:44:12.507 lat (msec) : 50=2.23%, 100=2.23%, 250=50.70%, 500=44.85% 00:44:12.507 cpu : usr=98.54%, sys=1.07%, ctx=11, majf=0, minf=18 00:44:12.507 IO depths : 1=0.4%, 2=6.7%, 4=25.1%, 8=55.8%, 16=12.0%, 32=0.0%, >=64=0.0% 00:44:12.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:12.507 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:12.507 issued rwts: total=718,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:12.507 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:12.507 filename0: (groupid=0, jobs=1): err= 0: pid=681852: Mon Dec 16 13:05:37 2024 00:44:12.507 read: IOPS=62, BW=249KiB/s (255kB/s)(2520KiB/10119msec) 00:44:12.507 slat (nsec): min=7324, max=36991, avg=9765.96, stdev=3684.65 00:44:12.507 clat (msec): min=185, max=434, avg=256.50, stdev=49.84 00:44:12.507 lat (msec): min=185, max=434, avg=256.51, stdev=49.84 00:44:12.507 clat percentiles (msec): 00:44:12.507 | 1.00th=[ 186], 5.00th=[ 203], 10.00th=[ 207], 20.00th=[ 218], 00:44:12.507 | 30.00th=[ 230], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 257], 00:44:12.507 | 70.00th=[ 259], 80.00th=[ 262], 90.00th=[ 313], 95.00th=[ 376], 00:44:12.507 | 99.00th=[ 435], 99.50th=[ 435], 99.90th=[ 435], 99.95th=[ 435], 00:44:12.507 | 99.99th=[ 435] 00:44:12.507 bw ( KiB/s): min= 128, max= 368, per=4.13%, avg=245.60, stdev=51.46, samples=20 00:44:12.507 iops : min= 32, max= 92, avg=61.40, stdev=12.87, samples=20 00:44:12.507 lat (msec) : 250=40.95%, 500=59.05% 00:44:12.507 cpu : usr=98.49%, sys=1.12%, ctx=13, majf=0, minf=20 00:44:12.507 IO depths : 1=1.0%, 2=2.5%, 4=10.5%, 8=74.1%, 16=11.9%, 32=0.0%, >=64=0.0% 00:44:12.507 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:12.507 complete : 0=0.0%, 4=89.8%, 8=5.1%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:12.507 issued rwts: total=630,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:12.507 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:12.507 filename0: (groupid=0, jobs=1): err= 0: pid=681853: Mon Dec 16 13:05:37 2024 00:44:12.508 read: IOPS=44, BW=178KiB/s (182kB/s)(1792KiB/10094msec) 00:44:12.508 slat (nsec): min=6462, max=74912, avg=12475.79, stdev=9553.94 00:44:12.508 clat (msec): min=197, max=561, avg=360.39, stdev=57.34 00:44:12.508 lat (msec): min=197, max=561, avg=360.40, stdev=57.33 00:44:12.508 clat percentiles (msec): 00:44:12.508 | 1.00th=[ 205], 5.00th=[ 236], 10.00th=[ 292], 20.00th=[ 321], 00:44:12.508 | 30.00th=[ 342], 40.00th=[ 363], 50.00th=[ 372], 60.00th=[ 384], 00:44:12.508 | 70.00th=[ 388], 80.00th=[ 397], 90.00th=[ 401], 95.00th=[ 430], 00:44:12.508 | 99.00th=[ 510], 99.50th=[ 510], 99.90th=[ 558], 99.95th=[ 558], 00:44:12.508 | 99.99th=[ 558] 00:44:12.508 bw ( KiB/s): min= 112, max= 256, per=2.90%, avg=172.80, stdev=61.33, samples=20 00:44:12.508 iops : min= 28, max= 64, avg=43.20, stdev=15.33, samples=20 00:44:12.508 lat (msec) : 250=5.36%, 500=92.41%, 750=2.23% 00:44:12.508 cpu : usr=98.80%, sys=0.81%, ctx=17, majf=0, minf=13 00:44:12.508 IO depths : 1=3.3%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.2%, 32=0.0%, >=64=0.0% 00:44:12.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:12.508 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:12.508 issued rwts: total=448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:12.508 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:12.508 filename0: (groupid=0, jobs=1): err= 0: pid=681854: Mon Dec 16 13:05:37 2024 00:44:12.508 read: IOPS=62, BW=249KiB/s (255kB/s)(2528KiB/10135msec) 00:44:12.508 slat (nsec): min=5854, max=54695, avg=11819.06, stdev=7646.47 00:44:12.508 clat (msec): min=159, max=400, avg=255.68, stdev=55.41 00:44:12.508 lat (msec): min=159, max=400, avg=255.69, stdev=55.41 00:44:12.508 clat percentiles (msec): 00:44:12.508 | 1.00th=[ 174], 5.00th=[ 186], 10.00th=[ 209], 20.00th=[ 213], 00:44:12.508 | 30.00th=[ 224], 40.00th=[ 228], 50.00th=[ 241], 60.00th=[ 257], 00:44:12.508 | 70.00th=[ 271], 80.00th=[ 284], 90.00th=[ 372], 95.00th=[ 384], 00:44:12.508 | 99.00th=[ 401], 99.50th=[ 401], 99.90th=[ 401], 99.95th=[ 401], 00:44:12.508 | 99.99th=[ 401] 00:44:12.508 bw ( KiB/s): min= 176, max= 336, per=4.14%, avg=246.40, stdev=44.78, samples=20 00:44:12.508 iops : min= 44, max= 84, avg=61.60, stdev=11.19, samples=20 00:44:12.508 lat (msec) : 250=52.85%, 500=47.15% 00:44:12.508 cpu : usr=98.74%, sys=0.83%, ctx=35, majf=0, minf=24 00:44:12.508 IO depths : 1=0.3%, 2=1.4%, 4=8.4%, 8=76.9%, 16=13.0%, 32=0.0%, >=64=0.0% 00:44:12.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:12.508 complete : 0=0.0%, 4=89.1%, 8=6.4%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:12.508 issued rwts: total=632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:12.508 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:12.508 filename0: (groupid=0, jobs=1): err= 0: pid=681855: Mon Dec 16 13:05:37 2024 00:44:12.508 read: IOPS=63, BW=255KiB/s (261kB/s)(2584KiB/10135msec) 00:44:12.508 slat (nsec): min=5434, max=20213, avg=8861.37, stdev=2037.84 00:44:12.508 clat (msec): min=152, max=393, avg=249.87, stdev=44.21 00:44:12.508 lat (msec): min=152, max=393, avg=249.88, stdev=44.21 00:44:12.508 clat percentiles 
(msec): 00:44:12.508 | 1.00th=[ 167], 5.00th=[ 201], 10.00th=[ 205], 20.00th=[ 211], 00:44:12.508 | 30.00th=[ 232], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 255], 00:44:12.508 | 70.00th=[ 257], 80.00th=[ 259], 90.00th=[ 300], 95.00th=[ 372], 00:44:12.508 | 99.00th=[ 384], 99.50th=[ 393], 99.90th=[ 393], 99.95th=[ 393], 00:44:12.508 | 99.99th=[ 393] 00:44:12.508 bw ( KiB/s): min= 128, max= 336, per=4.25%, avg=252.00, stdev=49.49, samples=20 00:44:12.508 iops : min= 32, max= 84, avg=63.00, stdev=12.37, samples=20 00:44:12.508 lat (msec) : 250=42.41%, 500=57.59% 00:44:12.508 cpu : usr=98.68%, sys=0.93%, ctx=15, majf=0, minf=18 00:44:12.508 IO depths : 1=0.3%, 2=0.8%, 4=7.1%, 8=79.3%, 16=12.5%, 32=0.0%, >=64=0.0% 00:44:12.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:12.508 complete : 0=0.0%, 4=88.9%, 8=6.0%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:12.508 issued rwts: total=646,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:12.508 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:12.508 filename0: (groupid=0, jobs=1): err= 0: pid=681856: Mon Dec 16 13:05:37 2024 00:44:12.508 read: IOPS=63, BW=256KiB/s (262kB/s)(2592KiB/10142msec) 00:44:12.508 slat (nsec): min=5919, max=35190, avg=10387.88, stdev=4335.97 00:44:12.508 clat (msec): min=177, max=398, avg=249.81, stdev=44.29 00:44:12.508 lat (msec): min=177, max=398, avg=249.82, stdev=44.28 00:44:12.508 clat percentiles (msec): 00:44:12.508 | 1.00th=[ 182], 5.00th=[ 190], 10.00th=[ 201], 20.00th=[ 215], 00:44:12.508 | 30.00th=[ 222], 40.00th=[ 239], 50.00th=[ 251], 60.00th=[ 257], 00:44:12.508 | 70.00th=[ 259], 80.00th=[ 264], 90.00th=[ 292], 95.00th=[ 376], 00:44:12.508 | 99.00th=[ 388], 99.50th=[ 401], 99.90th=[ 401], 99.95th=[ 401], 00:44:12.508 | 99.99th=[ 401] 00:44:12.508 bw ( KiB/s): min= 176, max= 304, per=4.25%, avg=252.80, stdev=36.19, samples=20 00:44:12.508 iops : min= 44, max= 76, avg=63.20, stdev= 9.05, samples=20 00:44:12.508 lat (msec) : 250=49.07%, 500=50.93% 00:44:12.508 cpu : usr=98.57%, sys=1.05%, ctx=14, majf=0, minf=37 00:44:12.508 IO depths : 1=0.5%, 2=1.5%, 4=8.8%, 8=76.7%, 16=12.5%, 32=0.0%, >=64=0.0% 00:44:12.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:12.508 complete : 0=0.0%, 4=89.3%, 8=5.8%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:12.508 issued rwts: total=648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:12.508 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:12.508 filename0: (groupid=0, jobs=1): err= 0: pid=681857: Mon Dec 16 13:05:37 2024 00:44:12.508 read: IOPS=63, BW=255KiB/s (261kB/s)(2584KiB/10124msec) 00:44:12.508 slat (nsec): min=6969, max=27000, avg=8972.73, stdev=2158.05 00:44:12.508 clat (msec): min=165, max=426, avg=249.68, stdev=44.80 00:44:12.508 lat (msec): min=165, max=426, avg=249.69, stdev=44.80 00:44:12.508 clat percentiles (msec): 00:44:12.508 | 1.00th=[ 171], 5.00th=[ 199], 10.00th=[ 203], 20.00th=[ 211], 00:44:12.508 | 30.00th=[ 222], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 257], 00:44:12.508 | 70.00th=[ 257], 80.00th=[ 259], 90.00th=[ 300], 95.00th=[ 372], 00:44:12.508 | 99.00th=[ 388], 99.50th=[ 393], 99.90th=[ 426], 99.95th=[ 426], 00:44:12.508 | 99.99th=[ 426] 00:44:12.508 bw ( KiB/s): min= 128, max= 336, per=4.25%, avg=252.00, stdev=45.22, samples=20 00:44:12.508 iops : min= 32, max= 84, avg=63.00, stdev=11.30, samples=20 00:44:12.508 lat (msec) : 250=42.11%, 500=57.89% 00:44:12.508 cpu : usr=98.62%, sys=0.99%, ctx=16, majf=0, minf=23 00:44:12.508 IO depths : 1=0.2%, 2=0.8%, 4=7.6%, 
8=78.8%, 16=12.7%, 32=0.0%, >=64=0.0% 00:44:12.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:12.508 complete : 0=0.0%, 4=89.0%, 8=5.9%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:12.508 issued rwts: total=646,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:12.508 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:12.508 filename0: (groupid=0, jobs=1): err= 0: pid=681858: Mon Dec 16 13:05:37 2024 00:44:12.508 read: IOPS=67, BW=271KiB/s (277kB/s)(2744KiB/10142msec) 00:44:12.508 slat (nsec): min=5958, max=27149, avg=9650.97, stdev=3035.64 00:44:12.508 clat (msec): min=159, max=264, avg=236.22, stdev=25.95 00:44:12.508 lat (msec): min=159, max=264, avg=236.23, stdev=25.95 00:44:12.508 clat percentiles (msec): 00:44:12.508 | 1.00th=[ 161], 5.00th=[ 188], 10.00th=[ 199], 20.00th=[ 207], 00:44:12.508 | 30.00th=[ 222], 40.00th=[ 241], 50.00th=[ 249], 60.00th=[ 253], 00:44:12.508 | 70.00th=[ 257], 80.00th=[ 259], 90.00th=[ 259], 95.00th=[ 259], 00:44:12.508 | 99.00th=[ 266], 99.50th=[ 266], 99.90th=[ 266], 99.95th=[ 266], 00:44:12.508 | 99.99th=[ 266] 00:44:12.508 bw ( KiB/s): min= 240, max= 368, per=4.52%, avg=268.00, stdev=34.77, samples=20 00:44:12.508 iops : min= 60, max= 92, avg=67.00, stdev= 8.69, samples=20 00:44:12.508 lat (msec) : 250=53.06%, 500=46.94% 00:44:12.508 cpu : usr=98.84%, sys=0.77%, ctx=14, majf=0, minf=22 00:44:12.508 IO depths : 1=0.3%, 2=6.6%, 4=25.1%, 8=56.0%, 16=12.1%, 32=0.0%, >=64=0.0% 00:44:12.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:12.508 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:12.508 issued rwts: total=686,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:12.508 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:12.508 filename1: (groupid=0, jobs=1): err= 0: pid=681859: Mon Dec 16 13:05:37 2024 00:44:12.508 read: IOPS=60, BW=243KiB/s (249kB/s)(2456KiB/10120msec) 00:44:12.508 slat (nsec): min=7321, max=30219, avg=9540.50, stdev=2992.54 00:44:12.508 clat (msec): min=183, max=434, avg=263.23, stdev=59.34 00:44:12.508 lat (msec): min=183, max=434, avg=263.24, stdev=59.34 00:44:12.508 clat percentiles (msec): 00:44:12.508 | 1.00th=[ 184], 5.00th=[ 203], 10.00th=[ 205], 20.00th=[ 218], 00:44:12.508 | 30.00th=[ 232], 40.00th=[ 236], 50.00th=[ 249], 60.00th=[ 257], 00:44:12.508 | 70.00th=[ 275], 80.00th=[ 284], 90.00th=[ 376], 95.00th=[ 393], 00:44:12.508 | 99.00th=[ 435], 99.50th=[ 435], 99.90th=[ 435], 99.95th=[ 435], 00:44:12.508 | 99.99th=[ 435] 00:44:12.508 bw ( KiB/s): min= 128, max= 304, per=4.03%, avg=239.20, stdev=44.19, samples=20 00:44:12.508 iops : min= 32, max= 76, avg=59.80, stdev=11.05, samples=20 00:44:12.508 lat (msec) : 250=50.49%, 500=49.51% 00:44:12.508 cpu : usr=98.54%, sys=1.07%, ctx=14, majf=0, minf=24 00:44:12.508 IO depths : 1=0.5%, 2=1.8%, 4=9.1%, 8=75.9%, 16=12.7%, 32=0.0%, >=64=0.0% 00:44:12.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:12.508 complete : 0=0.0%, 4=89.3%, 8=6.1%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:12.508 issued rwts: total=614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:12.508 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:12.508 filename1: (groupid=0, jobs=1): err= 0: pid=681860: Mon Dec 16 13:05:37 2024 00:44:12.508 read: IOPS=69, BW=277KiB/s (284kB/s)(2816KiB/10158msec) 00:44:12.508 slat (nsec): min=7228, max=31502, avg=10121.01, stdev=3275.78 00:44:12.508 clat (msec): min=41, max=271, avg=230.38, stdev=45.98 00:44:12.508 lat (msec): min=41, 
max=271, avg=230.39, stdev=45.98 00:44:12.508 clat percentiles (msec): 00:44:12.508 | 1.00th=[ 43], 5.00th=[ 157], 10.00th=[ 201], 20.00th=[ 213], 00:44:12.508 | 30.00th=[ 222], 40.00th=[ 241], 50.00th=[ 251], 60.00th=[ 253], 00:44:12.508 | 70.00th=[ 255], 80.00th=[ 257], 90.00th=[ 259], 95.00th=[ 262], 00:44:12.508 | 99.00th=[ 262], 99.50th=[ 264], 99.90th=[ 271], 99.95th=[ 271], 00:44:12.508 | 99.99th=[ 271] 00:44:12.508 bw ( KiB/s): min= 256, max= 512, per=4.63%, avg=275.20, stdev=62.64, samples=20 00:44:12.508 iops : min= 64, max= 128, avg=68.80, stdev=15.66, samples=20 00:44:12.508 lat (msec) : 50=1.99%, 100=2.84%, 250=40.91%, 500=54.26% 00:44:12.508 cpu : usr=98.71%, sys=0.92%, ctx=14, majf=0, minf=25 00:44:12.508 IO depths : 1=0.4%, 2=6.7%, 4=25.0%, 8=55.8%, 16=12.1%, 32=0.0%, >=64=0.0% 00:44:12.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:12.508 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:12.509 issued rwts: total=704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:12.509 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:12.509 filename1: (groupid=0, jobs=1): err= 0: pid=681861: Mon Dec 16 13:05:37 2024 00:44:12.509 read: IOPS=63, BW=254KiB/s (261kB/s)(2576KiB/10124msec) 00:44:12.509 slat (nsec): min=7331, max=19061, avg=9029.16, stdev=2167.70 00:44:12.509 clat (msec): min=154, max=430, avg=250.89, stdev=41.15 00:44:12.509 lat (msec): min=154, max=430, avg=250.90, stdev=41.15 00:44:12.509 clat percentiles (msec): 00:44:12.509 | 1.00th=[ 184], 5.00th=[ 201], 10.00th=[ 205], 20.00th=[ 218], 00:44:12.509 | 30.00th=[ 236], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 257], 00:44:12.509 | 70.00th=[ 259], 80.00th=[ 259], 90.00th=[ 296], 95.00th=[ 368], 00:44:12.509 | 99.00th=[ 384], 99.50th=[ 393], 99.90th=[ 430], 99.95th=[ 430], 00:44:12.509 | 99.99th=[ 430] 00:44:12.509 bw ( KiB/s): min= 128, max= 336, per=4.23%, avg=251.20, stdev=46.17, samples=20 00:44:12.509 iops : min= 32, max= 84, avg=62.80, stdev=11.54, samples=20 00:44:12.509 lat (msec) : 250=43.48%, 500=56.52% 00:44:12.509 cpu : usr=98.48%, sys=1.15%, ctx=14, majf=0, minf=21 00:44:12.509 IO depths : 1=0.6%, 2=1.6%, 4=8.7%, 8=77.0%, 16=12.1%, 32=0.0%, >=64=0.0% 00:44:12.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:12.509 complete : 0=0.0%, 4=89.3%, 8=5.4%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:12.509 issued rwts: total=644,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:12.509 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:12.509 filename1: (groupid=0, jobs=1): err= 0: pid=681862: Mon Dec 16 13:05:37 2024 00:44:12.509 read: IOPS=66, BW=264KiB/s (271kB/s)(2680KiB/10135msec) 00:44:12.509 slat (nsec): min=7046, max=60453, avg=11055.40, stdev=6723.09 00:44:12.509 clat (msec): min=159, max=294, avg=241.69, stdev=25.06 00:44:12.509 lat (msec): min=159, max=294, avg=241.70, stdev=25.06 00:44:12.509 clat percentiles (msec): 00:44:12.509 | 1.00th=[ 159], 5.00th=[ 197], 10.00th=[ 213], 20.00th=[ 218], 00:44:12.509 | 30.00th=[ 232], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 255], 00:44:12.509 | 70.00th=[ 257], 80.00th=[ 259], 90.00th=[ 259], 95.00th=[ 266], 00:44:12.509 | 99.00th=[ 288], 99.50th=[ 288], 99.90th=[ 296], 99.95th=[ 296], 00:44:12.509 | 99.99th=[ 296] 00:44:12.509 bw ( KiB/s): min= 240, max= 368, per=4.40%, avg=261.60, stdev=25.58, samples=20 00:44:12.509 iops : min= 60, max= 92, avg=65.40, stdev= 6.39, samples=20 00:44:12.509 lat (msec) : 250=46.87%, 500=53.13% 00:44:12.509 cpu : usr=98.67%, 
sys=0.94%, ctx=9, majf=0, minf=22 00:44:12.509 IO depths : 1=0.6%, 2=6.9%, 4=25.1%, 8=55.7%, 16=11.8%, 32=0.0%, >=64=0.0% 00:44:12.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:12.509 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:12.509 issued rwts: total=670,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:12.509 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:12.509 filename1: (groupid=0, jobs=1): err= 0: pid=681863: Mon Dec 16 13:05:37 2024 00:44:12.509 read: IOPS=44, BW=176KiB/s (181kB/s)(1784KiB/10115msec) 00:44:12.509 slat (nsec): min=6341, max=24730, avg=9205.46, stdev=2640.65 00:44:12.509 clat (msec): min=158, max=562, avg=362.67, stdev=74.50 00:44:12.509 lat (msec): min=158, max=562, avg=362.68, stdev=74.50 00:44:12.509 clat percentiles (msec): 00:44:12.509 | 1.00th=[ 159], 5.00th=[ 241], 10.00th=[ 279], 20.00th=[ 309], 00:44:12.509 | 30.00th=[ 347], 40.00th=[ 359], 50.00th=[ 376], 60.00th=[ 380], 00:44:12.509 | 70.00th=[ 388], 80.00th=[ 393], 90.00th=[ 397], 95.00th=[ 514], 00:44:12.509 | 99.00th=[ 567], 99.50th=[ 567], 99.90th=[ 567], 99.95th=[ 567], 00:44:12.509 | 99.99th=[ 567] 00:44:12.509 bw ( KiB/s): min= 112, max= 256, per=3.05%, avg=181.05, stdev=59.40, samples=19 00:44:12.509 iops : min= 28, max= 64, avg=45.26, stdev=14.85, samples=19 00:44:12.509 lat (msec) : 250=5.83%, 500=87.00%, 750=7.17% 00:44:12.509 cpu : usr=98.68%, sys=0.92%, ctx=14, majf=0, minf=20 00:44:12.509 IO depths : 1=3.4%, 2=9.6%, 4=25.1%, 8=52.9%, 16=9.0%, 32=0.0%, >=64=0.0% 00:44:12.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:12.509 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:12.509 issued rwts: total=446,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:12.509 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:12.509 filename1: (groupid=0, jobs=1): err= 0: pid=681864: Mon Dec 16 13:05:37 2024 00:44:12.509 read: IOPS=61, BW=245KiB/s (251kB/s)(2480KiB/10124msec) 00:44:12.509 slat (nsec): min=4536, max=26153, avg=9074.86, stdev=2405.17 00:44:12.509 clat (msec): min=167, max=549, avg=260.75, stdev=46.59 00:44:12.509 lat (msec): min=167, max=549, avg=260.76, stdev=46.59 00:44:12.509 clat percentiles (msec): 00:44:12.509 | 1.00th=[ 197], 5.00th=[ 209], 10.00th=[ 213], 20.00th=[ 241], 00:44:12.509 | 30.00th=[ 251], 40.00th=[ 253], 50.00th=[ 255], 60.00th=[ 257], 00:44:12.509 | 70.00th=[ 259], 80.00th=[ 262], 90.00th=[ 309], 95.00th=[ 355], 00:44:12.509 | 99.00th=[ 439], 99.50th=[ 439], 99.90th=[ 550], 99.95th=[ 550], 00:44:12.509 | 99.99th=[ 550] 00:44:12.509 bw ( KiB/s): min= 112, max= 304, per=4.06%, avg=241.60, stdev=46.98, samples=20 00:44:12.509 iops : min= 28, max= 76, avg=60.40, stdev=11.74, samples=20 00:44:12.509 lat (msec) : 250=29.68%, 500=70.00%, 750=0.32% 00:44:12.509 cpu : usr=98.58%, sys=1.03%, ctx=13, majf=0, minf=19 00:44:12.509 IO depths : 1=0.6%, 2=2.3%, 4=11.0%, 8=74.2%, 16=11.9%, 32=0.0%, >=64=0.0% 00:44:12.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:12.509 complete : 0=0.0%, 4=90.1%, 8=4.5%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:12.509 issued rwts: total=620,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:12.509 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:12.509 filename1: (groupid=0, jobs=1): err= 0: pid=681865: Mon Dec 16 13:05:37 2024 00:44:12.509 read: IOPS=64, BW=260KiB/s (266kB/s)(2632KiB/10142msec) 00:44:12.509 slat (nsec): min=4186, max=56143, avg=10980.24, 
stdev=6660.23 00:44:12.509 clat (msec): min=169, max=432, avg=246.25, stdev=33.93 00:44:12.509 lat (msec): min=169, max=432, avg=246.27, stdev=33.93 00:44:12.509 clat percentiles (msec): 00:44:12.509 | 1.00th=[ 169], 5.00th=[ 197], 10.00th=[ 205], 20.00th=[ 213], 00:44:12.509 | 30.00th=[ 239], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 255], 00:44:12.509 | 70.00th=[ 257], 80.00th=[ 259], 90.00th=[ 262], 95.00th=[ 300], 00:44:12.509 | 99.00th=[ 384], 99.50th=[ 388], 99.90th=[ 435], 99.95th=[ 435], 00:44:12.509 | 99.99th=[ 435] 00:44:12.509 bw ( KiB/s): min= 128, max= 384, per=4.31%, avg=256.80, stdev=50.72, samples=20 00:44:12.509 iops : min= 32, max= 96, avg=64.20, stdev=12.68, samples=20 00:44:12.509 lat (msec) : 250=41.03%, 500=58.97% 00:44:12.509 cpu : usr=98.74%, sys=0.88%, ctx=13, majf=0, minf=28 00:44:12.509 IO depths : 1=1.2%, 2=2.7%, 4=10.6%, 8=74.0%, 16=11.4%, 32=0.0%, >=64=0.0% 00:44:12.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:12.509 complete : 0=0.0%, 4=89.9%, 8=4.6%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:12.509 issued rwts: total=658,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:12.509 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:12.509 filename1: (groupid=0, jobs=1): err= 0: pid=681866: Mon Dec 16 13:05:37 2024 00:44:12.509 read: IOPS=44, BW=177KiB/s (181kB/s)(1792KiB/10115msec) 00:44:12.509 slat (nsec): min=6575, max=30264, avg=9734.69, stdev=3362.17 00:44:12.509 clat (msec): min=198, max=562, avg=361.12, stdev=50.43 00:44:12.509 lat (msec): min=198, max=562, avg=361.13, stdev=50.43 00:44:12.509 clat percentiles (msec): 00:44:12.509 | 1.00th=[ 253], 5.00th=[ 279], 10.00th=[ 292], 20.00th=[ 317], 00:44:12.509 | 30.00th=[ 338], 40.00th=[ 363], 50.00th=[ 376], 60.00th=[ 380], 00:44:12.509 | 70.00th=[ 384], 80.00th=[ 388], 90.00th=[ 397], 95.00th=[ 430], 00:44:12.509 | 99.00th=[ 514], 99.50th=[ 518], 99.90th=[ 567], 99.95th=[ 567], 00:44:12.509 | 99.99th=[ 567] 00:44:12.509 bw ( KiB/s): min= 112, max= 256, per=2.90%, avg=172.80, stdev=58.41, samples=20 00:44:12.509 iops : min= 28, max= 64, avg=43.20, stdev=14.60, samples=20 00:44:12.509 lat (msec) : 250=0.89%, 500=95.98%, 750=3.12% 00:44:12.509 cpu : usr=98.75%, sys=0.86%, ctx=14, majf=0, minf=25 00:44:12.509 IO depths : 1=4.2%, 2=10.5%, 4=25.0%, 8=52.0%, 16=8.3%, 32=0.0%, >=64=0.0% 00:44:12.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:12.509 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:12.509 issued rwts: total=448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:12.509 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:12.509 filename2: (groupid=0, jobs=1): err= 0: pid=681867: Mon Dec 16 13:05:37 2024 00:44:12.509 read: IOPS=65, BW=261KiB/s (267kB/s)(2648KiB/10142msec) 00:44:12.509 slat (nsec): min=4220, max=58639, avg=11351.70, stdev=7343.52 00:44:12.509 clat (msec): min=181, max=352, avg=244.60, stdev=26.01 00:44:12.509 lat (msec): min=181, max=352, avg=244.61, stdev=26.01 00:44:12.509 clat percentiles (msec): 00:44:12.509 | 1.00th=[ 182], 5.00th=[ 190], 10.00th=[ 213], 20.00th=[ 218], 00:44:12.509 | 30.00th=[ 239], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 255], 00:44:12.509 | 70.00th=[ 257], 80.00th=[ 259], 90.00th=[ 262], 95.00th=[ 275], 00:44:12.509 | 99.00th=[ 309], 99.50th=[ 351], 99.90th=[ 351], 99.95th=[ 351], 00:44:12.509 | 99.99th=[ 351] 00:44:12.509 bw ( KiB/s): min= 208, max= 336, per=4.35%, avg=258.40, stdev=25.04, samples=20 00:44:12.509 iops : min= 52, max= 84, avg=64.60, 
stdev= 6.26, samples=20 00:44:12.509 lat (msec) : 250=42.90%, 500=57.10% 00:44:12.509 cpu : usr=98.44%, sys=1.16%, ctx=19, majf=0, minf=18 00:44:12.509 IO depths : 1=1.1%, 2=2.7%, 4=11.2%, 8=73.6%, 16=11.5%, 32=0.0%, >=64=0.0% 00:44:12.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:12.509 complete : 0=0.0%, 4=90.1%, 8=4.3%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:12.509 issued rwts: total=662,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:12.509 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:12.509 filename2: (groupid=0, jobs=1): err= 0: pid=681868: Mon Dec 16 13:05:37 2024 00:44:12.509 read: IOPS=64, BW=257KiB/s (263kB/s)(2600KiB/10134msec) 00:44:12.509 slat (nsec): min=7308, max=26332, avg=8923.21, stdev=1923.40 00:44:12.509 clat (msec): min=168, max=431, avg=248.34, stdev=42.27 00:44:12.509 lat (msec): min=168, max=431, avg=248.35, stdev=42.27 00:44:12.509 clat percentiles (msec): 00:44:12.509 | 1.00th=[ 169], 5.00th=[ 197], 10.00th=[ 201], 20.00th=[ 211], 00:44:12.509 | 30.00th=[ 224], 40.00th=[ 251], 50.00th=[ 255], 60.00th=[ 257], 00:44:12.509 | 70.00th=[ 257], 80.00th=[ 259], 90.00th=[ 284], 95.00th=[ 363], 00:44:12.509 | 99.00th=[ 384], 99.50th=[ 388], 99.90th=[ 430], 99.95th=[ 430], 00:44:12.509 | 99.99th=[ 430] 00:44:12.509 bw ( KiB/s): min= 128, max= 368, per=4.26%, avg=253.60, stdev=47.09, samples=20 00:44:12.509 iops : min= 32, max= 92, avg=63.40, stdev=11.77, samples=20 00:44:12.509 lat (msec) : 250=39.38%, 500=60.62% 00:44:12.509 cpu : usr=98.70%, sys=0.91%, ctx=14, majf=0, minf=26 00:44:12.509 IO depths : 1=0.2%, 2=0.9%, 4=8.2%, 8=78.2%, 16=12.6%, 32=0.0%, >=64=0.0% 00:44:12.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:12.509 complete : 0=0.0%, 4=89.2%, 8=5.6%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:12.510 issued rwts: total=650,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:12.510 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:12.510 filename2: (groupid=0, jobs=1): err= 0: pid=681869: Mon Dec 16 13:05:37 2024 00:44:12.510 read: IOPS=64, BW=260KiB/s (266kB/s)(2632KiB/10134msec) 00:44:12.510 slat (nsec): min=7284, max=54608, avg=11160.70, stdev=6424.64 00:44:12.510 clat (msec): min=172, max=389, avg=246.14, stdev=31.23 00:44:12.510 lat (msec): min=172, max=389, avg=246.15, stdev=31.23 00:44:12.510 clat percentiles (msec): 00:44:12.510 | 1.00th=[ 174], 5.00th=[ 197], 10.00th=[ 207], 20.00th=[ 213], 00:44:12.510 | 30.00th=[ 241], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 255], 00:44:12.510 | 70.00th=[ 257], 80.00th=[ 259], 90.00th=[ 266], 95.00th=[ 296], 00:44:12.510 | 99.00th=[ 355], 99.50th=[ 388], 99.90th=[ 388], 99.95th=[ 388], 00:44:12.510 | 99.99th=[ 388] 00:44:12.510 bw ( KiB/s): min= 144, max= 368, per=4.31%, avg=256.80, stdev=41.68, samples=20 00:44:12.510 iops : min= 36, max= 92, avg=64.20, stdev=10.42, samples=20 00:44:12.510 lat (msec) : 250=40.12%, 500=59.88% 00:44:12.510 cpu : usr=98.63%, sys=0.98%, ctx=15, majf=0, minf=22 00:44:12.510 IO depths : 1=0.6%, 2=2.0%, 4=10.2%, 8=75.2%, 16=12.0%, 32=0.0%, >=64=0.0% 00:44:12.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:12.510 complete : 0=0.0%, 4=89.8%, 8=4.7%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:12.510 issued rwts: total=658,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:12.510 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:12.510 filename2: (groupid=0, jobs=1): err= 0: pid=681870: Mon Dec 16 13:05:37 2024 00:44:12.510 read: IOPS=63, BW=255KiB/s 
(261kB/s)(2584KiB/10124msec)
00:44:12.510 slat (nsec): min=7344, max=33122, avg=9827.47, stdev=3687.63
00:44:12.510 clat (msec): min=182, max=438, avg=250.17, stdev=40.03
00:44:12.510 lat (msec): min=182, max=438, avg=250.18, stdev=40.03
00:44:12.510 clat percentiles (msec):
00:44:12.510 | 1.00th=[ 188], 5.00th=[ 197], 10.00th=[ 213], 20.00th=[ 222],
00:44:12.510 | 30.00th=[ 241], 40.00th=[ 251], 50.00th=[ 255], 60.00th=[ 257],
00:44:12.510 | 70.00th=[ 257], 80.00th=[ 259], 90.00th=[ 264], 95.00th=[ 317],
00:44:12.510 | 99.00th=[ 439], 99.50th=[ 439], 99.90th=[ 439], 99.95th=[ 439],
00:44:12.510 | 99.99th=[ 439]
00:44:12.510 bw ( KiB/s): min= 128, max= 368, per=4.25%, avg=252.00, stdev=47.26, samples=20
00:44:12.510 iops : min= 32, max= 92, avg=63.00, stdev=11.81, samples=20
00:44:12.510 lat (msec) : 250=39.94%, 500=60.06%
00:44:12.510 cpu : usr=98.85%, sys=0.75%, ctx=19, majf=0, minf=33
00:44:12.510 IO depths : 1=0.6%, 2=2.2%, 4=10.8%, 8=74.5%, 16=11.9%, 32=0.0%, >=64=0.0%
00:44:12.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:12.510 complete : 0=0.0%, 4=90.1%, 8=4.4%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:12.510 issued rwts: total=646,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:44:12.510 latency : target=0, window=0, percentile=100.00%, depth=16
00:44:12.510 filename2: (groupid=0, jobs=1): err= 0: pid=681871: Mon Dec 16 13:05:37 2024
00:44:12.510 read: IOPS=66, BW=267KiB/s (273kB/s)(2704KiB/10143msec)
00:44:12.510 slat (nsec): min=7341, max=28868, avg=10670.37, stdev=4039.30
00:44:12.510 clat (msec): min=76, max=386, avg=239.24, stdev=47.10
00:44:12.510 lat (msec): min=76, max=386, avg=239.25, stdev=47.10
00:44:12.510 clat percentiles (msec):
00:44:12.510 | 1.00th=[ 77], 5.00th=[ 161], 10.00th=[ 197], 20.00th=[ 211],
00:44:12.510 | 30.00th=[ 218], 40.00th=[ 236], 50.00th=[ 249], 60.00th=[ 255],
00:44:12.510 | 70.00th=[ 257], 80.00th=[ 257], 90.00th=[ 266], 95.00th=[ 317],
00:44:12.510 | 99.00th=[ 384], 99.50th=[ 388], 99.90th=[ 388], 99.95th=[ 388],
00:44:12.510 | 99.99th=[ 388]
00:44:12.510 bw ( KiB/s): min= 176, max= 368, per=4.43%, avg=264.00, stdev=47.44, samples=20
00:44:12.510 iops : min= 44, max= 92, avg=66.00, stdev=11.86, samples=20
00:44:12.510 lat (msec) : 100=2.07%, 250=50.30%, 500=47.63%
00:44:12.510 cpu : usr=98.71%, sys=0.88%, ctx=16, majf=0, minf=24
00:44:12.510 IO depths : 1=0.1%, 2=1.9%, 4=11.2%, 8=74.1%, 16=12.6%, 32=0.0%, >=64=0.0%
00:44:12.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:12.510 complete : 0=0.0%, 4=90.2%, 8=4.6%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:12.510 issued rwts: total=676,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:44:12.510 latency : target=0, window=0, percentile=100.00%, depth=16
00:44:12.510 filename2: (groupid=0, jobs=1): err= 0: pid=681872: Mon Dec 16 13:05:37 2024
00:44:12.510 read: IOPS=60, BW=242KiB/s (248kB/s)(2456KiB/10131msec)
00:44:12.510 slat (nsec): min=6159, max=33914, avg=10313.60, stdev=5528.22
00:44:12.510 clat (msec): min=180, max=445, avg=263.73, stdev=59.45
00:44:12.510 lat (msec): min=180, max=445, avg=263.74, stdev=59.45
00:44:12.510 clat percentiles (msec):
00:44:12.510 | 1.00th=[ 188], 5.00th=[ 197], 10.00th=[ 205], 20.00th=[ 222],
00:44:12.510 | 30.00th=[ 232], 40.00th=[ 239], 50.00th=[ 255], 60.00th=[ 259],
00:44:12.510 | 70.00th=[ 271], 80.00th=[ 279], 90.00th=[ 376], 95.00th=[ 393],
00:44:12.510 | 99.00th=[ 447], 99.50th=[ 447], 99.90th=[ 447], 99.95th=[ 447],
00:44:12.510 | 99.99th=[ 447]
00:44:12.510 bw ( KiB/s): min= 128, max= 336, per=4.03%, avg=239.20, stdev=46.86, samples=20
00:44:12.510 iops : min= 32, max= 84, avg=59.80, stdev=11.71, samples=20
00:44:12.510 lat (msec) : 250=46.91%, 500=53.09%
00:44:12.510 cpu : usr=98.81%, sys=0.82%, ctx=20, majf=0, minf=26
00:44:12.510 IO depths : 1=0.7%, 2=2.1%, 4=9.6%, 8=75.1%, 16=12.5%, 32=0.0%, >=64=0.0%
00:44:12.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:12.510 complete : 0=0.0%, 4=89.5%, 8=5.9%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:12.510 issued rwts: total=614,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:44:12.510 latency : target=0, window=0, percentile=100.00%, depth=16
00:44:12.510 filename2: (groupid=0, jobs=1): err= 0: pid=681873: Mon Dec 16 13:05:37 2024
00:44:12.510 read: IOPS=68, BW=275KiB/s (282kB/s)(2800KiB/10165msec)
00:44:12.510 slat (nsec): min=6832, max=76112, avg=18242.31, stdev=5976.57
00:44:12.510 clat (msec): min=40, max=331, avg=231.44, stdev=50.35
00:44:12.510 lat (msec): min=40, max=331, avg=231.46, stdev=50.35
00:44:12.510 clat percentiles (msec):
00:44:12.510 | 1.00th=[ 41], 5.00th=[ 111], 10.00th=[ 197], 20.00th=[ 213],
00:44:12.510 | 30.00th=[ 230], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 255],
00:44:12.510 | 70.00th=[ 257], 80.00th=[ 257], 90.00th=[ 262], 95.00th=[ 264],
00:44:12.510 | 99.00th=[ 284], 99.50th=[ 334], 99.90th=[ 334], 99.95th=[ 334],
00:44:12.510 | 99.99th=[ 334]
00:44:12.510 bw ( KiB/s): min= 176, max= 512, per=4.60%, avg=273.60, stdev=65.64, samples=20
00:44:12.510 iops : min= 44, max= 128, avg=68.40, stdev=16.41, samples=20
00:44:12.510 lat (msec) : 50=2.29%, 100=2.29%, 250=41.71%, 500=53.71%
00:44:12.510 cpu : usr=98.48%, sys=1.12%, ctx=6, majf=0, minf=29
00:44:12.510 IO depths : 1=0.7%, 2=2.1%, 4=10.4%, 8=74.9%, 16=11.9%, 32=0.0%, >=64=0.0%
00:44:12.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:12.510 complete : 0=0.0%, 4=89.9%, 8=4.6%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:12.510 issued rwts: total=700,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:44:12.510 latency : target=0, window=0, percentile=100.00%, depth=16
00:44:12.510 filename2: (groupid=0, jobs=1): err= 0: pid=681874: Mon Dec 16 13:05:37 2024
00:44:12.510 read: IOPS=61, BW=245KiB/s (251kB/s)(2480KiB/10119msec)
00:44:12.510 slat (nsec): min=7314, max=67000, avg=12019.36, stdev=8049.59
00:44:12.510 clat (msec): min=182, max=429, avg=260.38, stdev=42.13
00:44:12.510 lat (msec): min=182, max=429, avg=260.39, stdev=42.14
00:44:12.510 clat percentiles (msec):
00:44:12.510 | 1.00th=[ 205], 5.00th=[ 209], 10.00th=[ 213], 20.00th=[ 239],
00:44:12.510 | 30.00th=[ 253], 40.00th=[ 255], 50.00th=[ 255], 60.00th=[ 257],
00:44:12.510 | 70.00th=[ 259], 80.00th=[ 262], 90.00th=[ 313], 95.00th=[ 347],
00:44:12.510 | 99.00th=[ 430], 99.50th=[ 430], 99.90th=[ 430], 99.95th=[ 430],
00:44:12.510 | 99.99th=[ 430]
00:44:12.510 bw ( KiB/s): min= 128, max= 384, per=4.06%, avg=241.60, stdev=54.17, samples=20
00:44:12.510 iops : min= 32, max= 96, avg=60.40, stdev=13.54, samples=20
00:44:12.510 lat (msec) : 250=28.06%, 500=71.94%
00:44:12.510 cpu : usr=98.66%, sys=0.94%, ctx=23, majf=0, minf=28
00:44:12.510 IO depths : 1=2.3%, 2=5.2%, 4=14.8%, 8=67.4%, 16=10.3%, 32=0.0%, >=64=0.0%
00:44:12.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:12.510 complete : 0=0.0%, 4=91.1%, 8=3.4%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:12.510 issued rwts: total=620,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:44:12.510 latency : target=0, window=0, percentile=100.00%, depth=16
00:44:12.510
00:44:12.510 Run status
group 0 (all jobs): 00:44:12.510 READ: bw=5936KiB/s (6078kB/s), 176KiB/s-283KiB/s (181kB/s-289kB/s), io=58.9MiB (61.8MB), run=10094-10165msec 00:44:12.510 13:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:44:12.510 13:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:44:12.510 13:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:12.510 13:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:44:12.510 13:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:44:12.510 13:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:12.510 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:12.510 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:12.510 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:12.510 13:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:44:12.510 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:12.510 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:12.510 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:12.510 13:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:12.510 13:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:44:12.510 13:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:44:12.510 13:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:12.510 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:12.510 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:12.510 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:12.510 13:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:44:12.510 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:12.510 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:12.510 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:12.510 13:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:12.510 13:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:44:12.510 13:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:44:12.510 13:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:44:12.510 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:12.511 13:05:37 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:12.511 bdev_null0 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:12.511 [2024-12-16 13:05:37.695169] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 
512 --md-size 16 --dif-type 1 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:12.511 bdev_null1 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:44:12.511 { 00:44:12.511 "params": { 00:44:12.511 "name": "Nvme$subsystem", 00:44:12.511 "trtype": "$TEST_TRANSPORT", 00:44:12.511 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:12.511 "adrfam": "ipv4", 00:44:12.511 "trsvcid": "$NVMF_PORT", 00:44:12.511 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:12.511 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:12.511 "hdgst": ${hdgst:-false}, 00:44:12.511 "ddgst": ${ddgst:-false} 00:44:12.511 }, 00:44:12.511 "method": "bdev_nvme_attach_controller" 00:44:12.511 } 00:44:12.511 EOF 00:44:12.511 )") 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local 
fio_dir=/usr/src/fio 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:44:12.511 { 00:44:12.511 "params": { 00:44:12.511 "name": "Nvme$subsystem", 00:44:12.511 "trtype": "$TEST_TRANSPORT", 00:44:12.511 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:12.511 "adrfam": "ipv4", 00:44:12.511 "trsvcid": "$NVMF_PORT", 00:44:12.511 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:12.511 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:12.511 "hdgst": ${hdgst:-false}, 00:44:12.511 "ddgst": ${ddgst:-false} 00:44:12.511 }, 00:44:12.511 "method": "bdev_nvme_attach_controller" 00:44:12.511 } 00:44:12.511 EOF 00:44:12.511 )") 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 
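(A note on the config being assembled in the trace above: gen_nvmf_target_json collects one bdev_nvme_attach_controller snippet per subsystem into a bash array via heredocs, joins the snippets with IFS=',' and validates the result with jq; the joined JSON is printed below. What follows is a minimal standalone sketch of that pattern, not the nvmf/common.sh source: the fixed address/port values and the enclosing [...] array wrapper used to keep jq happy are illustrative, since the real helper embeds this list in a larger SPDK configuration document.)

#!/usr/bin/env bash
# Sketch: build a comma-joined list of bdev_nvme_attach_controller params,
# one entry per NVMe-oF subsystem, following the pattern traced above.
config=()
for subsystem in 0 1; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
  )")
done
# Join the snippets with commas under IFS and validate; the [...] wrapper is
# only here so that jq sees one complete JSON document.
(IFS=,; printf '[%s]\n' "${config[*]}") | jq .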
00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:44:12.511 "params": { 00:44:12.511 "name": "Nvme0", 00:44:12.511 "trtype": "tcp", 00:44:12.511 "traddr": "10.0.0.2", 00:44:12.511 "adrfam": "ipv4", 00:44:12.511 "trsvcid": "4420", 00:44:12.511 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:12.511 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:12.511 "hdgst": false, 00:44:12.511 "ddgst": false 00:44:12.511 }, 00:44:12.511 "method": "bdev_nvme_attach_controller" 00:44:12.511 },{ 00:44:12.511 "params": { 00:44:12.511 "name": "Nvme1", 00:44:12.511 "trtype": "tcp", 00:44:12.511 "traddr": "10.0.0.2", 00:44:12.511 "adrfam": "ipv4", 00:44:12.511 "trsvcid": "4420", 00:44:12.511 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:12.511 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:12.511 "hdgst": false, 00:44:12.511 "ddgst": false 00:44:12.511 }, 00:44:12.511 "method": "bdev_nvme_attach_controller" 00:44:12.511 }' 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:44:12.511 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:44:12.512 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:44:12.512 13:05:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:12.512 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:44:12.512 ... 00:44:12.512 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:44:12.512 ... 
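(The fio job description lines that follow come from a job file generated on the fly by gen_fio_conf in target/dif.sh and handed to fio as /dev/fd/61, while the JSON above arrives as /dev/fd/62. Below is an approximate reconstruction from the parameters traced at target/dif.sh@115, bs=8k,16k,128k with numjobs=2, iodepth=8, runtime=5 and one extra file; the bdev names Nvme0n1/Nvme1n1 are assumptions based on the controller names in the JSON config, and the generated file's exact option list may differ:)

# Sketch: an approximate equivalent of the generated fio job file.
cat > /tmp/dif_rand_params.fio <<'EOF'
[global]
thread=1
ioengine=spdk_bdev
rw=randread
; read/write/trim sizes, matching the bs=(R)/(W)/(T) report below
bs=8k,16k,128k
iodepth=8
runtime=5
; time_based is assumed, consistent with the ~5000 msec run times below
time_based=1

[filename0]
; bdev name assumed from "name": "Nvme0" in the JSON config above
filename=Nvme0n1
numjobs=2

[filename1]
filename=Nvme1n1
numjobs=2
EOF

(Two sections times numjobs=2 account for the "Starting 4 threads" line below; fio itself is then launched with LD_PRELOAD pointing at the spdk_bdev plugin and --spdk_json_conf set to the JSON above, exactly as the /usr/src/fio/fio command line in the trace shows.)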
00:44:12.512 fio-3.35
00:44:12.512 Starting 4 threads
00:44:17.783
00:44:17.783 filename0: (groupid=0, jobs=1): err= 0: pid=683760: Mon Dec 16 13:05:43 2024
00:44:17.783 read: IOPS=2844, BW=22.2MiB/s (23.3MB/s)(111MiB/5002msec)
00:44:17.783 slat (nsec): min=6043, max=53260, avg=8803.80, stdev=3266.72
00:44:17.783 clat (usec): min=982, max=5395, avg=2785.81, stdev=433.20
00:44:17.783 lat (usec): min=988, max=5401, avg=2794.62, stdev=433.14
00:44:17.783 clat percentiles (usec):
00:44:17.783 | 1.00th=[ 1745], 5.00th=[ 2180], 10.00th=[ 2278], 20.00th=[ 2442],
00:44:17.783 | 30.00th=[ 2540], 40.00th=[ 2671], 50.00th=[ 2802], 60.00th=[ 2933],
00:44:17.783 | 70.00th=[ 2966], 80.00th=[ 3032], 90.00th=[ 3228], 95.00th=[ 3523],
00:44:17.783 | 99.00th=[ 4113], 99.50th=[ 4490], 99.90th=[ 4948], 99.95th=[ 5145],
00:44:17.783 | 99.99th=[ 5407]
00:44:17.783 bw ( KiB/s): min=21056, max=24496, per=26.39%, avg=22588.44, stdev=1058.81, samples=9
00:44:17.783 iops : min= 2632, max= 3062, avg=2823.56, stdev=132.35, samples=9
00:44:17.783 lat (usec) : 1000=0.02%
00:44:17.783 lat (msec) : 2=2.07%, 4=96.58%, 10=1.32%
00:44:17.783 cpu : usr=95.74%, sys=3.94%, ctx=7, majf=0, minf=0
00:44:17.783 IO depths : 1=0.4%, 2=5.4%, 4=65.8%, 8=28.4%, 16=0.0%, 32=0.0%, >=64=0.0%
00:44:17.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:17.783 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:17.783 issued rwts: total=14230,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:44:17.784 latency : target=0, window=0, percentile=100.00%, depth=8
00:44:17.784 filename0: (groupid=0, jobs=1): err= 0: pid=683761: Mon Dec 16 13:05:43 2024
00:44:17.784 read: IOPS=2585, BW=20.2MiB/s (21.2MB/s)(101MiB/5001msec)
00:44:17.784 slat (nsec): min=6075, max=49475, avg=8810.77, stdev=3217.92
00:44:17.784 clat (usec): min=602, max=5543, avg=3068.21, stdev=498.42
00:44:17.784 lat (usec): min=614, max=5549, avg=3077.02, stdev=498.05
00:44:17.784 clat percentiles (usec):
00:44:17.784 | 1.00th=[ 2147], 5.00th=[ 2409], 10.00th=[ 2540], 20.00th=[ 2769],
00:44:17.784 | 30.00th=[ 2900], 40.00th=[ 2933], 50.00th=[ 2966], 60.00th=[ 3032],
00:44:17.784 | 70.00th=[ 3195], 80.00th=[ 3326], 90.00th=[ 3654], 95.00th=[ 4015],
00:44:17.784 | 99.00th=[ 4948], 99.50th=[ 5080], 99.90th=[ 5276], 99.95th=[ 5342],
00:44:17.784 | 99.99th=[ 5538]
00:44:17.784 bw ( KiB/s): min=19456, max=22032, per=24.35%, avg=20842.67, stdev=920.03, samples=9
00:44:17.784 iops : min= 2432, max= 2754, avg=2605.33, stdev=115.00, samples=9
00:44:17.784 lat (usec) : 750=0.01%, 1000=0.02%
00:44:17.784 lat (msec) : 2=0.51%, 4=94.15%, 10=5.31%
00:44:17.784 cpu : usr=95.86%, sys=3.82%, ctx=6, majf=0, minf=9
00:44:17.784 IO depths : 1=0.1%, 2=3.0%, 4=69.1%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0%
00:44:17.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:17.784 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:17.784 issued rwts: total=12931,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:44:17.784 latency : target=0, window=0, percentile=100.00%, depth=8
00:44:17.784 filename1: (groupid=0, jobs=1): err= 0: pid=683762: Mon Dec 16 13:05:43 2024
00:44:17.784 read: IOPS=2752, BW=21.5MiB/s (22.6MB/s)(108MiB/5003msec)
00:44:17.784 slat (nsec): min=6054, max=47594, avg=8846.40, stdev=3135.83
00:44:17.784 clat (usec): min=917, max=5216, avg=2879.94, stdev=442.52
00:44:17.784 lat (usec): min=930, max=5222, avg=2888.78, stdev=442.35
00:44:17.784 clat percentiles (usec):
00:44:17.784 | 1.00th=[ 1844], 5.00th=[ 2245], 10.00th=[ 2409], 20.00th=[ 2540],
00:44:17.784 | 30.00th=[ 2671], 40.00th=[ 2769], 50.00th=[ 2900], 60.00th=[ 2966],
00:44:17.784 | 70.00th=[ 2999], 80.00th=[ 3097], 90.00th=[ 3392], 95.00th=[ 3654],
00:44:17.784 | 99.00th=[ 4359], 99.50th=[ 4686], 99.90th=[ 5080], 99.95th=[ 5145],
00:44:17.784 | 99.99th=[ 5211]
00:44:17.784 bw ( KiB/s): min=21056, max=23472, per=25.71%, avg=22007.11, stdev=844.79, samples=9
00:44:17.784 iops : min= 2632, max= 2934, avg=2750.89, stdev=105.60, samples=9
00:44:17.784 lat (usec) : 1000=0.01%
00:44:17.784 lat (msec) : 2=1.73%, 4=95.84%, 10=2.43%
00:44:17.784 cpu : usr=95.74%, sys=3.92%, ctx=9, majf=0, minf=0
00:44:17.784 IO depths : 1=0.3%, 2=4.8%, 4=66.2%, 8=28.8%, 16=0.0%, 32=0.0%, >=64=0.0%
00:44:17.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:17.784 complete : 0=0.0%, 4=93.4%, 8=6.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:17.784 issued rwts: total=13772,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:44:17.784 latency : target=0, window=0, percentile=100.00%, depth=8
00:44:17.784 filename1: (groupid=0, jobs=1): err= 0: pid=683763: Mon Dec 16 13:05:43 2024
00:44:17.784 read: IOPS=2516, BW=19.7MiB/s (20.6MB/s)(98.4MiB/5002msec)
00:44:17.784 slat (nsec): min=6068, max=38916, avg=8638.66, stdev=3099.20
00:44:17.784 clat (usec): min=953, max=5800, avg=3152.45, stdev=460.60
00:44:17.784 lat (usec): min=964, max=5820, avg=3161.09, stdev=460.32
00:44:17.784 clat percentiles (usec):
00:44:17.784 | 1.00th=[ 2180], 5.00th=[ 2638], 10.00th=[ 2769], 20.00th=[ 2900],
00:44:17.784 | 30.00th=[ 2933], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3130],
00:44:17.784 | 70.00th=[ 3261], 80.00th=[ 3425], 90.00th=[ 3687], 95.00th=[ 4047],
00:44:17.784 | 99.00th=[ 4883], 99.50th=[ 5014], 99.90th=[ 5473], 99.95th=[ 5473],
00:44:17.784 | 99.99th=[ 5735]
00:44:17.784 bw ( KiB/s): min=19200, max=21312, per=23.63%, avg=20220.44, stdev=673.99, samples=9
00:44:17.784 iops : min= 2400, max= 2664, avg=2527.56, stdev=84.25, samples=9
00:44:17.784 lat (usec) : 1000=0.01%
00:44:17.784 lat (msec) : 2=0.54%, 4=94.19%, 10=5.26%
00:44:17.784 cpu : usr=95.30%, sys=4.36%, ctx=9, majf=0, minf=9
00:44:17.784 IO depths : 1=0.1%, 2=2.6%, 4=70.5%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0%
00:44:17.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:17.784 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:17.784 issued rwts: total=12590,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:44:17.784 latency : target=0, window=0, percentile=100.00%, depth=8
00:44:17.784
00:44:17.784 Run status group 0 (all jobs):
00:44:17.784 READ: bw=83.6MiB/s (87.6MB/s), 19.7MiB/s-22.2MiB/s (20.6MB/s-23.3MB/s), io=418MiB (438MB), run=5001-5003msec
00:44:18.043 13:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1
00:44:18.043 13:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub
00:44:18.043 13:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:44:18.043 13:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0
00:44:18.043 13:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0
00:44:18.043 13:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:44:18.043 13:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:44:18.043 13:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:44:18.043
13:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:18.043 13:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:44:18.043 13:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:18.043 13:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:18.043 13:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:18.043 13:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:18.043 13:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:44:18.043 13:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:44:18.043 13:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:18.043 13:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:18.043 13:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:18.043 13:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:18.044 13:05:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:44:18.044 13:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:18.044 13:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:18.044 13:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:18.044 00:44:18.044 real 0m24.406s 00:44:18.044 user 4m55.064s 00:44:18.044 sys 0m4.990s 00:44:18.044 13:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:18.044 13:05:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:18.044 ************************************ 00:44:18.044 END TEST fio_dif_rand_params 00:44:18.044 ************************************ 00:44:18.044 13:05:44 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:44:18.044 13:05:44 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:44:18.044 13:05:44 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:18.044 13:05:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:18.044 ************************************ 00:44:18.044 START TEST fio_dif_digest 00:44:18.044 ************************************ 00:44:18.044 13:05:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:44:18.044 13:05:44 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:44:18.044 13:05:44 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:44:18.044 13:05:44 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:44:18.044 13:05:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:44:18.044 13:05:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:44:18.044 13:05:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:44:18.044 13:05:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:44:18.044 13:05:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:44:18.044 13:05:44 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:44:18.044 13:05:44 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:44:18.044 13:05:44 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:44:18.044 13:05:44 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:44:18.044 13:05:44 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:44:18.044 13:05:44 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:44:18.044 13:05:44 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:44:18.044 13:05:44 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:44:18.044 13:05:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:18.044 13:05:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:18.044 bdev_null0 00:44:18.044 13:05:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:18.044 13:05:44 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:44:18.044 13:05:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:18.044 13:05:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:18.044 13:05:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:18.044 13:05:44 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:44:18.044 13:05:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:18.044 13:05:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:18.044 13:05:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:18.044 13:05:44 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:44:18.044 13:05:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:18.044 13:05:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:18.044 [2024-12-16 13:05:44.099872] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:18.044 13:05:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:18.044 13:05:44 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:44:18.044 13:05:44 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:44:18.044 13:05:44 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:44:18.044 13:05:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # config=() 00:44:18.044 13:05:44 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:18.044 13:05:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # local subsystem config 00:44:18.044 13:05:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:44:18.044 13:05:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:18.044 13:05:44 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:44:18.044 13:05:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:44:18.044 { 00:44:18.044 "params": { 00:44:18.044 "name": "Nvme$subsystem", 00:44:18.044 "trtype": "$TEST_TRANSPORT", 
00:44:18.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:18.044 "adrfam": "ipv4", 00:44:18.044 "trsvcid": "$NVMF_PORT", 00:44:18.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:18.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:18.044 "hdgst": ${hdgst:-false}, 00:44:18.044 "ddgst": ${ddgst:-false} 00:44:18.044 }, 00:44:18.044 "method": "bdev_nvme_attach_controller" 00:44:18.044 } 00:44:18.044 EOF 00:44:18.044 )") 00:44:18.044 13:05:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:44:18.044 13:05:44 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:44:18.044 13:05:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:18.303 13:05:44 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:44:18.303 13:05:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:44:18.303 13:05:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:18.303 13:05:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:44:18.303 13:05:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:44:18.303 13:05:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:44:18.303 13:05:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@578 -- # cat 00:44:18.303 13:05:44 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:44:18.303 13:05:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:18.303 13:05:44 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:44:18.303 13:05:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:44:18.303 13:05:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:44:18.303 13:05:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # jq . 
00:44:18.303 13:05:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@581 -- # IFS=, 00:44:18.303 13:05:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:44:18.303 "params": { 00:44:18.303 "name": "Nvme0", 00:44:18.303 "trtype": "tcp", 00:44:18.303 "traddr": "10.0.0.2", 00:44:18.303 "adrfam": "ipv4", 00:44:18.303 "trsvcid": "4420", 00:44:18.303 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:18.303 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:18.303 "hdgst": true, 00:44:18.303 "ddgst": true 00:44:18.303 }, 00:44:18.303 "method": "bdev_nvme_attach_controller" 00:44:18.303 }' 00:44:18.303 13:05:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:44:18.303 13:05:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:44:18.303 13:05:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:44:18.303 13:05:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:18.303 13:05:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:44:18.303 13:05:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:44:18.303 13:05:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:44:18.303 13:05:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:44:18.303 13:05:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:44:18.303 13:05:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:18.562 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:44:18.562 ... 
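(As with the previous run, the job file itself is generated by gen_fio_conf; below is an approximate reconstruction from the target/dif.sh@127 parameters traced earlier, bs=128k,128k,128k with numjobs=3 and iodepth=3 over a single file. Note that hdgst/ddgst are not fio options in this setup: header and data digests are requested in the bdev_nvme_attach_controller JSON just printed and negotiated at the NVMe/TCP layer, so fio only drives 128 KiB random reads:)

# Sketch: an approximate equivalent of the generated digest-test job file.
cat > /tmp/dif_digest.fio <<'EOF'
[global]
thread=1
ioengine=spdk_bdev
rw=randread
bs=128k
iodepth=3
runtime=10
; time_based is assumed, consistent with the ~10s run times below
time_based=1

[filename0]
; bdev name assumed from "name": "Nvme0" in the JSON config above
filename=Nvme0n1
numjobs=3
EOF

(One section times numjobs=3 accounts for the "Starting 3 threads" line below.)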
00:44:18.562 fio-3.35
00:44:18.562 Starting 3 threads
00:44:30.770
00:44:30.770 filename0: (groupid=0, jobs=1): err= 0: pid=684792: Mon Dec 16 13:05:54 2024
00:44:30.770 read: IOPS=298, BW=37.3MiB/s (39.1MB/s)(375MiB/10047msec)
00:44:30.770 slat (nsec): min=6309, max=27510, avg=11492.40, stdev=1925.95
00:44:30.770 clat (usec): min=5099, max=49309, avg=10015.03, stdev=1222.63
00:44:30.770 lat (usec): min=5109, max=49321, avg=10026.53, stdev=1222.60
00:44:30.770 clat percentiles (usec):
00:44:30.770 | 1.00th=[ 8356], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9372],
00:44:30.770 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10159],
00:44:30.770 | 70.00th=[10290], 80.00th=[10552], 90.00th=[10945], 95.00th=[11207],
00:44:30.770 | 99.00th=[11863], 99.50th=[12125], 99.90th=[12780], 99.95th=[46400],
00:44:30.770 | 99.99th=[49546]
00:44:30.770 bw ( KiB/s): min=36864, max=39168, per=35.25%, avg=38387.20, stdev=651.22, samples=20
00:44:30.770 iops : min= 288, max= 306, avg=299.90, stdev= 5.09, samples=20
00:44:30.770 lat (msec) : 10=50.45%, 20=49.48%, 50=0.07%
00:44:30.770 cpu : usr=94.27%, sys=5.42%, ctx=21, majf=0, minf=113
00:44:30.770 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:44:30.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:30.770 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:30.770 issued rwts: total=3001,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:44:30.770 latency : target=0, window=0, percentile=100.00%, depth=3
00:44:30.770 filename0: (groupid=0, jobs=1): err= 0: pid=684793: Mon Dec 16 13:05:54 2024
00:44:30.770 read: IOPS=281, BW=35.1MiB/s (36.8MB/s)(353MiB/10048msec)
00:44:30.770 slat (nsec): min=6312, max=27821, avg=11650.90, stdev=1710.14
00:44:30.770 clat (usec): min=7602, max=49977, avg=10640.47, stdev=1237.56
00:44:30.770 lat (usec): min=7609, max=49989, avg=10652.12, stdev=1237.51
00:44:30.770 clat percentiles (usec):
00:44:30.770 | 1.00th=[ 8979], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10028],
00:44:30.770 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10552], 60.00th=[10814],
00:44:30.770 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11469], 95.00th=[11863],
00:44:30.770 | 99.00th=[12387], 99.50th=[12518], 99.90th=[13173], 99.95th=[47973],
00:44:30.770 | 99.99th=[50070]
00:44:30.770 bw ( KiB/s): min=35328, max=36608, per=33.18%, avg=36134.40, stdev=441.65, samples=20
00:44:30.770 iops : min= 276, max= 286, avg=282.30, stdev= 3.45, samples=20
00:44:30.770 lat (msec) : 10=17.63%, 20=82.30%, 50=0.07%
00:44:30.770 cpu : usr=94.52%, sys=5.19%, ctx=15, majf=0, minf=64
00:44:30.770 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:44:30.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:30.770 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:30.770 issued rwts: total=2825,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:44:30.770 latency : target=0, window=0, percentile=100.00%, depth=3
00:44:30.770 filename0: (groupid=0, jobs=1): err= 0: pid=684794: Mon Dec 16 13:05:54 2024
00:44:30.770 read: IOPS=270, BW=33.9MiB/s (35.5MB/s)(340MiB/10048msec)
00:44:30.770 slat (nsec): min=6373, max=29570, avg=11848.01, stdev=1681.47
00:44:30.770 clat (usec): min=8441, max=52455, avg=11039.58, stdev=1304.27
00:44:30.770 lat (usec): min=8455, max=52466, avg=11051.43, stdev=1304.29
00:44:30.770 clat percentiles (usec):
00:44:30.770 | 1.00th=[ 9241], 5.00th=[ 9765], 10.00th=[10159], 20.00th=[10421],
00:44:30.770 | 30.00th=[10552], 40.00th=[10814], 50.00th=[10945], 60.00th=[11207],
00:44:30.770 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11994], 95.00th=[12256],
00:44:30.770 | 99.00th=[12911], 99.50th=[13173], 99.90th=[14615], 99.95th=[48497],
00:44:30.770 | 99.99th=[52691]
00:44:30.770 bw ( KiB/s): min=34048, max=36096, per=31.98%, avg=34828.80, stdev=572.28, samples=20
00:44:30.770 iops : min= 266, max= 282, avg=272.10, stdev= 4.47, samples=20
00:44:30.770 lat (msec) : 10=7.46%, 20=92.47%, 50=0.04%, 100=0.04%
00:44:30.770 cpu : usr=94.74%, sys=4.95%, ctx=16, majf=0, minf=75
00:44:30.770 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:44:30.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:30.770 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:30.770 issued rwts: total=2723,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:44:30.770 latency : target=0, window=0, percentile=100.00%, depth=3
00:44:30.770
00:44:30.770 Run status group 0 (all jobs):
00:44:30.770 READ: bw=106MiB/s (112MB/s), 33.9MiB/s-37.3MiB/s (35.5MB/s-39.1MB/s), io=1069MiB (1121MB), run=10047-10048msec
00:44:30.771 13:05:55 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0
00:44:30.771 13:05:55 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub
00:44:30.771 13:05:55 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@"
00:44:30.771 13:05:55 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0
00:44:30.771 13:05:55 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0
00:44:30.771 13:05:55 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:44:30.771 13:05:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable
00:44:30.771 13:05:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:44:30.771 13:05:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:44:30.771 13:05:55 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:44:30.771 13:05:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable
00:44:30.771 13:05:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:44:30.771 13:05:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:44:30.771
00:44:30.771 real 0m11.028s
00:44:30.771 user 0m35.446s
00:44:30.771 sys 0m1.827s
00:44:30.771 13:05:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable
00:44:30.771 13:05:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:44:30.771 ************************************
00:44:30.771 END TEST fio_dif_digest
00:44:30.771 ************************************
00:44:30.771 13:05:55 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT
00:44:30.771 13:05:55 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini
00:44:30.771 13:05:55 nvmf_dif -- nvmf/common.sh@512 -- # nvmfcleanup
00:44:30.771 13:05:55 nvmf_dif -- nvmf/common.sh@121 -- # sync
00:44:30.771 13:05:55 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:44:30.771 13:05:55 nvmf_dif -- nvmf/common.sh@124 -- # set +e
00:44:30.771 13:05:55 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20}
00:44:30.771 13:05:55 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:44:30.771 rmmod nvme_tcp
00:44:30.771 rmmod nvme_fabrics
00:44:30.771 rmmod nvme_keyring
00:44:30.771 13:05:55 nvmf_dif --
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:30.771 13:05:55 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:44:30.771 13:05:55 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:44:30.771 13:05:55 nvmf_dif -- nvmf/common.sh@513 -- # '[' -n 676768 ']' 00:44:30.771 13:05:55 nvmf_dif -- nvmf/common.sh@514 -- # killprocess 676768 00:44:30.771 13:05:55 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 676768 ']' 00:44:30.771 13:05:55 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 676768 00:44:30.771 13:05:55 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:44:30.771 13:05:55 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:44:30.771 13:05:55 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 676768 00:44:30.771 13:05:55 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:44:30.771 13:05:55 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:44:30.771 13:05:55 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 676768' 00:44:30.771 killing process with pid 676768 00:44:30.771 13:05:55 nvmf_dif -- common/autotest_common.sh@969 -- # kill 676768 00:44:30.771 13:05:55 nvmf_dif -- common/autotest_common.sh@974 -- # wait 676768 00:44:30.771 13:05:55 nvmf_dif -- nvmf/common.sh@516 -- # '[' iso == iso ']' 00:44:30.771 13:05:55 nvmf_dif -- nvmf/common.sh@517 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:44:32.149 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:44:32.408 Waiting for block devices as requested 00:44:32.408 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:44:32.408 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:44:32.667 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:44:32.667 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:44:32.667 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:44:32.926 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:44:32.926 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:44:32.926 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:44:32.926 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:44:33.184 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:44:33.184 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:44:33.184 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:44:33.443 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:44:33.443 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:44:33.443 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:44:33.443 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:44:33.702 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:44:33.702 13:05:59 nvmf_dif -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:44:33.702 13:05:59 nvmf_dif -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:44:33.702 13:05:59 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:44:33.702 13:05:59 nvmf_dif -- nvmf/common.sh@787 -- # iptables-save 00:44:33.702 13:05:59 nvmf_dif -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:44:33.702 13:05:59 nvmf_dif -- nvmf/common.sh@787 -- # iptables-restore 00:44:33.702 13:05:59 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:33.702 13:05:59 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:33.702 13:05:59 nvmf_dif -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:33.702 13:05:59 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:33.702 13:05:59 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:36.237 13:06:01 
nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:36.237 00:44:36.237 real 1m13.977s 00:44:36.237 user 7m12.516s 00:44:36.237 sys 0m20.349s 00:44:36.237 13:06:01 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:36.237 13:06:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:36.237 ************************************ 00:44:36.237 END TEST nvmf_dif 00:44:36.237 ************************************ 00:44:36.237 13:06:01 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:44:36.237 13:06:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:44:36.237 13:06:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:36.237 13:06:01 -- common/autotest_common.sh@10 -- # set +x 00:44:36.237 ************************************ 00:44:36.237 START TEST nvmf_abort_qd_sizes 00:44:36.237 ************************************ 00:44:36.237 13:06:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:44:36.237 * Looking for test storage... 00:44:36.237 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:36.237 13:06:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:44:36.237 13:06:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lcov --version 00:44:36.237 13:06:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:44:36.237 13:06:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:44:36.237 13:06:01 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:36.237 13:06:01 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:36.237 13:06:01 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:36.237 13:06:01 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:44:36.237 13:06:01 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:44:36.237 13:06:01 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:44:36.237 13:06:01 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:44:36.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:36.238 --rc genhtml_branch_coverage=1 00:44:36.238 --rc genhtml_function_coverage=1 00:44:36.238 --rc genhtml_legend=1 00:44:36.238 --rc geninfo_all_blocks=1 00:44:36.238 --rc geninfo_unexecuted_blocks=1 00:44:36.238 00:44:36.238 ' 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:44:36.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:36.238 --rc genhtml_branch_coverage=1 00:44:36.238 --rc genhtml_function_coverage=1 00:44:36.238 --rc genhtml_legend=1 00:44:36.238 --rc geninfo_all_blocks=1 00:44:36.238 --rc geninfo_unexecuted_blocks=1 00:44:36.238 00:44:36.238 ' 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:44:36.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:36.238 --rc genhtml_branch_coverage=1 00:44:36.238 --rc genhtml_function_coverage=1 00:44:36.238 --rc genhtml_legend=1 00:44:36.238 --rc geninfo_all_blocks=1 00:44:36.238 --rc geninfo_unexecuted_blocks=1 00:44:36.238 00:44:36.238 ' 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:44:36.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:36.238 --rc genhtml_branch_coverage=1 00:44:36.238 --rc genhtml_function_coverage=1 00:44:36.238 --rc genhtml_legend=1 00:44:36.238 --rc geninfo_all_blocks=1 00:44:36.238 --rc geninfo_unexecuted_blocks=1 00:44:36.238 00:44:36.238 ' 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:36.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # prepare_net_devs 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- nvmf/common.sh@434 -- # local -g is_hw=no 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # remove_spdk_ns 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:44:36.238 13:06:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:44:41.513 Found 0000:af:00.0 (0x8086 - 0x159b) 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:44:41.513 Found 0000:af:00.1 (0x8086 - 0x159b) 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- 
# [[ tcp == rdma ]] 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ up == up ]] 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:44:41.513 Found net devices under 0000:af:00.0: cvl_0_0 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:44:41.513 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:41.773 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ up == up ]] 00:44:41.773 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:44:41.773 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:41.773 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:44:41.773 Found net devices under 0000:af:00.1: cvl_0_1 00:44:41.773 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:44:41.773 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:44:41.773 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # is_hw=yes 00:44:41.773 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:44:41.773 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:44:41.773 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:44:41.773 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:41.773 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:41.773 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:41.773 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:41.773 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:44:41.773 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:41.773 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:41.773 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:44:41.773 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:44:41.773 13:06:07 nvmf_abort_qd_sizes -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:41.773 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:41.773 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:44:41.773 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:44:41.773 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:44:41.773 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:41.773 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:41.773 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:41.773 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:44:41.773 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:41.773 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:41.773 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:41.773 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:44:41.773 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:44:41.773 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:41.773 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.434 ms 00:44:41.773 00:44:41.773 --- 10.0.0.2 ping statistics --- 00:44:41.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:41.773 rtt min/avg/max/mdev = 0.434/0.434/0.434/0.000 ms 00:44:41.773 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:41.773 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:44:41.773 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:44:41.773 00:44:41.773 --- 10.0.0.1 ping statistics --- 00:44:41.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:41.773 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:44:41.773 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:41.773 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # return 0 00:44:41.773 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # '[' iso == iso ']' 00:44:41.773 13:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@475 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:44:44.307 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:44:44.880 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:44:44.880 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:44:44.880 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:44:44.880 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:44:44.880 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:44:44.880 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:44:44.880 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:44:44.880 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:44:44.880 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:44:44.880 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:44:44.880 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:44:44.880 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:44:44.880 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:44:44.880 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:44:44.880 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:44:44.880 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:44:45.818 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:44:45.818 13:06:11 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:45.818 13:06:11 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:44:45.818 13:06:11 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:44:45.818 13:06:11 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:45.818 13:06:11 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:44:45.818 13:06:11 nvmf_abort_qd_sizes -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:44:45.818 13:06:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:44:45.818 13:06:11 nvmf_abort_qd_sizes -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:44:45.818 13:06:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:44:45.818 13:06:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:45.818 13:06:11 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # nvmfpid=693198 00:44:45.818 13:06:11 nvmf_abort_qd_sizes -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:44:45.818 13:06:11 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # waitforlisten 693198 00:44:45.818 13:06:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 693198 ']' 00:44:45.818 13:06:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:45.818 13:06:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:44:45.818 13:06:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:44:45.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:45.818 13:06:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:44:45.818 13:06:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:46.077 [2024-12-16 13:06:11.900222] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:44:46.077 [2024-12-16 13:06:11.900268] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:46.077 [2024-12-16 13:06:11.971614] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:44:46.077 [2024-12-16 13:06:12.014447] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:46.077 [2024-12-16 13:06:12.014489] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:46.077 [2024-12-16 13:06:12.014497] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:46.077 [2024-12-16 13:06:12.014504] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:46.077 [2024-12-16 13:06:12.014510] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:46.077 [2024-12-16 13:06:12.014568] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:44:46.077 [2024-12-16 13:06:12.014694] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:44:46.077 [2024-12-16 13:06:12.014799] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:44:46.077 [2024-12-16 13:06:12.014801] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:44:46.077 13:06:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:44:46.077 13:06:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:44:46.077 13:06:12 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:44:46.077 13:06:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:46.077 13:06:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:46.336 13:06:12 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:46.336 13:06:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:44:46.336 13:06:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:44:46.336 13:06:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:44:46.336 13:06:12 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:44:46.336 13:06:12 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:44:46.336 13:06:12 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 0000:5f:00.0 ]] 00:44:46.336 13:06:12 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:44:46.336 13:06:12 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:44:46.336 13:06:12 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:44:46.336 13:06:12 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 
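For reference, the nvmf_tcp_init sequence traced above reduces to the sketch below: one port of the E810 NIC (cvl_0_0) is moved into a private network namespace and addressed as the target side, its sibling (cvl_0_1) stays in the root namespace as the initiator side, and the two ends ping each other before the target app is launched inside the namespace. Interface names, the 10.0.0.0/24 addressing, and the abbreviated nvmf_tgt path are specific to this run; this is a distilled sketch of the commands in the trace, not the verbatim script.

# TCP loopback topology, distilled from the trace above (this run's names)
ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
ping -c 1 10.0.0.2                                                  # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator sanity check
ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xf        # launch the target in the namespace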
00:44:46.336 13:06:12 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:44:46.336 13:06:12 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:44:46.336 13:06:12 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:44:46.336 13:06:12 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5f:00.0 ]] 00:44:46.336 13:06:12 nvmf_abort_qd_sizes -- scripts/common.sh@324 -- # continue 00:44:46.336 13:06:12 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:44:46.336 13:06:12 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:44:46.336 13:06:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:44:46.336 13:06:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:44:46.336 13:06:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:44:46.336 13:06:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:44:46.336 13:06:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:46.336 13:06:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:46.336 ************************************ 00:44:46.336 START TEST spdk_target_abort 00:44:46.337 ************************************ 00:44:46.337 13:06:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:44:46.337 13:06:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:44:46.337 13:06:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:44:46.337 13:06:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:46.337 13:06:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:49.625 spdk_targetn1 00:44:49.625 13:06:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:49.626 13:06:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:44:49.626 13:06:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:49.626 13:06:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:49.626 [2024-12-16 13:06:15.015010] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:49.626 13:06:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:49.626 13:06:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:44:49.626 13:06:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:49.626 13:06:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:49.626 13:06:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:49.626 13:06:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:44:49.626 13:06:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:49.626 13:06:15 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:49.626 13:06:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:49.626 13:06:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:44:49.626 13:06:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:49.626 13:06:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:49.626 [2024-12-16 13:06:15.056057] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:49.626 13:06:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:49.626 13:06:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:44:49.626 13:06:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:44:49.626 13:06:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:44:49.626 13:06:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:44:49.626 13:06:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:44:49.626 13:06:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:44:49.626 13:06:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:44:49.626 13:06:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:44:49.626 13:06:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:44:49.626 13:06:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:49.626 13:06:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:44:49.626 13:06:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:49.626 13:06:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:44:49.626 13:06:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:49.626 13:06:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:44:49.626 13:06:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:49.626 13:06:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:44:49.626 13:06:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:49.626 13:06:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:49.626 13:06:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:49.626 13:06:15 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:52.160 Initializing NVMe Controllers 00:44:52.160 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:44:52.160 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:52.160 Initialization complete. Launching workers. 00:44:52.160 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 16310, failed: 0 00:44:52.160 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1441, failed to submit 14869 00:44:52.160 success 689, unsuccessful 752, failed 0 00:44:52.160 13:06:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:52.160 13:06:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:55.499 Initializing NVMe Controllers 00:44:55.499 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:44:55.499 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:55.499 Initialization complete. Launching workers. 00:44:55.499 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8544, failed: 0 00:44:55.500 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1261, failed to submit 7283 00:44:55.500 success 358, unsuccessful 903, failed 0 00:44:55.500 13:06:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:55.500 13:06:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:58.909 Initializing NVMe Controllers 00:44:58.909 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:44:58.909 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:58.909 Initialization complete. Launching workers. 
00:44:58.909 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38619, failed: 0 00:44:58.909 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2970, failed to submit 35649 00:44:58.909 success 611, unsuccessful 2359, failed 0 00:44:58.909 13:06:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:44:58.909 13:06:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:58.909 13:06:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:58.909 13:06:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:58.909 13:06:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:44:58.909 13:06:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:58.909 13:06:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:45:00.286 13:06:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:00.286 13:06:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 693198 00:45:00.286 13:06:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 693198 ']' 00:45:00.286 13:06:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 693198 00:45:00.286 13:06:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:45:00.286 13:06:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:45:00.286 13:06:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 693198 00:45:00.286 13:06:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:45:00.286 13:06:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:45:00.286 13:06:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 693198' 00:45:00.286 killing process with pid 693198 00:45:00.286 13:06:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 693198 00:45:00.286 13:06:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 693198 00:45:00.286 00:45:00.286 real 0m14.025s 00:45:00.286 user 0m53.590s 00:45:00.286 sys 0m2.275s 00:45:00.286 13:06:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:00.286 13:06:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:45:00.286 ************************************ 00:45:00.286 END TEST spdk_target_abort 00:45:00.286 ************************************ 00:45:00.286 13:06:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:45:00.286 13:06:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:45:00.286 13:06:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:45:00.286 13:06:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:45:00.286 ************************************ 00:45:00.286 START TEST kernel_target_abort 00:45:00.286 
************************************ 00:45:00.286 13:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:45:00.286 13:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:45:00.287 13:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@765 -- # local ip 00:45:00.287 13:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@766 -- # ip_candidates=() 00:45:00.287 13:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@766 -- # local -A ip_candidates 00:45:00.287 13:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:00.287 13:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:00.287 13:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:45:00.287 13:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:00.287 13:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:45:00.287 13:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:45:00.287 13:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:45:00.287 13:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:45:00.287 13:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:45:00.287 13:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:45:00.287 13:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:45:00.287 13:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:45:00.287 13:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:45:00.287 13:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # local block nvme 00:45:00.287 13:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # [[ ! 
-e /sys/module/nvmet ]] 00:45:00.287 13:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@666 -- # modprobe nvmet 00:45:00.287 13:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:45:00.287 13:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:45:02.824 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:45:03.083 Waiting for block devices as requested 00:45:03.343 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:45:03.343 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:45:03.343 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:45:03.603 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:45:03.603 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:45:03.603 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:45:03.862 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:45:03.862 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:45:03.862 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:45:03.862 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:45:04.121 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:45:04.121 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:45:04.121 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:45:04.379 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:45:04.379 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:45:04.379 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:45:04.638 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:45:04.638 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:45:04.638 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:45:04.638 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:45:04.638 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:45:04.638 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:45:04.638 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:45:04.638 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:45:04.638 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:45:04.638 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:45:04.638 No valid GPT data, bailing 00:45:04.638 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:45:04.638 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:45:04.638 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:45:04.638 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:45:04.638 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:45:04.638 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme1n1 ]] 00:45:04.638 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme1n1 00:45:04.638 13:06:30 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:45:04.638 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:45:04.638 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:45:04.638 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme1n1 00:45:04.638 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:45:04.638 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:45:04.638 No valid GPT data, bailing 00:45:04.638 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:45:04.638 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:45:04.638 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:45:04.638 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme1n1 00:45:04.638 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:45:04.638 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme1n2 ]] 00:45:04.638 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme1n2 00:45:04.638 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:45:04.638 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:45:04.638 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ host-managed != none ]] 00:45:04.638 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # continue 00:45:04.638 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # [[ -b /dev/nvme1n1 ]] 00:45:04.638 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:45:04.638 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:45:04.638 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:45:04.638 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:45:04.638 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo 1 00:45:04.639 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@692 -- # echo /dev/nvme1n1 00:45:04.639 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1 00:45:04.639 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:45:04.639 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo tcp 00:45:04.639 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 4420 00:45:04.639 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo ipv4 00:45:04.639 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # ln 
-s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:45:04.898 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:45:04.898 00:45:04.898 Discovery Log Number of Records 2, Generation counter 2 00:45:04.898 =====Discovery Log Entry 0====== 00:45:04.898 trtype: tcp 00:45:04.898 adrfam: ipv4 00:45:04.898 subtype: current discovery subsystem 00:45:04.898 treq: not specified, sq flow control disable supported 00:45:04.898 portid: 1 00:45:04.898 trsvcid: 4420 00:45:04.898 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:45:04.898 traddr: 10.0.0.1 00:45:04.898 eflags: none 00:45:04.898 sectype: none 00:45:04.898 =====Discovery Log Entry 1====== 00:45:04.898 trtype: tcp 00:45:04.898 adrfam: ipv4 00:45:04.898 subtype: nvme subsystem 00:45:04.898 treq: not specified, sq flow control disable supported 00:45:04.898 portid: 1 00:45:04.898 trsvcid: 4420 00:45:04.898 subnqn: nqn.2016-06.io.spdk:testnqn 00:45:04.898 traddr: 10.0.0.1 00:45:04.898 eflags: none 00:45:04.898 sectype: none 00:45:04.898 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:45:04.898 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:45:04.898 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:45:04.898 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:45:04.898 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:45:04.898 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:45:04.898 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:45:04.898 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:45:04.898 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:45:04.898 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:04.898 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:45:04.898 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:04.898 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:45:04.898 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:04.898 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:45:04.898 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:04.898 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:45:04.898 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 
-- # for r in trtype adrfam traddr trsvcid subnqn 00:45:04.898 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:04.898 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:45:04.898 13:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:08.187 Initializing NVMe Controllers 00:45:08.187 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:45:08.187 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:45:08.187 Initialization complete. Launching workers. 00:45:08.187 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 85094, failed: 0 00:45:08.187 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 85094, failed to submit 0 00:45:08.187 success 0, unsuccessful 85094, failed 0 00:45:08.187 13:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:45:08.187 13:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:11.476 Initializing NVMe Controllers 00:45:11.476 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:45:11.476 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:45:11.476 Initialization complete. Launching workers. 00:45:11.476 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 140249, failed: 0 00:45:11.476 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32658, failed to submit 107591 00:45:11.476 success 0, unsuccessful 32658, failed 0 00:45:11.476 13:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:45:11.476 13:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:14.766 Initializing NVMe Controllers 00:45:14.766 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:45:14.766 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:45:14.766 Initialization complete. Launching workers. 
00:45:14.766 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 130473, failed: 0 00:45:14.766 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32642, failed to submit 97831 00:45:14.766 success 0, unsuccessful 32642, failed 0 00:45:14.766 13:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:45:14.766 13:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:45:14.766 13:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # echo 0 00:45:14.766 13:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:45:14.766 13:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:45:14.766 13:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:45:14.766 13:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:45:14.766 13:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:45:14.766 13:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:45:14.766 13:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@722 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:45:16.672 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:45:17.241 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:45:17.241 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:45:17.241 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:45:17.241 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:45:17.241 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:45:17.241 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:45:17.241 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:45:17.241 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:45:17.241 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:45:17.241 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:45:17.241 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:45:17.241 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:45:17.241 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:45:17.241 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:45:17.241 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:45:17.241 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:45:18.178 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:45:18.178 00:45:18.178 real 0m17.885s 00:45:18.178 user 0m8.891s 00:45:18.178 sys 0m5.347s 00:45:18.178 13:06:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:18.178 13:06:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:45:18.178 ************************************ 00:45:18.178 END TEST kernel_target_abort 00:45:18.178 ************************************ 00:45:18.178 13:06:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:45:18.178 13:06:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:45:18.178 13:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # nvmfcleanup 00:45:18.178 13:06:44 nvmf_abort_qd_sizes -- 
nvmf/common.sh@121 -- # sync 00:45:18.178 13:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:45:18.178 13:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:45:18.178 13:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:45:18.178 13:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:45:18.178 rmmod nvme_tcp 00:45:18.178 rmmod nvme_fabrics 00:45:18.178 rmmod nvme_keyring 00:45:18.178 13:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:45:18.178 13:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:45:18.178 13:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:45:18.178 13:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@513 -- # '[' -n 693198 ']' 00:45:18.178 13:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # killprocess 693198 00:45:18.178 13:06:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 693198 ']' 00:45:18.178 13:06:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 693198 00:45:18.178 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (693198) - No such process 00:45:18.179 13:06:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 693198 is not found' 00:45:18.179 Process with pid 693198 is not found 00:45:18.179 13:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # '[' iso == iso ']' 00:45:18.179 13:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:45:20.714 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:45:21.282 Waiting for block devices as requested 00:45:21.282 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:45:21.282 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:45:21.282 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:45:21.540 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:45:21.540 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:45:21.540 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:45:21.799 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:45:21.799 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:45:21.799 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:45:21.799 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:45:22.058 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:45:22.058 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:45:22.058 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:45:22.317 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:45:22.317 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:45:22.317 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:45:22.317 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:45:22.576 13:06:48 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:45:22.576 13:06:48 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:45:22.576 13:06:48 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:45:22.576 13:06:48 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # iptables-save 00:45:22.576 13:06:48 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:45:22.576 13:06:48 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # iptables-restore 00:45:22.576 13:06:48 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:45:22.576 13:06:48 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:45:22.576 13:06:48 nvmf_abort_qd_sizes -- nvmf/common.sh@652 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:45:22.576 13:06:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:45:22.576 13:06:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:24.482 13:06:50 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:45:24.482 00:45:24.482 real 0m48.753s 00:45:24.482 user 1m6.987s 00:45:24.482 sys 0m16.443s 00:45:24.482 13:06:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:24.482 13:06:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:45:24.482 ************************************ 00:45:24.482 END TEST nvmf_abort_qd_sizes 00:45:24.482 ************************************ 00:45:24.742 13:06:50 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:45:24.742 13:06:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:45:24.742 13:06:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:45:24.742 13:06:50 -- common/autotest_common.sh@10 -- # set +x 00:45:24.742 ************************************ 00:45:24.742 START TEST keyring_file 00:45:24.742 ************************************ 00:45:24.742 13:06:50 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:45:24.742 * Looking for test storage... 00:45:24.742 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:45:24.742 13:06:50 keyring_file -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:45:24.742 13:06:50 keyring_file -- common/autotest_common.sh@1681 -- # lcov --version 00:45:24.742 13:06:50 keyring_file -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:45:24.742 13:06:50 keyring_file -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:45:24.742 13:06:50 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:24.742 13:06:50 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:24.742 13:06:50 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:24.742 13:06:50 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:45:24.742 13:06:50 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:45:24.742 13:06:50 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:45:24.742 13:06:50 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:45:24.742 13:06:50 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:45:24.742 13:06:50 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:45:24.742 13:06:50 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:45:24.742 13:06:50 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:24.742 13:06:50 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:45:24.742 13:06:50 keyring_file -- scripts/common.sh@345 -- # : 1 00:45:24.742 13:06:50 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:24.742 13:06:50 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:45:24.742 13:06:50 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:45:24.742 13:06:50 keyring_file -- scripts/common.sh@353 -- # local d=1 00:45:24.742 13:06:50 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:24.742 13:06:50 keyring_file -- scripts/common.sh@355 -- # echo 1 00:45:24.742 13:06:50 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:45:24.742 13:06:50 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:45:24.742 13:06:50 keyring_file -- scripts/common.sh@353 -- # local d=2 00:45:24.742 13:06:50 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:24.742 13:06:50 keyring_file -- scripts/common.sh@355 -- # echo 2 00:45:24.742 13:06:50 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:45:24.742 13:06:50 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:24.742 13:06:50 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:24.742 13:06:50 keyring_file -- scripts/common.sh@368 -- # return 0 00:45:24.742 13:06:50 keyring_file -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:24.742 13:06:50 keyring_file -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:45:24.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:24.742 --rc genhtml_branch_coverage=1 00:45:24.742 --rc genhtml_function_coverage=1 00:45:24.742 --rc genhtml_legend=1 00:45:24.742 --rc geninfo_all_blocks=1 00:45:24.742 --rc geninfo_unexecuted_blocks=1 00:45:24.742 00:45:24.742 ' 00:45:24.742 13:06:50 keyring_file -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:45:24.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:24.742 --rc genhtml_branch_coverage=1 00:45:24.742 --rc genhtml_function_coverage=1 00:45:24.742 --rc genhtml_legend=1 00:45:24.742 --rc geninfo_all_blocks=1 00:45:24.742 --rc geninfo_unexecuted_blocks=1 00:45:24.742 00:45:24.742 ' 00:45:24.742 13:06:50 keyring_file -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:45:24.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:24.742 --rc genhtml_branch_coverage=1 00:45:24.742 --rc genhtml_function_coverage=1 00:45:24.742 --rc genhtml_legend=1 00:45:24.742 --rc geninfo_all_blocks=1 00:45:24.742 --rc geninfo_unexecuted_blocks=1 00:45:24.742 00:45:24.742 ' 00:45:24.742 13:06:50 keyring_file -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:45:24.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:24.742 --rc genhtml_branch_coverage=1 00:45:24.742 --rc genhtml_function_coverage=1 00:45:24.742 --rc genhtml_legend=1 00:45:24.742 --rc geninfo_all_blocks=1 00:45:24.742 --rc geninfo_unexecuted_blocks=1 00:45:24.742 00:45:24.742 ' 00:45:24.742 13:06:50 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:45:24.742 13:06:50 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:24.742 13:06:50 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:45:24.742 13:06:50 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:24.742 13:06:50 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:24.742 13:06:50 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:24.742 13:06:50 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:24.742 13:06:50 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:24.742 
13:06:50 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:24.742 13:06:50 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:24.742 13:06:50 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:24.742 13:06:50 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:24.742 13:06:50 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:24.742 13:06:50 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:45:24.742 13:06:50 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:45:24.742 13:06:50 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:24.742 13:06:50 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:24.742 13:06:50 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:24.742 13:06:50 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:24.742 13:06:50 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:24.742 13:06:50 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:45:24.742 13:06:50 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:24.742 13:06:50 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:24.742 13:06:50 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:24.742 13:06:50 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:24.742 13:06:50 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:24.742 13:06:50 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:24.742 13:06:50 keyring_file -- paths/export.sh@5 -- # export PATH 00:45:24.742 13:06:50 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:24.742 13:06:50 keyring_file -- nvmf/common.sh@51 -- # : 0 
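The sourcing traced above shows nvmf/common.sh deriving the test host identity from nvme gen-hostnqn and packing it into repeated connect arguments. A minimal stand-in with the same shape, assuming any RFC 4122 UUID is acceptable for a throwaway test host and using uuidgen in place of the nvme-cli call:

uuid=$(uuidgen)
NVME_HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:${uuid}"
NVME_HOSTID="$uuid"
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
printf '%s\n' "${NVME_HOST[@]}"   # the pair later handed to 'nvme connect'
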
00:45:24.742 13:06:50 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:24.742 13:06:50 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:24.742 13:06:50 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:24.742 13:06:50 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:24.742 13:06:50 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:24.742 13:06:50 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:45:24.742 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:45:24.742 13:06:50 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:24.742 13:06:50 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:24.742 13:06:50 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:24.742 13:06:50 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:45:24.743 13:06:50 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:45:24.743 13:06:50 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:45:24.743 13:06:50 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:45:24.743 13:06:50 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:45:24.743 13:06:50 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:45:24.743 13:06:50 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:45:24.743 13:06:50 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:45:24.743 13:06:50 keyring_file -- keyring/common.sh@17 -- # name=key0 00:45:24.743 13:06:50 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:45:24.743 13:06:50 keyring_file -- keyring/common.sh@17 -- # digest=0 00:45:24.743 13:06:50 keyring_file -- keyring/common.sh@18 -- # mktemp 00:45:24.743 13:06:50 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.H2HgXnikAR 00:45:24.743 13:06:50 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:45:24.743 13:06:50 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:45:24.743 13:06:50 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:45:24.743 13:06:50 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:45:24.743 13:06:50 keyring_file -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:45:24.743 13:06:50 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:45:24.743 13:06:50 keyring_file -- nvmf/common.sh@729 -- # python - 00:45:25.002 13:06:50 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.H2HgXnikAR 00:45:25.002 13:06:50 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.H2HgXnikAR 00:45:25.002 13:06:50 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.H2HgXnikAR 00:45:25.002 13:06:50 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:45:25.002 13:06:50 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:45:25.003 13:06:50 keyring_file -- keyring/common.sh@17 -- # name=key1 00:45:25.003 13:06:50 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:45:25.003 13:06:50 keyring_file -- keyring/common.sh@17 -- # digest=0 00:45:25.003 13:06:50 keyring_file -- keyring/common.sh@18 -- # mktemp 00:45:25.003 13:06:50 keyring_file -- keyring/common.sh@18 -- 
# path=/tmp/tmp.kzBxYH0rQ4 00:45:25.003 13:06:50 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:45:25.003 13:06:50 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:45:25.003 13:06:50 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:45:25.003 13:06:50 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:45:25.003 13:06:50 keyring_file -- nvmf/common.sh@728 -- # key=112233445566778899aabbccddeeff00 00:45:25.003 13:06:50 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:45:25.003 13:06:50 keyring_file -- nvmf/common.sh@729 -- # python - 00:45:25.003 13:06:50 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.kzBxYH0rQ4 00:45:25.003 13:06:50 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.kzBxYH0rQ4 00:45:25.003 13:06:50 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.kzBxYH0rQ4 00:45:25.003 13:06:50 keyring_file -- keyring/file.sh@30 -- # tgtpid=701853 00:45:25.003 13:06:50 keyring_file -- keyring/file.sh@32 -- # waitforlisten 701853 00:45:25.003 13:06:50 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:45:25.003 13:06:50 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 701853 ']' 00:45:25.003 13:06:50 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:25.003 13:06:50 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:45:25.003 13:06:50 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:25.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:25.003 13:06:50 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:45:25.003 13:06:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:25.003 [2024-12-16 13:06:50.946254] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
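The prep_key steps traced above (mktemp, format_interchange_psk, chmod 0600) wrap raw key material in the NVMe TLS PSK interchange format before the file is handed to the keyring. Below is a sketch of what the embedded "python -" step appears to compute; it is an illustration rather than the exact helper, and it assumes digest 0 means "no hash" and that the hex-looking string is used as raw ASCII bytes:

format_interchange_psk_sketch() {
    local key=$1 digest=${2:-0}
    python3 -c '
import base64, sys, zlib
key = sys.argv[1].encode()                   # raw ASCII bytes, by assumption
crc = zlib.crc32(key).to_bytes(4, "little")  # little-endian CRC32 trailer
psk = base64.b64encode(key + crc).decode()
print("NVMeTLSkey-1:{:02x}:{}:".format(int(sys.argv[2]), psk))
' "$key" "$digest"
}

keyfile=$(mktemp)
format_interchange_psk_sketch 00112233445566778899aabbccddeeff 0 > "$keyfile"
chmod 0600 "$keyfile"   # the keyring rejects looser modes, as seen later in this log
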
00:45:25.003 [2024-12-16 13:06:50.946304] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid701853 ] 00:45:25.003 [2024-12-16 13:06:51.016166] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:25.003 [2024-12-16 13:06:51.056264] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:45:25.262 13:06:51 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:45:25.262 13:06:51 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:45:25.262 13:06:51 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:45:25.262 13:06:51 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:25.262 13:06:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:25.262 [2024-12-16 13:06:51.263520] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:25.262 null0 00:45:25.262 [2024-12-16 13:06:51.295577] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:45:25.262 [2024-12-16 13:06:51.295883] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:45:25.262 13:06:51 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:25.262 13:06:51 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:45:25.262 13:06:51 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:45:25.262 13:06:51 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:45:25.262 13:06:51 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:45:25.262 13:06:51 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:45:25.262 13:06:51 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:45:25.262 13:06:51 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:45:25.262 13:06:51 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:45:25.262 13:06:51 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:25.262 13:06:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:25.262 [2024-12-16 13:06:51.323645] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:45:25.522 request: 00:45:25.522 { 00:45:25.522 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:45:25.522 "secure_channel": false, 00:45:25.522 "listen_address": { 00:45:25.522 "trtype": "tcp", 00:45:25.522 "traddr": "127.0.0.1", 00:45:25.522 "trsvcid": "4420" 00:45:25.522 }, 00:45:25.522 "method": "nvmf_subsystem_add_listener", 00:45:25.522 "req_id": 1 00:45:25.522 } 00:45:25.522 Got JSON-RPC error response 00:45:25.522 response: 00:45:25.522 { 00:45:25.522 "code": -32602, 00:45:25.522 "message": "Invalid parameters" 00:45:25.522 } 00:45:25.522 13:06:51 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:45:25.522 13:06:51 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:45:25.522 13:06:51 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:45:25.522 13:06:51 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:45:25.522 13:06:51 keyring_file -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:45:25.522 13:06:51 keyring_file -- keyring/file.sh@47 -- # bperfpid=701863 00:45:25.522 13:06:51 keyring_file -- keyring/file.sh@49 -- # waitforlisten 701863 /var/tmp/bperf.sock 00:45:25.522 13:06:51 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:45:25.522 13:06:51 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 701863 ']' 00:45:25.522 13:06:51 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:45:25.522 13:06:51 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:45:25.522 13:06:51 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:45:25.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:45:25.522 13:06:51 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:45:25.522 13:06:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:25.522 [2024-12-16 13:06:51.376696] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:45:25.522 [2024-12-16 13:06:51.376744] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid701863 ] 00:45:25.522 [2024-12-16 13:06:51.443040] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:25.522 [2024-12-16 13:06:51.482896] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:45:25.522 13:06:51 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:45:25.522 13:06:51 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:45:25.522 13:06:51 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.H2HgXnikAR 00:45:25.522 13:06:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.H2HgXnikAR 00:45:25.781 13:06:51 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.kzBxYH0rQ4 00:45:25.781 13:06:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.kzBxYH0rQ4 00:45:26.040 13:06:51 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:45:26.040 13:06:51 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:45:26.040 13:06:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:26.040 13:06:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:26.040 13:06:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:26.299 13:06:52 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.H2HgXnikAR == \/\t\m\p\/\t\m\p\.\H\2\H\g\X\n\i\k\A\R ]] 00:45:26.299 13:06:52 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:45:26.299 13:06:52 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:45:26.299 13:06:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:26.299 13:06:52 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:26.299 13:06:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:26.299 13:06:52 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.kzBxYH0rQ4 == \/\t\m\p\/\t\m\p\.\k\z\B\x\Y\H\0\r\Q\4 ]] 00:45:26.299 13:06:52 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:45:26.299 13:06:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:26.299 13:06:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:26.299 13:06:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:26.299 13:06:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:26.299 13:06:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:26.558 13:06:52 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:45:26.558 13:06:52 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:45:26.558 13:06:52 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:26.558 13:06:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:26.558 13:06:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:26.558 13:06:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:26.558 13:06:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:26.817 13:06:52 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:45:26.818 13:06:52 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:26.818 13:06:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:27.077 [2024-12-16 13:06:52.895614] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:45:27.077 nvme0n1 00:45:27.077 13:06:52 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:45:27.077 13:06:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:27.077 13:06:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:27.077 13:06:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:27.077 13:06:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:27.077 13:06:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:27.336 13:06:53 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:45:27.336 13:06:53 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:45:27.336 13:06:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:27.336 13:06:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:27.336 13:06:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:27.336 13:06:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:27.336 13:06:53 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:27.336 13:06:53 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:45:27.336 13:06:53 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:45:27.595 Running I/O for 1 seconds... 00:45:28.532 18516.00 IOPS, 72.33 MiB/s 00:45:28.532 Latency(us) 00:45:28.532 [2024-12-16T12:06:54.599Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:28.532 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:45:28.532 nvme0n1 : 1.00 18565.61 72.52 0.00 0.00 6882.47 2683.86 46686.60 00:45:28.532 [2024-12-16T12:06:54.599Z] =================================================================================================================== 00:45:28.532 [2024-12-16T12:06:54.599Z] Total : 18565.61 72.52 0.00 0.00 6882.47 2683.86 46686.60 00:45:28.532 { 00:45:28.532 "results": [ 00:45:28.532 { 00:45:28.532 "job": "nvme0n1", 00:45:28.532 "core_mask": "0x2", 00:45:28.532 "workload": "randrw", 00:45:28.532 "percentage": 50, 00:45:28.532 "status": "finished", 00:45:28.532 "queue_depth": 128, 00:45:28.532 "io_size": 4096, 00:45:28.532 "runtime": 1.004276, 00:45:28.532 "iops": 18565.61343694363, 00:45:28.532 "mibps": 72.52192748806105, 00:45:28.532 "io_failed": 0, 00:45:28.532 "io_timeout": 0, 00:45:28.532 "avg_latency_us": 6882.471312569436, 00:45:28.532 "min_latency_us": 2683.8552380952383, 00:45:28.532 "max_latency_us": 46686.59809523809 00:45:28.532 } 00:45:28.532 ], 00:45:28.532 "core_count": 1 00:45:28.532 } 00:45:28.532 13:06:54 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:45:28.532 13:06:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:45:28.791 13:06:54 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:45:28.791 13:06:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:28.791 13:06:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:28.791 13:06:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:28.791 13:06:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:28.791 13:06:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:29.050 13:06:54 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:45:29.050 13:06:54 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:45:29.050 13:06:54 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:29.050 13:06:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:29.050 13:06:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:29.051 13:06:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:29.051 13:06:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:29.051 13:06:55 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:45:29.051 13:06:55 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:45:29.051 
13:06:55 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:45:29.051 13:06:55 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:45:29.051 13:06:55 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:45:29.051 13:06:55 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:45:29.051 13:06:55 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:45:29.051 13:06:55 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:45:29.051 13:06:55 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:45:29.051 13:06:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:45:29.309 [2024-12-16 13:06:55.247031] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:45:29.309 [2024-12-16 13:06:55.247912] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f500b0 (107): Transport endpoint is not connected 00:45:29.309 [2024-12-16 13:06:55.248907] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f500b0 (9): Bad file descriptor 00:45:29.309 [2024-12-16 13:06:55.249908] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:45:29.309 [2024-12-16 13:06:55.249917] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:45:29.309 [2024-12-16 13:06:55.249924] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:45:29.309 [2024-12-16 13:06:55.249933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
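The request/response dump that follows records this failed attach: the controller was pointed at key1 while the earlier successful attach used key0, so TLS setup collapses and the RPC surfaces code -5, "Input/output error". At script level the negative-path check reduces to roughly this sketch, with the socket path and NQNs taken from the trace and plain shell negation standing in for the test suite's NOT helper:

! scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
    --psk key1 \
    && echo "attach with the wrong PSK failed, as expected"
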
00:45:29.309 request: 00:45:29.309 { 00:45:29.309 "name": "nvme0", 00:45:29.309 "trtype": "tcp", 00:45:29.309 "traddr": "127.0.0.1", 00:45:29.309 "adrfam": "ipv4", 00:45:29.309 "trsvcid": "4420", 00:45:29.309 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:29.309 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:29.309 "prchk_reftag": false, 00:45:29.309 "prchk_guard": false, 00:45:29.309 "hdgst": false, 00:45:29.309 "ddgst": false, 00:45:29.309 "psk": "key1", 00:45:29.309 "allow_unrecognized_csi": false, 00:45:29.309 "method": "bdev_nvme_attach_controller", 00:45:29.309 "req_id": 1 00:45:29.309 } 00:45:29.309 Got JSON-RPC error response 00:45:29.309 response: 00:45:29.309 { 00:45:29.309 "code": -5, 00:45:29.309 "message": "Input/output error" 00:45:29.309 } 00:45:29.309 13:06:55 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:45:29.309 13:06:55 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:45:29.309 13:06:55 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:45:29.309 13:06:55 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:45:29.309 13:06:55 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:45:29.309 13:06:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:29.309 13:06:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:29.309 13:06:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:29.309 13:06:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:29.309 13:06:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:29.567 13:06:55 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:45:29.567 13:06:55 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:45:29.567 13:06:55 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:29.567 13:06:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:29.567 13:06:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:29.567 13:06:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:29.567 13:06:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:29.826 13:06:55 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:45:29.826 13:06:55 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:45:29.826 13:06:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:45:29.826 13:06:55 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:45:29.826 13:06:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:45:30.085 13:06:56 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:45:30.085 13:06:56 keyring_file -- keyring/file.sh@78 -- # jq length 00:45:30.085 13:06:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:30.343 13:06:56 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:45:30.343 13:06:56 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.H2HgXnikAR 00:45:30.343 13:06:56 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.H2HgXnikAR 00:45:30.343 13:06:56 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:45:30.343 13:06:56 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.H2HgXnikAR 00:45:30.343 13:06:56 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:45:30.343 13:06:56 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:45:30.343 13:06:56 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:45:30.343 13:06:56 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:45:30.343 13:06:56 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.H2HgXnikAR 00:45:30.343 13:06:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.H2HgXnikAR 00:45:30.344 [2024-12-16 13:06:56.405979] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.H2HgXnikAR': 0100660 00:45:30.344 [2024-12-16 13:06:56.406004] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:45:30.602 request: 00:45:30.602 { 00:45:30.602 "name": "key0", 00:45:30.602 "path": "/tmp/tmp.H2HgXnikAR", 00:45:30.602 "method": "keyring_file_add_key", 00:45:30.602 "req_id": 1 00:45:30.602 } 00:45:30.602 Got JSON-RPC error response 00:45:30.602 response: 00:45:30.602 { 00:45:30.602 "code": -1, 00:45:30.602 "message": "Operation not permitted" 00:45:30.602 } 00:45:30.602 13:06:56 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:45:30.602 13:06:56 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:45:30.602 13:06:56 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:45:30.602 13:06:56 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:45:30.602 13:06:56 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.H2HgXnikAR 00:45:30.602 13:06:56 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.H2HgXnikAR 00:45:30.602 13:06:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.H2HgXnikAR 00:45:30.602 13:06:56 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.H2HgXnikAR 00:45:30.602 13:06:56 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:45:30.602 13:06:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:30.602 13:06:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:30.602 13:06:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:30.602 13:06:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:30.602 13:06:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:30.862 13:06:56 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:45:30.862 13:06:56 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:30.862 13:06:56 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:45:30.862 13:06:56 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:30.862 13:06:56 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:45:30.862 13:06:56 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:45:30.862 13:06:56 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:45:30.862 13:06:56 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:45:30.862 13:06:56 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:30.862 13:06:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:31.121 [2024-12-16 13:06:57.011582] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.H2HgXnikAR': No such file or directory 00:45:31.121 [2024-12-16 13:06:57.011604] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:45:31.121 [2024-12-16 13:06:57.011620] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:45:31.121 [2024-12-16 13:06:57.011626] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:45:31.121 [2024-12-16 13:06:57.011633] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:45:31.121 [2024-12-16 13:06:57.011638] bdev_nvme.c:6447:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:45:31.121 request: 00:45:31.121 { 00:45:31.121 "name": "nvme0", 00:45:31.121 "trtype": "tcp", 00:45:31.121 "traddr": "127.0.0.1", 00:45:31.121 "adrfam": "ipv4", 00:45:31.121 "trsvcid": "4420", 00:45:31.121 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:31.121 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:31.121 "prchk_reftag": false, 00:45:31.121 "prchk_guard": false, 00:45:31.121 "hdgst": false, 00:45:31.121 "ddgst": false, 00:45:31.121 "psk": "key0", 00:45:31.121 "allow_unrecognized_csi": false, 00:45:31.121 "method": "bdev_nvme_attach_controller", 00:45:31.121 "req_id": 1 00:45:31.121 } 00:45:31.121 Got JSON-RPC error response 00:45:31.121 response: 00:45:31.121 { 00:45:31.121 "code": -19, 00:45:31.121 "message": "No such device" 00:45:31.121 } 00:45:31.121 13:06:57 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:45:31.121 13:06:57 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:45:31.121 13:06:57 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:45:31.121 13:06:57 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:45:31.121 13:06:57 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:45:31.121 13:06:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:45:31.380 13:06:57 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:45:31.380 13:06:57 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:45:31.380 13:06:57 keyring_file -- keyring/common.sh@17 -- # name=key0 00:45:31.380 13:06:57 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:45:31.380 13:06:57 keyring_file -- keyring/common.sh@17 -- # digest=0 00:45:31.380 13:06:57 keyring_file -- keyring/common.sh@18 -- # mktemp 00:45:31.380 13:06:57 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.PBxWqN8ULj 00:45:31.380 13:06:57 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:45:31.380 13:06:57 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:45:31.380 13:06:57 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:45:31.380 13:06:57 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:45:31.380 13:06:57 keyring_file -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:45:31.380 13:06:57 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:45:31.380 13:06:57 keyring_file -- nvmf/common.sh@729 -- # python - 00:45:31.380 13:06:57 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.PBxWqN8ULj 00:45:31.380 13:06:57 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.PBxWqN8ULj 00:45:31.380 13:06:57 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.PBxWqN8ULj 00:45:31.380 13:06:57 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.PBxWqN8ULj 00:45:31.380 13:06:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.PBxWqN8ULj 00:45:31.639 13:06:57 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:31.639 13:06:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:31.639 nvme0n1 00:45:31.898 13:06:57 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:45:31.898 13:06:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:31.898 13:06:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:31.898 13:06:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:31.898 13:06:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:31.898 13:06:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:31.898 13:06:57 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:45:31.898 13:06:57 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:45:31.898 13:06:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:45:32.157 13:06:58 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:45:32.157 13:06:58 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:45:32.157 13:06:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:32.157 13:06:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:32.157 13:06:58 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:32.416 13:06:58 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:45:32.416 13:06:58 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:45:32.416 13:06:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:32.416 13:06:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:32.416 13:06:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:32.416 13:06:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:32.416 13:06:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:32.675 13:06:58 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:45:32.675 13:06:58 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:45:32.675 13:06:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:45:32.675 13:06:58 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:45:32.675 13:06:58 keyring_file -- keyring/file.sh@105 -- # jq length 00:45:32.675 13:06:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:32.934 13:06:58 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:45:32.934 13:06:58 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.PBxWqN8ULj 00:45:32.934 13:06:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.PBxWqN8ULj 00:45:33.194 13:06:59 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.kzBxYH0rQ4 00:45:33.194 13:06:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.kzBxYH0rQ4 00:45:33.453 13:06:59 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:33.453 13:06:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:33.453 nvme0n1 00:45:33.453 13:06:59 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:45:33.453 13:06:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:45:33.712 13:06:59 keyring_file -- keyring/file.sh@113 -- # config='{ 00:45:33.712 "subsystems": [ 00:45:33.712 { 00:45:33.712 "subsystem": "keyring", 00:45:33.712 "config": [ 00:45:33.712 { 00:45:33.712 "method": "keyring_file_add_key", 00:45:33.712 "params": { 00:45:33.712 "name": "key0", 00:45:33.712 "path": "/tmp/tmp.PBxWqN8ULj" 00:45:33.712 } 00:45:33.712 }, 00:45:33.712 { 00:45:33.712 "method": "keyring_file_add_key", 00:45:33.712 "params": { 00:45:33.712 "name": "key1", 00:45:33.712 "path": "/tmp/tmp.kzBxYH0rQ4" 00:45:33.712 } 00:45:33.712 } 00:45:33.712 ] 00:45:33.712 
}, 00:45:33.712 { 00:45:33.712 "subsystem": "iobuf", 00:45:33.712 "config": [ 00:45:33.712 { 00:45:33.712 "method": "iobuf_set_options", 00:45:33.712 "params": { 00:45:33.712 "small_pool_count": 8192, 00:45:33.712 "large_pool_count": 1024, 00:45:33.712 "small_bufsize": 8192, 00:45:33.712 "large_bufsize": 135168 00:45:33.712 } 00:45:33.712 } 00:45:33.712 ] 00:45:33.712 }, 00:45:33.712 { 00:45:33.712 "subsystem": "sock", 00:45:33.712 "config": [ 00:45:33.712 { 00:45:33.712 "method": "sock_set_default_impl", 00:45:33.712 "params": { 00:45:33.712 "impl_name": "posix" 00:45:33.712 } 00:45:33.712 }, 00:45:33.712 { 00:45:33.712 "method": "sock_impl_set_options", 00:45:33.712 "params": { 00:45:33.712 "impl_name": "ssl", 00:45:33.712 "recv_buf_size": 4096, 00:45:33.712 "send_buf_size": 4096, 00:45:33.712 "enable_recv_pipe": true, 00:45:33.712 "enable_quickack": false, 00:45:33.712 "enable_placement_id": 0, 00:45:33.712 "enable_zerocopy_send_server": true, 00:45:33.712 "enable_zerocopy_send_client": false, 00:45:33.712 "zerocopy_threshold": 0, 00:45:33.712 "tls_version": 0, 00:45:33.712 "enable_ktls": false 00:45:33.712 } 00:45:33.712 }, 00:45:33.712 { 00:45:33.712 "method": "sock_impl_set_options", 00:45:33.712 "params": { 00:45:33.712 "impl_name": "posix", 00:45:33.712 "recv_buf_size": 2097152, 00:45:33.712 "send_buf_size": 2097152, 00:45:33.712 "enable_recv_pipe": true, 00:45:33.712 "enable_quickack": false, 00:45:33.712 "enable_placement_id": 0, 00:45:33.712 "enable_zerocopy_send_server": true, 00:45:33.712 "enable_zerocopy_send_client": false, 00:45:33.712 "zerocopy_threshold": 0, 00:45:33.712 "tls_version": 0, 00:45:33.712 "enable_ktls": false 00:45:33.712 } 00:45:33.712 } 00:45:33.712 ] 00:45:33.713 }, 00:45:33.713 { 00:45:33.713 "subsystem": "vmd", 00:45:33.713 "config": [] 00:45:33.713 }, 00:45:33.713 { 00:45:33.713 "subsystem": "accel", 00:45:33.713 "config": [ 00:45:33.713 { 00:45:33.713 "method": "accel_set_options", 00:45:33.713 "params": { 00:45:33.713 "small_cache_size": 128, 00:45:33.713 "large_cache_size": 16, 00:45:33.713 "task_count": 2048, 00:45:33.713 "sequence_count": 2048, 00:45:33.713 "buf_count": 2048 00:45:33.713 } 00:45:33.713 } 00:45:33.713 ] 00:45:33.713 }, 00:45:33.713 { 00:45:33.713 "subsystem": "bdev", 00:45:33.713 "config": [ 00:45:33.713 { 00:45:33.713 "method": "bdev_set_options", 00:45:33.713 "params": { 00:45:33.713 "bdev_io_pool_size": 65535, 00:45:33.713 "bdev_io_cache_size": 256, 00:45:33.713 "bdev_auto_examine": true, 00:45:33.713 "iobuf_small_cache_size": 128, 00:45:33.713 "iobuf_large_cache_size": 16 00:45:33.713 } 00:45:33.713 }, 00:45:33.713 { 00:45:33.713 "method": "bdev_raid_set_options", 00:45:33.713 "params": { 00:45:33.713 "process_window_size_kb": 1024, 00:45:33.713 "process_max_bandwidth_mb_sec": 0 00:45:33.713 } 00:45:33.713 }, 00:45:33.713 { 00:45:33.713 "method": "bdev_iscsi_set_options", 00:45:33.713 "params": { 00:45:33.713 "timeout_sec": 30 00:45:33.713 } 00:45:33.713 }, 00:45:33.713 { 00:45:33.713 "method": "bdev_nvme_set_options", 00:45:33.713 "params": { 00:45:33.713 "action_on_timeout": "none", 00:45:33.713 "timeout_us": 0, 00:45:33.713 "timeout_admin_us": 0, 00:45:33.713 "keep_alive_timeout_ms": 10000, 00:45:33.713 "arbitration_burst": 0, 00:45:33.713 "low_priority_weight": 0, 00:45:33.713 "medium_priority_weight": 0, 00:45:33.713 "high_priority_weight": 0, 00:45:33.713 "nvme_adminq_poll_period_us": 10000, 00:45:33.713 "nvme_ioq_poll_period_us": 0, 00:45:33.713 "io_queue_requests": 512, 00:45:33.713 "delay_cmd_submit": true, 00:45:33.713 
"transport_retry_count": 4, 00:45:33.713 "bdev_retry_count": 3, 00:45:33.713 "transport_ack_timeout": 0, 00:45:33.713 "ctrlr_loss_timeout_sec": 0, 00:45:33.713 "reconnect_delay_sec": 0, 00:45:33.713 "fast_io_fail_timeout_sec": 0, 00:45:33.713 "disable_auto_failback": false, 00:45:33.713 "generate_uuids": false, 00:45:33.713 "transport_tos": 0, 00:45:33.713 "nvme_error_stat": false, 00:45:33.713 "rdma_srq_size": 0, 00:45:33.713 "io_path_stat": false, 00:45:33.713 "allow_accel_sequence": false, 00:45:33.713 "rdma_max_cq_size": 0, 00:45:33.713 "rdma_cm_event_timeout_ms": 0, 00:45:33.713 "dhchap_digests": [ 00:45:33.713 "sha256", 00:45:33.713 "sha384", 00:45:33.713 "sha512" 00:45:33.713 ], 00:45:33.713 "dhchap_dhgroups": [ 00:45:33.713 "null", 00:45:33.713 "ffdhe2048", 00:45:33.713 "ffdhe3072", 00:45:33.713 "ffdhe4096", 00:45:33.713 "ffdhe6144", 00:45:33.713 "ffdhe8192" 00:45:33.713 ] 00:45:33.713 } 00:45:33.713 }, 00:45:33.713 { 00:45:33.713 "method": "bdev_nvme_attach_controller", 00:45:33.713 "params": { 00:45:33.713 "name": "nvme0", 00:45:33.713 "trtype": "TCP", 00:45:33.713 "adrfam": "IPv4", 00:45:33.713 "traddr": "127.0.0.1", 00:45:33.713 "trsvcid": "4420", 00:45:33.713 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:33.713 "prchk_reftag": false, 00:45:33.713 "prchk_guard": false, 00:45:33.713 "ctrlr_loss_timeout_sec": 0, 00:45:33.713 "reconnect_delay_sec": 0, 00:45:33.713 "fast_io_fail_timeout_sec": 0, 00:45:33.713 "psk": "key0", 00:45:33.713 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:33.713 "hdgst": false, 00:45:33.713 "ddgst": false 00:45:33.713 } 00:45:33.713 }, 00:45:33.713 { 00:45:33.713 "method": "bdev_nvme_set_hotplug", 00:45:33.713 "params": { 00:45:33.713 "period_us": 100000, 00:45:33.713 "enable": false 00:45:33.713 } 00:45:33.713 }, 00:45:33.713 { 00:45:33.713 "method": "bdev_wait_for_examine" 00:45:33.713 } 00:45:33.713 ] 00:45:33.713 }, 00:45:33.713 { 00:45:33.713 "subsystem": "nbd", 00:45:33.713 "config": [] 00:45:33.713 } 00:45:33.713 ] 00:45:33.713 }' 00:45:33.713 13:06:59 keyring_file -- keyring/file.sh@115 -- # killprocess 701863 00:45:33.713 13:06:59 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 701863 ']' 00:45:33.713 13:06:59 keyring_file -- common/autotest_common.sh@954 -- # kill -0 701863 00:45:33.713 13:06:59 keyring_file -- common/autotest_common.sh@955 -- # uname 00:45:33.973 13:06:59 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:45:33.973 13:06:59 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 701863 00:45:33.973 13:06:59 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:45:33.973 13:06:59 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:45:33.973 13:06:59 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 701863' 00:45:33.973 killing process with pid 701863 00:45:33.973 13:06:59 keyring_file -- common/autotest_common.sh@969 -- # kill 701863 00:45:33.973 Received shutdown signal, test time was about 1.000000 seconds 00:45:33.973 00:45:33.973 Latency(us) 00:45:33.973 [2024-12-16T12:07:00.040Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:33.973 [2024-12-16T12:07:00.040Z] =================================================================================================================== 00:45:33.973 [2024-12-16T12:07:00.040Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:45:33.973 13:06:59 keyring_file -- common/autotest_common.sh@974 -- # wait 701863 00:45:33.973 13:07:00 
keyring_file -- keyring/file.sh@118 -- # bperfpid=703333 00:45:33.973 13:07:00 keyring_file -- keyring/file.sh@120 -- # waitforlisten 703333 /var/tmp/bperf.sock 00:45:33.973 13:07:00 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 703333 ']' 00:45:33.973 13:07:00 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:45:33.973 13:07:00 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:45:33.973 13:07:00 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:45:33.973 13:07:00 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:45:33.973 13:07:00 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:45:33.973 "subsystems": [ 00:45:33.973 { 00:45:33.973 "subsystem": "keyring", 00:45:33.973 "config": [ 00:45:33.973 { 00:45:33.973 "method": "keyring_file_add_key", 00:45:33.973 "params": { 00:45:33.973 "name": "key0", 00:45:33.973 "path": "/tmp/tmp.PBxWqN8ULj" 00:45:33.973 } 00:45:33.973 }, 00:45:33.973 { 00:45:33.973 "method": "keyring_file_add_key", 00:45:33.973 "params": { 00:45:33.973 "name": "key1", 00:45:33.973 "path": "/tmp/tmp.kzBxYH0rQ4" 00:45:33.973 } 00:45:33.973 } 00:45:33.973 ] 00:45:33.973 }, 00:45:33.973 { 00:45:33.973 "subsystem": "iobuf", 00:45:33.973 "config": [ 00:45:33.973 { 00:45:33.973 "method": "iobuf_set_options", 00:45:33.973 "params": { 00:45:33.973 "small_pool_count": 8192, 00:45:33.973 "large_pool_count": 1024, 00:45:33.973 "small_bufsize": 8192, 00:45:33.973 "large_bufsize": 135168 00:45:33.973 } 00:45:33.973 } 00:45:33.973 ] 00:45:33.973 }, 00:45:33.973 { 00:45:33.973 "subsystem": "sock", 00:45:33.973 "config": [ 00:45:33.973 { 00:45:33.973 "method": "sock_set_default_impl", 00:45:33.973 "params": { 00:45:33.973 "impl_name": "posix" 00:45:33.973 } 00:45:33.973 }, 00:45:33.973 { 00:45:33.973 "method": "sock_impl_set_options", 00:45:33.973 "params": { 00:45:33.973 "impl_name": "ssl", 00:45:33.973 "recv_buf_size": 4096, 00:45:33.973 "send_buf_size": 4096, 00:45:33.973 "enable_recv_pipe": true, 00:45:33.973 "enable_quickack": false, 00:45:33.973 "enable_placement_id": 0, 00:45:33.973 "enable_zerocopy_send_server": true, 00:45:33.973 "enable_zerocopy_send_client": false, 00:45:33.973 "zerocopy_threshold": 0, 00:45:33.973 "tls_version": 0, 00:45:33.973 "enable_ktls": false 00:45:33.973 } 00:45:33.973 }, 00:45:33.973 { 00:45:33.973 "method": "sock_impl_set_options", 00:45:33.973 "params": { 00:45:33.973 "impl_name": "posix", 00:45:33.973 "recv_buf_size": 2097152, 00:45:33.973 "send_buf_size": 2097152, 00:45:33.973 "enable_recv_pipe": true, 00:45:33.973 "enable_quickack": false, 00:45:33.973 "enable_placement_id": 0, 00:45:33.973 "enable_zerocopy_send_server": true, 00:45:33.973 "enable_zerocopy_send_client": false, 00:45:33.973 "zerocopy_threshold": 0, 00:45:33.974 "tls_version": 0, 00:45:33.974 "enable_ktls": false 00:45:33.974 } 00:45:33.974 } 00:45:33.974 ] 00:45:33.974 }, 00:45:33.974 { 00:45:33.974 "subsystem": "vmd", 00:45:33.974 "config": [] 00:45:33.974 }, 00:45:33.974 { 00:45:33.974 "subsystem": "accel", 00:45:33.974 "config": [ 00:45:33.974 { 00:45:33.974 "method": "accel_set_options", 00:45:33.974 "params": { 00:45:33.974 "small_cache_size": 128, 00:45:33.974 "large_cache_size": 16, 00:45:33.974 "task_count": 2048, 00:45:33.974 "sequence_count": 2048, 
00:45:33.974 "buf_count": 2048 00:45:33.974 } 00:45:33.974 } 00:45:33.974 ] 00:45:33.974 }, 00:45:33.974 { 00:45:33.974 "subsystem": "bdev", 00:45:33.974 "config": [ 00:45:33.974 { 00:45:33.974 "method": "bdev_set_options", 00:45:33.974 "params": { 00:45:33.974 "bdev_io_pool_size": 65535, 00:45:33.974 "bdev_io_cache_size": 256, 00:45:33.974 "bdev_auto_examine": true, 00:45:33.974 "iobuf_small_cache_size": 128, 00:45:33.974 "iobuf_large_cache_size": 16 00:45:33.974 } 00:45:33.974 }, 00:45:33.974 { 00:45:33.974 "method": "bdev_raid_set_options", 00:45:33.974 "params": { 00:45:33.974 "process_window_size_kb": 1024, 00:45:33.974 "process_max_bandwidth_mb_sec": 0 00:45:33.974 } 00:45:33.974 }, 00:45:33.974 { 00:45:33.974 "method": "bdev_iscsi_set_options", 00:45:33.974 "params": { 00:45:33.974 "timeout_sec": 30 00:45:33.974 } 00:45:33.974 }, 00:45:33.974 { 00:45:33.974 "method": "bdev_nvme_set_options", 00:45:33.974 "params": { 00:45:33.974 "action_on_timeout": "none", 00:45:33.974 "timeout_us": 0, 00:45:33.974 "timeout_admin_us": 0, 00:45:33.974 "keep_alive_timeout_ms": 10000, 00:45:33.974 "arbitration_burst": 0, 00:45:33.974 "low_priority_weight": 0, 00:45:33.974 "medium_priority_weight": 0, 00:45:33.974 "high_priority_weight": 0, 00:45:33.974 "nvme_adminq_poll_period_us": 10000, 00:45:33.974 "nvme_ioq_poll_period_us": 0, 00:45:33.974 "io_queue_requests": 512, 00:45:33.974 "delay_cmd_submit": true, 00:45:33.974 "transport_retry_count": 4, 00:45:33.974 "bdev_retry_count": 3, 00:45:33.974 "transport_ack_timeout": 0, 00:45:33.974 "ctrlr_loss_timeout_sec": 0, 00:45:33.974 "reconnect_delay_sec": 0, 00:45:33.974 "fast_io_fail_timeout_sec": 0, 00:45:33.974 "disable_auto_failback": false, 00:45:33.974 "generate_uuids": false, 00:45:33.974 "transport_tos": 0, 00:45:33.974 "nvme_error_stat": false, 00:45:33.974 "rdma_srq_size": 0, 00:45:33.974 "io_path_stat": false, 00:45:33.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:45:33.974 "allow_accel_sequence": false, 00:45:33.974 "rdma_max_cq_size": 0, 00:45:33.974 "rdma_cm_event_timeout_ms": 0, 00:45:33.974 "dhchap_digests": [ 00:45:33.974 "sha256", 00:45:33.974 "sha384", 00:45:33.974 "sha512" 00:45:33.974 ], 00:45:33.974 "dhchap_dhgroups": [ 00:45:33.974 "null", 00:45:33.974 "ffdhe2048", 00:45:33.974 "ffdhe3072", 00:45:33.974 "ffdhe4096", 00:45:33.974 "ffdhe6144", 00:45:33.974 "ffdhe8192" 00:45:33.974 ] 00:45:33.974 } 00:45:33.974 }, 00:45:33.974 { 00:45:33.974 "method": "bdev_nvme_attach_controller", 00:45:33.974 "params": { 00:45:33.974 "name": "nvme0", 00:45:33.974 "trtype": "TCP", 00:45:33.974 "adrfam": "IPv4", 00:45:33.974 "traddr": "127.0.0.1", 00:45:33.974 "trsvcid": "4420", 00:45:33.974 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:33.974 "prchk_reftag": false, 00:45:33.974 "prchk_guard": false, 00:45:33.974 "ctrlr_loss_timeout_sec": 0, 00:45:33.974 "reconnect_delay_sec": 0, 00:45:33.974 "fast_io_fail_timeout_sec": 0, 00:45:33.974 "psk": "key0", 00:45:33.974 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:33.974 "hdgst": false, 00:45:33.974 "ddgst": false 00:45:33.974 } 00:45:33.974 }, 00:45:33.974 { 00:45:33.974 "method": "bdev_nvme_set_hotplug", 00:45:33.974 "params": { 00:45:33.974 "period_us": 100000, 00:45:33.974 "enable": false 00:45:33.974 } 00:45:33.974 }, 00:45:33.974 { 00:45:33.974 "method": "bdev_wait_for_examine" 00:45:33.974 } 00:45:33.974 ] 00:45:33.974 }, 00:45:33.974 { 00:45:33.974 "subsystem": "nbd", 00:45:33.974 "config": [] 00:45:33.974 } 00:45:33.974 ] 00:45:33.974 }' 00:45:33.974 13:07:00 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:45:33.974 13:07:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:34.233 [2024-12-16 13:07:00.055010] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:45:34.233 [2024-12-16 13:07:00.055063] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid703333 ] 00:45:34.233 [2024-12-16 13:07:00.124065] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:34.233 [2024-12-16 13:07:00.163573] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:45:34.492 [2024-12-16 13:07:00.318034] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:45:35.059 13:07:00 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:45:35.059 13:07:00 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:45:35.059 13:07:00 keyring_file -- keyring/file.sh@121 -- # jq length 00:45:35.059 13:07:00 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:45:35.059 13:07:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:35.059 13:07:01 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:45:35.059 13:07:01 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:45:35.059 13:07:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:35.059 13:07:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:35.059 13:07:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:35.059 13:07:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:35.059 13:07:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:35.318 13:07:01 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:45:35.318 13:07:01 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:45:35.318 13:07:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:35.318 13:07:01 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:35.318 13:07:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:35.318 13:07:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:35.318 13:07:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:35.577 13:07:01 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:45:35.577 13:07:01 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:45:35.577 13:07:01 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:45:35.577 13:07:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:45:35.862 13:07:01 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:45:35.862 13:07:01 keyring_file -- keyring/file.sh@1 -- # cleanup 00:45:35.862 13:07:01 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.PBxWqN8ULj /tmp/tmp.kzBxYH0rQ4 00:45:35.862 13:07:01 keyring_file -- keyring/file.sh@20 -- # killprocess 703333 00:45:35.862 13:07:01 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 703333 ']' 00:45:35.862 13:07:01 keyring_file -- common/autotest_common.sh@954 -- # kill -0 703333 00:45:35.862 13:07:01 keyring_file -- common/autotest_common.sh@955 -- # uname 00:45:35.862 13:07:01 keyring_file 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:45:35.862 13:07:01 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 703333 00:45:35.862 13:07:01 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:45:35.862 13:07:01 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:45:35.862 13:07:01 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 703333' 00:45:35.862 killing process with pid 703333 00:45:35.862 13:07:01 keyring_file -- common/autotest_common.sh@969 -- # kill 703333 00:45:35.862 Received shutdown signal, test time was about 1.000000 seconds 00:45:35.862 00:45:35.862 Latency(us) 00:45:35.862 [2024-12-16T12:07:01.929Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:35.862 [2024-12-16T12:07:01.929Z] =================================================================================================================== 00:45:35.862 [2024-12-16T12:07:01.929Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:45:35.862 13:07:01 keyring_file -- common/autotest_common.sh@974 -- # wait 703333 00:45:35.862 13:07:01 keyring_file -- keyring/file.sh@21 -- # killprocess 701853 00:45:35.862 13:07:01 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 701853 ']' 00:45:35.862 13:07:01 keyring_file -- common/autotest_common.sh@954 -- # kill -0 701853 00:45:35.862 13:07:01 keyring_file -- common/autotest_common.sh@955 -- # uname 00:45:35.862 13:07:01 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:45:35.862 13:07:01 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 701853 00:45:36.121 13:07:01 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:45:36.121 13:07:01 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:45:36.121 13:07:01 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 701853' 00:45:36.121 killing process with pid 701853 00:45:36.121 13:07:01 keyring_file -- common/autotest_common.sh@969 -- # kill 701853 00:45:36.121 13:07:01 keyring_file -- common/autotest_common.sh@974 -- # wait 701853 00:45:36.380 00:45:36.380 real 0m11.704s 00:45:36.380 user 0m28.992s 00:45:36.380 sys 0m2.726s 00:45:36.380 13:07:02 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:36.380 13:07:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:36.380 ************************************ 00:45:36.380 END TEST keyring_file 00:45:36.380 ************************************ 00:45:36.380 13:07:02 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:45:36.380 13:07:02 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:45:36.380 13:07:02 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:45:36.380 13:07:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:45:36.380 13:07:02 -- common/autotest_common.sh@10 -- # set +x 00:45:36.380 ************************************ 00:45:36.380 START TEST keyring_linux 00:45:36.380 ************************************ 00:45:36.380 13:07:02 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:45:36.380 Joined session keyring: 894315673 
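The "Joined session keyring" line above is printed by the keyctl-session-wrapper. A minimal sketch of that wrapper pattern, assuming the keyctl(1) utility from keyutils (the real scripts/keyctl-session-wrapper may differ in detail): join a fresh anonymous session keyring, so every key the test adds to "@s" stays scoped to this run, then exec the test script.

# keyctl prints "Joined session keyring: <serial>" to stderr and execs
# the given command inside the newly created session keyring.
exec keyctl session - /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh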
00:45:36.380 * Looking for test storage... 00:45:36.380 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:45:36.380 13:07:02 keyring_linux -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:45:36.380 13:07:02 keyring_linux -- common/autotest_common.sh@1681 -- # lcov --version 00:45:36.380 13:07:02 keyring_linux -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:45:36.641 13:07:02 keyring_linux -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:45:36.641 13:07:02 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:36.641 13:07:02 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:36.641 13:07:02 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:36.641 13:07:02 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:45:36.641 13:07:02 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:45:36.641 13:07:02 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:45:36.641 13:07:02 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:45:36.641 13:07:02 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:45:36.641 13:07:02 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:45:36.641 13:07:02 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:45:36.641 13:07:02 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:36.641 13:07:02 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:45:36.641 13:07:02 keyring_linux -- scripts/common.sh@345 -- # : 1 00:45:36.641 13:07:02 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:36.641 13:07:02 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:45:36.641 13:07:02 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:45:36.641 13:07:02 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:45:36.641 13:07:02 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:36.641 13:07:02 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:45:36.641 13:07:02 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:45:36.641 13:07:02 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:45:36.641 13:07:02 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:45:36.641 13:07:02 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:36.641 13:07:02 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:45:36.641 13:07:02 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:45:36.641 13:07:02 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:36.641 13:07:02 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:36.641 13:07:02 keyring_linux -- scripts/common.sh@368 -- # return 0 00:45:36.641 13:07:02 keyring_linux -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:36.641 13:07:02 keyring_linux -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:45:36.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:36.641 --rc genhtml_branch_coverage=1 00:45:36.641 --rc genhtml_function_coverage=1 00:45:36.641 --rc genhtml_legend=1 00:45:36.641 --rc geninfo_all_blocks=1 00:45:36.641 --rc geninfo_unexecuted_blocks=1 00:45:36.641 00:45:36.641 ' 00:45:36.641 13:07:02 keyring_linux -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:45:36.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:36.641 --rc genhtml_branch_coverage=1 00:45:36.641 --rc genhtml_function_coverage=1 
00:45:36.641 --rc genhtml_legend=1 00:45:36.641 --rc geninfo_all_blocks=1 00:45:36.641 --rc geninfo_unexecuted_blocks=1 00:45:36.641 00:45:36.641 ' 00:45:36.641 13:07:02 keyring_linux -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:45:36.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:36.641 --rc genhtml_branch_coverage=1 00:45:36.641 --rc genhtml_function_coverage=1 00:45:36.641 --rc genhtml_legend=1 00:45:36.641 --rc geninfo_all_blocks=1 00:45:36.641 --rc geninfo_unexecuted_blocks=1 00:45:36.641 00:45:36.641 ' 00:45:36.641 13:07:02 keyring_linux -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:45:36.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:36.641 --rc genhtml_branch_coverage=1 00:45:36.641 --rc genhtml_function_coverage=1 00:45:36.641 --rc genhtml_legend=1 00:45:36.641 --rc geninfo_all_blocks=1 00:45:36.641 --rc geninfo_unexecuted_blocks=1 00:45:36.641 00:45:36.641 ' 00:45:36.641 13:07:02 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:45:36.641 13:07:02 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:36.641 13:07:02 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:45:36.641 13:07:02 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:36.641 13:07:02 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:36.641 13:07:02 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:36.641 13:07:02 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:36.641 13:07:02 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:36.641 13:07:02 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:36.641 13:07:02 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:36.641 13:07:02 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:36.641 13:07:02 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:36.641 13:07:02 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:36.641 13:07:02 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:45:36.641 13:07:02 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:45:36.641 13:07:02 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:36.641 13:07:02 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:36.641 13:07:02 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:36.641 13:07:02 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:36.641 13:07:02 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:36.641 13:07:02 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:45:36.641 13:07:02 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:36.641 13:07:02 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:36.641 13:07:02 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:36.641 13:07:02 keyring_linux -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:36.641 13:07:02 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:36.641 13:07:02 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:36.641 13:07:02 keyring_linux -- paths/export.sh@5 -- # export PATH 00:45:36.641 13:07:02 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:36.641 13:07:02 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:45:36.641 13:07:02 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:36.641 13:07:02 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:36.641 13:07:02 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:36.641 13:07:02 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:36.641 13:07:02 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:36.641 13:07:02 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:45:36.641 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:45:36.641 13:07:02 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:36.641 13:07:02 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:36.641 13:07:02 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:36.641 13:07:02 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:45:36.641 13:07:02 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:45:36.642 13:07:02 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:45:36.642 13:07:02 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:45:36.642 13:07:02 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:45:36.642 13:07:02 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:45:36.642 13:07:02 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:45:36.642 13:07:02 keyring_linux -- 
keyring/common.sh@15 -- # local name key digest path 00:45:36.642 13:07:02 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:45:36.642 13:07:02 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:45:36.642 13:07:02 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:45:36.642 13:07:02 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:45:36.642 13:07:02 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:45:36.642 13:07:02 keyring_linux -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:45:36.642 13:07:02 keyring_linux -- nvmf/common.sh@726 -- # local prefix key digest 00:45:36.642 13:07:02 keyring_linux -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:45:36.642 13:07:02 keyring_linux -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:45:36.642 13:07:02 keyring_linux -- nvmf/common.sh@728 -- # digest=0 00:45:36.642 13:07:02 keyring_linux -- nvmf/common.sh@729 -- # python - 00:45:36.642 13:07:02 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:45:36.642 13:07:02 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:45:36.642 /tmp/:spdk-test:key0 00:45:36.642 13:07:02 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:45:36.642 13:07:02 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:45:36.642 13:07:02 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:45:36.642 13:07:02 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:45:36.642 13:07:02 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:45:36.642 13:07:02 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:45:36.642 13:07:02 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:45:36.642 13:07:02 keyring_linux -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:45:36.642 13:07:02 keyring_linux -- nvmf/common.sh@726 -- # local prefix key digest 00:45:36.642 13:07:02 keyring_linux -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:45:36.642 13:07:02 keyring_linux -- nvmf/common.sh@728 -- # key=112233445566778899aabbccddeeff00 00:45:36.642 13:07:02 keyring_linux -- nvmf/common.sh@728 -- # digest=0 00:45:36.642 13:07:02 keyring_linux -- nvmf/common.sh@729 -- # python - 00:45:36.642 13:07:02 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:45:36.642 13:07:02 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:45:36.642 /tmp/:spdk-test:key1 00:45:36.642 13:07:02 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=703874 00:45:36.642 13:07:02 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 703874 00:45:36.642 13:07:02 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:45:36.642 13:07:02 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 703874 ']' 00:45:36.642 13:07:02 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:36.642 13:07:02 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:45:36.642 13:07:02 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:45:36.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:36.642 13:07:02 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:45:36.642 13:07:02 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:45:36.642 [2024-12-16 13:07:02.681976] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:45:36.642 [2024-12-16 13:07:02.682027] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid703874 ] 00:45:36.901 [2024-12-16 13:07:02.751430] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:36.901 [2024-12-16 13:07:02.790951] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:45:37.160 13:07:02 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:45:37.160 13:07:02 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:45:37.160 13:07:02 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:45:37.160 13:07:02 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:37.160 13:07:02 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:45:37.160 [2024-12-16 13:07:02.981050] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:37.160 null0 00:45:37.160 [2024-12-16 13:07:03.013108] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:45:37.160 [2024-12-16 13:07:03.013394] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:45:37.160 13:07:03 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:37.160 13:07:03 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:45:37.160 285390766 00:45:37.160 13:07:03 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:45:37.160 344281820 00:45:37.160 13:07:03 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=703879 00:45:37.160 13:07:03 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 703879 /var/tmp/bperf.sock 00:45:37.160 13:07:03 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:45:37.160 13:07:03 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 703879 ']' 00:45:37.160 13:07:03 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:45:37.160 13:07:03 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:45:37.160 13:07:03 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:45:37.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:45:37.160 13:07:03 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:45:37.160 13:07:03 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:45:37.160 [2024-12-16 13:07:03.083599] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
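The NVMeTLSkey-1 strings loaded into the session keyring above are the NVMe/TCP PSK interchange encoding of the two configured hex key texts. A sketch of that encoding, assuming (as the format_interchange_psk / "python -" helpers earlier suggest, and per the PSK interchange convention) base64 over the key text followed by its little-endian CRC32, with digest field 00 meaning the configured PSK is used without an HMAC transform:

python3 - <<'EOF'
# Sketch: rebuild the :spdk-test:key0 payload added with keyctl above.
import base64, struct, zlib
key = b"00112233445566778899aabbccddeeff"  # key0 text as configured
crc = struct.pack("<I", zlib.crc32(key))   # 4-byte CRC32, little-endian
print("NVMeTLSkey-1:00:%s:" % base64.b64encode(key + crc).decode())
EOF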
00:45:37.160 [2024-12-16 13:07:03.083647] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid703879 ] 00:45:37.160 [2024-12-16 13:07:03.149409] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:37.160 [2024-12-16 13:07:03.188063] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:45:37.420 13:07:03 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:45:37.420 13:07:03 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:45:37.420 13:07:03 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:45:37.420 13:07:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:45:37.420 13:07:03 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:45:37.420 13:07:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:45:37.679 13:07:03 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:45:37.679 13:07:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:45:37.938 [2024-12-16 13:07:03.830926] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:45:37.938 nvme0n1 00:45:37.938 13:07:03 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:45:37.938 13:07:03 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:45:37.938 13:07:03 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:45:37.938 13:07:03 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:45:37.938 13:07:03 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:45:37.938 13:07:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:38.196 13:07:04 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:45:38.196 13:07:04 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:45:38.196 13:07:04 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:45:38.196 13:07:04 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:45:38.196 13:07:04 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:38.196 13:07:04 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:45:38.196 13:07:04 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:38.455 13:07:04 keyring_linux -- keyring/linux.sh@25 -- # sn=285390766 00:45:38.455 13:07:04 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:45:38.455 13:07:04 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:45:38.455 13:07:04 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 285390766 == \2\8\5\3\9\0\7\6\6 ]] 00:45:38.455 13:07:04 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 285390766 00:45:38.455 13:07:04 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:45:38.455 13:07:04 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:45:38.455 Running I/O for 1 seconds... 00:45:39.391 21500.00 IOPS, 83.98 MiB/s 00:45:39.391 Latency(us) 00:45:39.391 [2024-12-16T12:07:05.458Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:39.391 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:45:39.391 nvme0n1 : 1.01 21500.94 83.99 0.00 0.00 5933.58 4681.14 11671.65 00:45:39.391 [2024-12-16T12:07:05.458Z] =================================================================================================================== 00:45:39.391 [2024-12-16T12:07:05.458Z] Total : 21500.94 83.99 0.00 0.00 5933.58 4681.14 11671.65 00:45:39.391 { 00:45:39.391 "results": [ 00:45:39.391 { 00:45:39.391 "job": "nvme0n1", 00:45:39.391 "core_mask": "0x2", 00:45:39.391 "workload": "randread", 00:45:39.391 "status": "finished", 00:45:39.391 "queue_depth": 128, 00:45:39.391 "io_size": 4096, 00:45:39.391 "runtime": 1.005956, 00:45:39.391 "iops": 21500.940398983654, 00:45:39.391 "mibps": 83.9880484335299, 00:45:39.391 "io_failed": 0, 00:45:39.391 "io_timeout": 0, 00:45:39.391 "avg_latency_us": 5933.57708438186, 00:45:39.391 "min_latency_us": 4681.142857142857, 00:45:39.391 "max_latency_us": 11671.649523809523 00:45:39.391 } 00:45:39.391 ], 00:45:39.391 "core_count": 1 00:45:39.391 } 00:45:39.391 13:07:05 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:45:39.391 13:07:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:45:39.650 13:07:05 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:45:39.650 13:07:05 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:45:39.650 13:07:05 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:45:39.650 13:07:05 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:45:39.650 13:07:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:39.650 13:07:05 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:45:39.909 13:07:05 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:45:39.909 13:07:05 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:45:39.909 13:07:05 keyring_linux -- keyring/linux.sh@23 -- # return 00:45:39.909 13:07:05 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:45:39.909 13:07:05 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:45:39.909 13:07:05 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:45:39.909 13:07:05 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:45:39.909 13:07:05 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:45:39.909 13:07:05 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:45:39.909 13:07:05 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:45:39.909 13:07:05 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:45:39.909 13:07:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:45:40.169 [2024-12-16 13:07:06.031909] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:45:40.169 [2024-12-16 13:07:06.032575] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cd4ab0 (107): Transport endpoint is not connected 00:45:40.169 [2024-12-16 13:07:06.033569] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cd4ab0 (9): Bad file descriptor 00:45:40.169 [2024-12-16 13:07:06.034571] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:45:40.169 [2024-12-16 13:07:06.034580] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:45:40.169 [2024-12-16 13:07:06.034587] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:45:40.169 [2024-12-16 13:07:06.034596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
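This attach failure is the expected negative case: the earlier attach with :spdk-test:key0 succeeded, so the handshake with :spdk-test:key1 is presumably rejected for the key mismatch; the raw JSON-RPC request and error response are dumped just below. A hand-run equivalent of the failing call, assuming the bperf instance is still listening on /var/tmp/bperf.sock:

# Expected to fail with "Input/output error", matching the dump below.
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
    --psk :spdk-test:key1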
00:45:40.169 request: 00:45:40.169 { 00:45:40.169 "name": "nvme0", 00:45:40.169 "trtype": "tcp", 00:45:40.169 "traddr": "127.0.0.1", 00:45:40.169 "adrfam": "ipv4", 00:45:40.169 "trsvcid": "4420", 00:45:40.169 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:40.169 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:40.169 "prchk_reftag": false, 00:45:40.169 "prchk_guard": false, 00:45:40.169 "hdgst": false, 00:45:40.169 "ddgst": false, 00:45:40.169 "psk": ":spdk-test:key1", 00:45:40.169 "allow_unrecognized_csi": false, 00:45:40.169 "method": "bdev_nvme_attach_controller", 00:45:40.169 "req_id": 1 00:45:40.169 } 00:45:40.169 Got JSON-RPC error response 00:45:40.169 response: 00:45:40.169 { 00:45:40.169 "code": -5, 00:45:40.169 "message": "Input/output error" 00:45:40.169 } 00:45:40.169 13:07:06 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:45:40.169 13:07:06 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:45:40.169 13:07:06 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:45:40.169 13:07:06 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:45:40.169 13:07:06 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:45:40.169 13:07:06 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:45:40.169 13:07:06 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:45:40.169 13:07:06 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:45:40.169 13:07:06 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:45:40.169 13:07:06 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:45:40.169 13:07:06 keyring_linux -- keyring/linux.sh@33 -- # sn=285390766 00:45:40.169 13:07:06 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 285390766 00:45:40.169 1 links removed 00:45:40.169 13:07:06 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:45:40.169 13:07:06 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:45:40.169 13:07:06 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:45:40.169 13:07:06 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:45:40.169 13:07:06 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:45:40.169 13:07:06 keyring_linux -- keyring/linux.sh@33 -- # sn=344281820 00:45:40.169 13:07:06 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 344281820 00:45:40.169 1 links removed 00:45:40.169 13:07:06 keyring_linux -- keyring/linux.sh@41 -- # killprocess 703879 00:45:40.169 13:07:06 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 703879 ']' 00:45:40.169 13:07:06 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 703879 00:45:40.169 13:07:06 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:45:40.169 13:07:06 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:45:40.169 13:07:06 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 703879 00:45:40.169 13:07:06 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:45:40.169 13:07:06 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:45:40.169 13:07:06 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 703879' 00:45:40.169 killing process with pid 703879 00:45:40.169 13:07:06 keyring_linux -- common/autotest_common.sh@969 -- # kill 703879 00:45:40.169 Received shutdown signal, test time was about 1.000000 seconds 00:45:40.169 00:45:40.169 
Latency(us) 00:45:40.169 [2024-12-16T12:07:06.236Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:40.169 [2024-12-16T12:07:06.236Z] =================================================================================================================== 00:45:40.169 [2024-12-16T12:07:06.236Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:45:40.169 13:07:06 keyring_linux -- common/autotest_common.sh@974 -- # wait 703879 00:45:40.428 13:07:06 keyring_linux -- keyring/linux.sh@42 -- # killprocess 703874 00:45:40.428 13:07:06 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 703874 ']' 00:45:40.428 13:07:06 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 703874 00:45:40.428 13:07:06 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:45:40.428 13:07:06 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:45:40.428 13:07:06 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 703874 00:45:40.428 13:07:06 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:45:40.428 13:07:06 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:45:40.428 13:07:06 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 703874' 00:45:40.428 killing process with pid 703874 00:45:40.428 13:07:06 keyring_linux -- common/autotest_common.sh@969 -- # kill 703874 00:45:40.428 13:07:06 keyring_linux -- common/autotest_common.sh@974 -- # wait 703874 00:45:40.688 00:45:40.688 real 0m4.332s 00:45:40.688 user 0m8.163s 00:45:40.688 sys 0m1.445s 00:45:40.688 13:07:06 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:40.688 13:07:06 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:45:40.688 ************************************ 00:45:40.688 END TEST keyring_linux 00:45:40.688 ************************************ 00:45:40.688 13:07:06 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:45:40.688 13:07:06 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:45:40.688 13:07:06 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:45:40.688 13:07:06 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:45:40.688 13:07:06 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:45:40.688 13:07:06 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:45:40.688 13:07:06 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:45:40.688 13:07:06 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:45:40.688 13:07:06 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:45:40.688 13:07:06 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:45:40.688 13:07:06 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:45:40.688 13:07:06 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:45:40.688 13:07:06 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:45:40.688 13:07:06 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:45:40.688 13:07:06 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:45:40.688 13:07:06 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:45:40.688 13:07:06 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:45:40.688 13:07:06 -- common/autotest_common.sh@724 -- # xtrace_disable 00:45:40.688 13:07:06 -- common/autotest_common.sh@10 -- # set +x 00:45:40.688 13:07:06 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:45:40.688 13:07:06 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:45:40.688 13:07:06 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:45:40.688 13:07:06 -- common/autotest_common.sh@10 -- # set +x 00:45:45.962 INFO: APP EXITING 00:45:45.962 INFO: 
killing all VMs 00:45:45.962 INFO: killing vhost app 00:45:45.962 INFO: EXIT DONE 00:45:48.497 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:45:48.757 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:45:48.757 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:45:48.757 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:45:48.757 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:45:48.757 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:45:48.757 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:45:48.757 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:45:48.757 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:45:48.757 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:45:49.017 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:45:49.017 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:45:49.017 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:45:49.017 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:45:49.017 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:45:49.018 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:45:49.018 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:45:49.018 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:45:51.556 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:45:52.125 Cleaning 00:45:52.125 Removing: /var/run/dpdk/spdk0/config 00:45:52.125 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:45:52.125 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:45:52.125 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:45:52.125 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:45:52.125 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:45:52.125 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:45:52.125 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:45:52.125 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:45:52.125 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:45:52.125 Removing: /var/run/dpdk/spdk0/hugepage_info 00:45:52.125 Removing: /var/run/dpdk/spdk1/config 00:45:52.125 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:45:52.125 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:45:52.125 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:45:52.125 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:45:52.125 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:45:52.125 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:45:52.125 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:45:52.125 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:45:52.125 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:45:52.125 Removing: /var/run/dpdk/spdk1/hugepage_info 00:45:52.125 Removing: /var/run/dpdk/spdk2/config 00:45:52.125 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:45:52.125 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:45:52.125 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:45:52.125 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:45:52.125 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:45:52.125 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:45:52.125 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:45:52.125 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:45:52.125 Removing: 
/var/run/dpdk/spdk2/fbarray_memzone 00:45:52.125 Removing: /var/run/dpdk/spdk2/hugepage_info 00:45:52.125 Removing: /var/run/dpdk/spdk3/config 00:45:52.125 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:45:52.125 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:45:52.125 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:45:52.125 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:45:52.125 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:45:52.125 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:45:52.125 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:45:52.125 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:45:52.125 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:45:52.125 Removing: /var/run/dpdk/spdk3/hugepage_info 00:45:52.125 Removing: /var/run/dpdk/spdk4/config 00:45:52.125 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:45:52.125 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:45:52.125 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:45:52.125 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:45:52.125 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:45:52.125 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:45:52.125 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:45:52.125 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:45:52.125 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:45:52.125 Removing: /var/run/dpdk/spdk4/hugepage_info 00:45:52.125 Removing: /dev/shm/bdev_svc_trace.1 00:45:52.125 Removing: /dev/shm/nvmf_trace.0 00:45:52.125 Removing: /dev/shm/spdk_tgt_trace.pid152310 00:45:52.125 Removing: /var/run/dpdk/spdk0 00:45:52.384 Removing: /var/run/dpdk/spdk1 00:45:52.384 Removing: /var/run/dpdk/spdk2 00:45:52.384 Removing: /var/run/dpdk/spdk3 00:45:52.384 Removing: /var/run/dpdk/spdk4 00:45:52.384 Removing: /var/run/dpdk/spdk_pid150044 00:45:52.384 Removing: /var/run/dpdk/spdk_pid151164 00:45:52.384 Removing: /var/run/dpdk/spdk_pid152310 00:45:52.384 Removing: /var/run/dpdk/spdk_pid152890 00:45:52.384 Removing: /var/run/dpdk/spdk_pid153816 00:45:52.384 Removing: /var/run/dpdk/spdk_pid153895 00:45:52.384 Removing: /var/run/dpdk/spdk_pid154859 00:45:52.384 Removing: /var/run/dpdk/spdk_pid155023 00:45:52.384 Removing: /var/run/dpdk/spdk_pid155243 00:45:52.384 Removing: /var/run/dpdk/spdk_pid156923 00:45:52.384 Removing: /var/run/dpdk/spdk_pid158014 00:45:52.384 Removing: /var/run/dpdk/spdk_pid158456 00:45:52.384 Removing: /var/run/dpdk/spdk_pid158629 00:45:52.384 Removing: /var/run/dpdk/spdk_pid158848 00:45:52.384 Removing: /var/run/dpdk/spdk_pid159137 00:45:52.384 Removing: /var/run/dpdk/spdk_pid159384 00:45:52.384 Removing: /var/run/dpdk/spdk_pid159627 00:45:52.384 Removing: /var/run/dpdk/spdk_pid159915 00:45:52.384 Removing: /var/run/dpdk/spdk_pid160661 00:45:52.384 Removing: /var/run/dpdk/spdk_pid163823 00:45:52.384 Removing: /var/run/dpdk/spdk_pid163940 00:45:52.384 Removing: /var/run/dpdk/spdk_pid164137 00:45:52.384 Removing: /var/run/dpdk/spdk_pid164338 00:45:52.384 Removing: /var/run/dpdk/spdk_pid164637 00:45:52.384 Removing: /var/run/dpdk/spdk_pid164832 00:45:52.384 Removing: /var/run/dpdk/spdk_pid165246 00:45:52.384 Removing: /var/run/dpdk/spdk_pid165328 00:45:52.384 Removing: /var/run/dpdk/spdk_pid165584 00:45:52.384 Removing: /var/run/dpdk/spdk_pid165593 00:45:52.384 Removing: /var/run/dpdk/spdk_pid165846 00:45:52.384 Removing: /var/run/dpdk/spdk_pid165862 00:45:52.384 Removing: /var/run/dpdk/spdk_pid166417 
00:45:52.384 Removing: /var/run/dpdk/spdk_pid166660
00:45:52.384 Removing: /var/run/dpdk/spdk_pid166958
00:45:52.384 Removing: /var/run/dpdk/spdk_pid170632
00:45:52.384 Removing: /var/run/dpdk/spdk_pid175067
00:45:52.384 Removing: /var/run/dpdk/spdk_pid185041
00:45:52.384 Removing: /var/run/dpdk/spdk_pid185750
00:45:52.384 Removing: /var/run/dpdk/spdk_pid190432
00:45:52.384 Removing: /var/run/dpdk/spdk_pid190847
00:45:52.384 Removing: /var/run/dpdk/spdk_pid195110
00:45:52.384 Removing: /var/run/dpdk/spdk_pid200934
00:45:52.384 Removing: /var/run/dpdk/spdk_pid203508
00:45:52.384 Removing: /var/run/dpdk/spdk_pid213783
00:45:52.384 Removing: /var/run/dpdk/spdk_pid222652
00:45:52.384 Removing: /var/run/dpdk/spdk_pid224468
00:45:52.384 Removing: /var/run/dpdk/spdk_pid225379
00:45:52.384 Removing: /var/run/dpdk/spdk_pid242576
00:45:52.384 Removing: /var/run/dpdk/spdk_pid246611
00:45:52.384 Removing: /var/run/dpdk/spdk_pid328706
00:45:52.384 Removing: /var/run/dpdk/spdk_pid333884
00:45:52.384 Removing: /var/run/dpdk/spdk_pid339701
00:45:52.384 Removing: /var/run/dpdk/spdk_pid345360
00:45:52.384 Removing: /var/run/dpdk/spdk_pid345375
00:45:52.384 Removing: /var/run/dpdk/spdk_pid346252
00:45:52.384 Removing: /var/run/dpdk/spdk_pid347140
00:45:52.384 Removing: /var/run/dpdk/spdk_pid348026
00:45:52.384 Removing: /var/run/dpdk/spdk_pid348477
00:45:52.384 Removing: /var/run/dpdk/spdk_pid348482
00:45:52.384 Removing: /var/run/dpdk/spdk_pid348707
00:45:52.385 Removing: /var/run/dpdk/spdk_pid348932
00:45:52.385 Removing: /var/run/dpdk/spdk_pid348934
00:45:52.385 Removing: /var/run/dpdk/spdk_pid349816
00:45:52.385 Removing: /var/run/dpdk/spdk_pid350703
00:45:52.385 Removing: /var/run/dpdk/spdk_pid351452
00:45:52.385 Removing: /var/run/dpdk/spdk_pid352046
00:45:52.385 Removing: /var/run/dpdk/spdk_pid352048
00:45:52.385 Removing: /var/run/dpdk/spdk_pid352275
00:45:52.644 Removing: /var/run/dpdk/spdk_pid353376
00:45:52.644 Removing: /var/run/dpdk/spdk_pid354362
00:45:52.644 Removing: /var/run/dpdk/spdk_pid362818
00:45:52.644 Removing: /var/run/dpdk/spdk_pid391199
00:45:52.644 Removing: /var/run/dpdk/spdk_pid395999
00:45:52.644 Removing: /var/run/dpdk/spdk_pid397555
00:45:52.644 Removing: /var/run/dpdk/spdk_pid399325
00:45:52.644 Removing: /var/run/dpdk/spdk_pid399353
00:45:52.644 Removing: /var/run/dpdk/spdk_pid399581
00:45:52.644 Removing: /var/run/dpdk/spdk_pid399679
00:45:52.644 Removing: /var/run/dpdk/spdk_pid400091
00:45:52.644 Removing: /var/run/dpdk/spdk_pid401860
00:45:52.644 Removing: /var/run/dpdk/spdk_pid402652
00:45:52.644 Removing: /var/run/dpdk/spdk_pid403087
00:45:52.644 Removing: /var/run/dpdk/spdk_pid405215
00:45:52.644 Removing: /var/run/dpdk/spdk_pid405595
00:45:52.644 Removing: /var/run/dpdk/spdk_pid406300
00:45:52.644 Removing: /var/run/dpdk/spdk_pid410497
00:45:52.644 Removing: /var/run/dpdk/spdk_pid415786
00:45:52.644 Removing: /var/run/dpdk/spdk_pid415787
00:45:52.644 Removing: /var/run/dpdk/spdk_pid415788
00:45:52.644 Removing: /var/run/dpdk/spdk_pid419494
00:45:52.644 Removing: /var/run/dpdk/spdk_pid423182
00:45:52.644 Removing: /var/run/dpdk/spdk_pid428039
00:45:52.644 Removing: /var/run/dpdk/spdk_pid463171
00:45:52.644 Removing: /var/run/dpdk/spdk_pid466983
00:45:52.644 Removing: /var/run/dpdk/spdk_pid473569
00:45:52.644 Removing: /var/run/dpdk/spdk_pid474713
00:45:52.644 Removing: /var/run/dpdk/spdk_pid476124
00:45:52.644 Removing: /var/run/dpdk/spdk_pid477407
00:45:52.644 Removing: /var/run/dpdk/spdk_pid482073
00:45:52.644 Removing: /var/run/dpdk/spdk_pid485860
00:45:52.644 Removing: /var/run/dpdk/spdk_pid493163
00:45:52.644 Removing: /var/run/dpdk/spdk_pid493290
00:45:52.644 Removing: /var/run/dpdk/spdk_pid497695
00:45:52.644 Removing: /var/run/dpdk/spdk_pid497918
00:45:52.644 Removing: /var/run/dpdk/spdk_pid498136
00:45:52.644 Removing: /var/run/dpdk/spdk_pid498485
00:45:52.644 Removing: /var/run/dpdk/spdk_pid498589
00:45:52.644 Removing: /var/run/dpdk/spdk_pid499939
00:45:52.644 Removing: /var/run/dpdk/spdk_pid501497
00:45:52.644 Removing: /var/run/dpdk/spdk_pid503038
00:45:52.644 Removing: /var/run/dpdk/spdk_pid504679
00:45:52.644 Removing: /var/run/dpdk/spdk_pid506352
00:45:52.644 Removing: /var/run/dpdk/spdk_pid507898
00:45:52.644 Removing: /var/run/dpdk/spdk_pid514130
00:45:52.644 Removing: /var/run/dpdk/spdk_pid514688
00:45:52.644 Removing: /var/run/dpdk/spdk_pid516383
00:45:52.644 Removing: /var/run/dpdk/spdk_pid517393
00:45:52.644 Removing: /var/run/dpdk/spdk_pid522980
00:45:52.644 Removing: /var/run/dpdk/spdk_pid525487
00:45:52.644 Removing: /var/run/dpdk/spdk_pid530705
00:45:52.644 Removing: /var/run/dpdk/spdk_pid535928
00:45:52.644 Removing: /var/run/dpdk/spdk_pid544299
00:45:52.644 Removing: /var/run/dpdk/spdk_pid551419
00:45:52.644 Removing: /var/run/dpdk/spdk_pid551422
00:45:52.644 Removing: /var/run/dpdk/spdk_pid570898
00:45:52.644 Removing: /var/run/dpdk/spdk_pid571358
00:45:52.644 Removing: /var/run/dpdk/spdk_pid571820
00:45:52.644 Removing: /var/run/dpdk/spdk_pid572489
00:45:52.644 Removing: /var/run/dpdk/spdk_pid573001
00:45:52.644 Removing: /var/run/dpdk/spdk_pid573662
00:45:52.644 Removing: /var/run/dpdk/spdk_pid574137
00:45:52.644 Removing: /var/run/dpdk/spdk_pid574647
00:45:52.644 Removing: /var/run/dpdk/spdk_pid578765
00:45:52.644 Removing: /var/run/dpdk/spdk_pid578988
00:45:52.644 Removing: /var/run/dpdk/spdk_pid584885
00:45:52.644 Removing: /var/run/dpdk/spdk_pid584953
00:45:52.644 Removing: /var/run/dpdk/spdk_pid590110
00:45:52.644 Removing: /var/run/dpdk/spdk_pid594253
00:45:52.644 Removing: /var/run/dpdk/spdk_pid604148
00:45:52.644 Removing: /var/run/dpdk/spdk_pid604695
00:45:52.644 Removing: /var/run/dpdk/spdk_pid608822
00:45:52.904 Removing: /var/run/dpdk/spdk_pid609098
00:45:52.904 Removing: /var/run/dpdk/spdk_pid613051
00:45:52.904 Removing: /var/run/dpdk/spdk_pid618556
00:45:52.904 Removing: /var/run/dpdk/spdk_pid621050
00:45:52.904 Removing: /var/run/dpdk/spdk_pid630787
00:45:52.904 Removing: /var/run/dpdk/spdk_pid639293
00:45:52.904 Removing: /var/run/dpdk/spdk_pid640844
00:45:52.904 Removing: /var/run/dpdk/spdk_pid641736
00:45:52.904 Removing: /var/run/dpdk/spdk_pid657876
00:45:52.904 Removing: /var/run/dpdk/spdk_pid661694
00:45:52.904 Removing: /var/run/dpdk/spdk_pid664366
00:45:52.904 Removing: /var/run/dpdk/spdk_pid671898
00:45:52.904 Removing: /var/run/dpdk/spdk_pid671954
00:45:52.904 Removing: /var/run/dpdk/spdk_pid676917
00:45:52.904 Removing: /var/run/dpdk/spdk_pid678677
00:45:52.904 Removing: /var/run/dpdk/spdk_pid680514
00:45:52.904 Removing: /var/run/dpdk/spdk_pid681643
00:45:52.904 Removing: /var/run/dpdk/spdk_pid683634
00:45:52.904 Removing: /var/run/dpdk/spdk_pid684680
00:45:52.904 Removing: /var/run/dpdk/spdk_pid693800
00:45:52.904 Removing: /var/run/dpdk/spdk_pid694244
00:45:52.904 Removing: /var/run/dpdk/spdk_pid694693
00:45:52.904 Removing: /var/run/dpdk/spdk_pid696964
00:45:52.904 Removing: /var/run/dpdk/spdk_pid697492
00:45:52.904 Removing: /var/run/dpdk/spdk_pid698021
00:45:52.904 Removing: /var/run/dpdk/spdk_pid701853
00:45:52.904 Removing: /var/run/dpdk/spdk_pid701863
00:45:52.904 Removing: /var/run/dpdk/spdk_pid703333
00:45:52.904 Removing: /var/run/dpdk/spdk_pid703874
00:45:52.904 Removing: /var/run/dpdk/spdk_pid703879
00:45:52.904 Clean
00:45:52.904 13:07:18 -- common/autotest_common.sh@1451 -- # return 0
00:45:52.904 13:07:18 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup
00:45:52.904 13:07:18 -- common/autotest_common.sh@730 -- # xtrace_disable
00:45:52.904 13:07:18 -- common/autotest_common.sh@10 -- # set +x
00:45:52.904 13:07:18 -- spdk/autotest.sh@387 -- # timing_exit autotest
00:45:52.904 13:07:18 -- common/autotest_common.sh@730 -- # xtrace_disable
00:45:52.904 13:07:18 -- common/autotest_common.sh@10 -- # set +x
00:45:52.904 13:07:18 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:45:52.904 13:07:18 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:45:52.904 13:07:18 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:45:52.904 13:07:18 -- spdk/autotest.sh@392 -- # [[ y == y ]]
00:45:52.904 13:07:18 -- spdk/autotest.sh@394 -- # hostname
00:45:52.904 13:07:18 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-03 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:45:53.163 geninfo: WARNING: invalid characters removed from testname!
00:46:15.097 13:07:39 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:46:16.034 13:07:41 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:46:17.940 13:07:43 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:46:19.845 13:07:45 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:46:21.749 13:07:47 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:46:23.127 13:07:49 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:46:25.031 13:07:51 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
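
The lcov invocations above are the whole of the coverage post-processing pass: capture the counters left by the test run, merge them with the pre-test baseline, then strip paths that should not count toward coverage. A minimal sketch of the same sequence, with $SPDK and $OUT as shorthand for the workspace paths in the log and assuming $LCOV_OPTS carries the --rc switches the log repeats on every call:

    # Sketch of the coverage post-processing traced above (paths from the log).
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    OUT=$SPDK/../output
    # Capture counters produced by the test run (cov_base.info was captured before the tests).
    lcov $LCOV_OPTS -q -c --no-external -d "$SPDK" -t spdk-wfp-03 -o "$OUT/cov_test.info"
    # Merge the baseline and the test capture into one tracefile.
    lcov $LCOV_OPTS -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
    # Remove records for code that should not count (submodules, system headers, tools).
    lcov $LCOV_OPTS -q -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"
    lcov $LCOV_OPTS -q -r "$OUT/cov_total.info" --ignore-errors unused,unused '/usr/*' -o "$OUT/cov_total.info"
    lcov $LCOV_OPTS -q -r "$OUT/cov_total.info" '*/examples/vmd/*' -o "$OUT/cov_total.info"

Filtering in place (-r with the same file as input and -o output) keeps a single tracefile on disk, which is why the log shows cov_total.info as both source and destination of every removal step.
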
00:46:25.031 13:07:51 -- common/autotest_common.sh@1680 -- $ [[ y == y ]]
00:46:25.031 13:07:51 -- common/autotest_common.sh@1681 -- $ lcov --version
00:46:25.031 13:07:51 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}'
00:46:25.291 13:07:51 -- common/autotest_common.sh@1681 -- $ lt 1.15 2
00:46:25.291 13:07:51 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2
00:46:25.291 13:07:51 -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:46:25.291 13:07:51 -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:46:25.291 13:07:51 -- scripts/common.sh@336 -- $ IFS=.-:
00:46:25.291 13:07:51 -- scripts/common.sh@336 -- $ read -ra ver1
00:46:25.291 13:07:51 -- scripts/common.sh@337 -- $ IFS=.-:
00:46:25.291 13:07:51 -- scripts/common.sh@337 -- $ read -ra ver2
00:46:25.291 13:07:51 -- scripts/common.sh@338 -- $ local 'op=<'
00:46:25.291 13:07:51 -- scripts/common.sh@340 -- $ ver1_l=2
00:46:25.291 13:07:51 -- scripts/common.sh@341 -- $ ver2_l=1
00:46:25.291 13:07:51 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:46:25.291 13:07:51 -- scripts/common.sh@344 -- $ case "$op" in
00:46:25.291 13:07:51 -- scripts/common.sh@345 -- $ : 1
00:46:25.291 13:07:51 -- scripts/common.sh@364 -- $ (( v = 0 ))
00:46:25.291 13:07:51 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:46:25.291 13:07:51 -- scripts/common.sh@365 -- $ decimal 1
00:46:25.291 13:07:51 -- scripts/common.sh@353 -- $ local d=1
00:46:25.291 13:07:51 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:46:25.291 13:07:51 -- scripts/common.sh@355 -- $ echo 1
00:46:25.291 13:07:51 -- scripts/common.sh@365 -- $ ver1[v]=1
00:46:25.291 13:07:51 -- scripts/common.sh@366 -- $ decimal 2
00:46:25.291 13:07:51 -- scripts/common.sh@353 -- $ local d=2
00:46:25.291 13:07:51 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:46:25.291 13:07:51 -- scripts/common.sh@355 -- $ echo 2
00:46:25.291 13:07:51 -- scripts/common.sh@366 -- $ ver2[v]=2
00:46:25.291 13:07:51 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:46:25.291 13:07:51 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:46:25.291 13:07:51 -- scripts/common.sh@368 -- $ return 0
00:46:25.291 13:07:51 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:46:25.291 13:07:51 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS=
00:46:25.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:46:25.291 --rc genhtml_branch_coverage=1
00:46:25.291 --rc genhtml_function_coverage=1
00:46:25.291 --rc genhtml_legend=1
00:46:25.291 --rc geninfo_all_blocks=1
00:46:25.291 --rc geninfo_unexecuted_blocks=1
00:46:25.291
00:46:25.291 '
00:46:25.291 13:07:51 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS='
00:46:25.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:46:25.291 --rc genhtml_branch_coverage=1
00:46:25.291 --rc genhtml_function_coverage=1
00:46:25.291 --rc genhtml_legend=1
00:46:25.291 --rc geninfo_all_blocks=1
00:46:25.291 --rc geninfo_unexecuted_blocks=1
00:46:25.291
00:46:25.291 '
00:46:25.291 13:07:51 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov
00:46:25.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:46:25.291 --rc genhtml_branch_coverage=1
00:46:25.291 --rc genhtml_function_coverage=1
00:46:25.291 --rc genhtml_legend=1
00:46:25.291 --rc geninfo_all_blocks=1
00:46:25.291 --rc geninfo_unexecuted_blocks=1
00:46:25.291
00:46:25.291 '
00:46:25.291 13:07:51 -- common/autotest_common.sh@1695 -- $ LCOV='lcov
00:46:25.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:46:25.291 --rc genhtml_branch_coverage=1
00:46:25.291 --rc genhtml_function_coverage=1
00:46:25.291 --rc genhtml_legend=1
00:46:25.291 --rc geninfo_all_blocks=1
00:46:25.291 --rc geninfo_unexecuted_blocks=1
00:46:25.291
00:46:25.291 '
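
The scripts/common.sh trace above is a component-wise version comparison: both versions are split on '.', '-' and ':', each component is validated by the decimal helper, and the first unequal component decides the result. A simplified re-statement of that logic, not a copy of scripts/common.sh (the real helper also normalizes non-numeric components):

    # Sketch of the cmp_versions logic walked through in the trace above.
    lt() { cmp_versions "$1" "<" "$2"; }
    cmp_versions() {
        local IFS=.-: op=$2
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            # Missing components compare as 0, so 1.15 vs 2 is (1,15) vs (2,0).
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == ">" || $op == ">=" ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == "<" || $op == "<=" ]]; return; }
        done
        [[ $op == "==" || $op == "<=" || $op == ">=" ]]   # all components equal
    }

Here lt 1.15 2 succeeds because 1 < 2 in the first component, which is the "return 0" the trace shows; the installed lcov being older than 2 is why the log falls back to the branch/function --rc option set exported just above.
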
00:46:25.291 13:07:51 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:46:25.292 13:07:51 -- scripts/common.sh@15 -- $ shopt -s extglob
00:46:25.292 13:07:51 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:46:25.292 13:07:51 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:46:25.292 13:07:51 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:46:25.292 13:07:51 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:46:25.292 13:07:51 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:46:25.292 13:07:51 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:46:25.292 13:07:51 -- paths/export.sh@5 -- $ export PATH
00:46:25.292 13:07:51 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:46:25.292 13:07:51 -- common/autobuild_common.sh@478 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:46:25.292 13:07:51 -- common/autobuild_common.sh@479 -- $ date +%s
00:46:25.292 13:07:51 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1734350871.XXXXXX
00:46:25.292 13:07:51 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1734350871.bt63WM
00:46:25.292 13:07:51 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
00:46:25.292 13:07:51 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']'
00:46:25.292 13:07:51 -- common/autobuild_common.sh@486 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:46:25.292 13:07:51 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
00:46:25.292 13:07:51 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:46:25.292 13:07:51 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:46:25.292 13:07:51 -- common/autobuild_common.sh@495 -- $ get_config_params
00:46:25.292 13:07:51 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:46:25.292 13:07:51 -- common/autotest_common.sh@10 -- $ set +x
00:46:25.292 13:07:51 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
00:46:25.292 13:07:51 -- common/autobuild_common.sh@497 -- $ start_monitor_resources
00:46:25.292 13:07:51 -- pm/common@17 -- $ local monitor
00:46:25.292 13:07:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:46:25.292 13:07:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:46:25.292 13:07:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:46:25.292 13:07:51 -- pm/common@21 -- $ date +%s
00:46:25.292 13:07:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:46:25.292 13:07:51 -- pm/common@21 -- $ date +%s
00:46:25.292 13:07:51 -- pm/common@25 -- $ sleep 1
00:46:25.292 13:07:51 -- pm/common@21 -- $ date +%s
00:46:25.292 13:07:51 -- pm/common@21 -- $ date +%s
00:46:25.292 13:07:51 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1734350871
00:46:25.292 13:07:51 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1734350871
00:46:25.292 13:07:51 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1734350871
00:46:25.292 13:07:51 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1734350871
00:46:25.292 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1734350871_collect-cpu-load.pm.log
00:46:25.292 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1734350871_collect-vmstat.pm.log
00:46:25.292 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1734350871_collect-cpu-temp.pm.log
00:46:25.292 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1734350871_collect-bmc-pm.bmc.pm.log
00:46:26.288 13:07:52 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
00:46:26.289 13:07:52 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:46:26.289 13:07:52 -- spdk/autopackage.sh@14 -- $ timing_finish
00:46:26.289 13:07:52 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:46:26.289 13:07:52 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:46:26.289 13:07:52 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
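
start_monitor_resources above launches each collector in the background and leaves a <name>.pid file under the power/ output directory; the EXIT trap installed right after is what tears them down below. A generic sketch of that pidfile pattern, with hypothetical helper names (only the power/ directory and the TERM-on-exit behavior follow the log; the real collectors daemonize and write their own pid files):

    # Sketch of the pm/common start/stop contract, not the actual script.
    POWER=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
    start_monitor() {                 # run one collector, remember its PID
        local name=$1; shift
        "$@" > "$POWER/$name.log" 2>&1 &
        echo $! > "$POWER/$name.pid"
    }
    stop_monitors() {                 # what signal_monitor_resources TERM boils down to
        local pidfile
        for pidfile in "$POWER"/*.pid; do
            [[ -e $pidfile ]] && kill -TERM "$(cat "$pidfile")"
        done
    }
    trap stop_monitors EXIT           # mirrors 'trap stop_monitor_resources EXIT'

Keying the teardown on the pid files rather than on job control means any shell (or a later cleanup stage) can stop the collectors, which is exactly how the trace below signals them one by one.
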
00:46:26.289 13:07:52 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:46:26.289 13:07:52 -- pm/common@29 -- $ signal_monitor_resources TERM
00:46:26.289 13:07:52 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:46:26.289 13:07:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:46:26.289 13:07:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:46:26.289 13:07:52 -- pm/common@44 -- $ pid=711045
00:46:26.289 13:07:52 -- pm/common@50 -- $ kill -TERM 711045
00:46:26.289 13:07:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:46:26.289 13:07:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:46:26.289 13:07:52 -- pm/common@44 -- $ pid=711047
00:46:26.289 13:07:52 -- pm/common@50 -- $ kill -TERM 711047
00:46:26.289 13:07:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:46:26.289 13:07:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:46:26.289 13:07:52 -- pm/common@44 -- $ pid=711048
00:46:26.289 13:07:52 -- pm/common@50 -- $ kill -TERM 711048
00:46:26.289 13:07:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:46:26.289 13:07:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:46:26.289 13:07:52 -- pm/common@44 -- $ pid=711075
00:46:26.289 13:07:52 -- pm/common@50 -- $ sudo -E kill -TERM 711075
00:46:26.289 + [[ -n 56510 ]]
00:46:26.289 + sudo kill 56510
00:46:26.326 [Pipeline] }
00:46:26.341 [Pipeline] // stage
00:46:26.347 [Pipeline] }
00:46:26.361 [Pipeline] // timeout
00:46:26.366 [Pipeline] }
00:46:26.380 [Pipeline] // catchError
00:46:26.386 [Pipeline] }
00:46:26.400 [Pipeline] // wrap
00:46:26.407 [Pipeline] }
00:46:26.420 [Pipeline] // catchError
00:46:26.430 [Pipeline] stage
00:46:26.432 [Pipeline] { (Epilogue)
00:46:26.445 [Pipeline] catchError
00:46:26.447 [Pipeline] {
00:46:26.460 [Pipeline] echo
00:46:26.462 Cleanup processes
00:46:26.467 [Pipeline] sh
00:46:26.813 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:46:26.813 711206 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:46:26.813 711550 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:46:26.826 [Pipeline] sh
00:46:27.111 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:46:27.111 ++ grep -v 'sudo pgrep'
00:46:27.111 ++ awk '{print $1}'
00:46:27.111 + sudo kill -9 711206
00:46:27.123 [Pipeline] sh
00:46:27.408 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:46:39.632 [Pipeline] sh
00:46:39.917 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:46:39.917 Artifacts sizes are good
00:46:39.932 [Pipeline] archiveArtifacts
00:46:39.940 Archiving artifacts
00:46:40.355 [Pipeline] sh
00:46:40.641 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:46:40.656 [Pipeline] cleanWs
00:46:40.666 [WS-CLEANUP] Deleting project workspace...
00:46:40.666 [WS-CLEANUP] Deferred wipeout is used...
00:46:40.673 [WS-CLEANUP] done
00:46:40.675 [Pipeline] }
00:46:40.692 [Pipeline] // catchError
00:46:40.704 [Pipeline] sh
00:46:40.989 + logger -p user.info -t JENKINS-CI
00:46:40.998 [Pipeline] }
00:46:41.012 [Pipeline] // stage
00:46:41.017 [Pipeline] }
00:46:41.032 [Pipeline] // node
00:46:41.037 [Pipeline] End of Pipeline
00:46:41.097 Finished: SUCCESS
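
The epilogue's process cleanup is the same pgrep/grep/awk/kill idiom used in the prologue: list everything whose command line matches the workspace path, drop the pgrep invocation itself, and force-kill what remains. Restated as a one-liner sketch (xargs -r and the trailing || true are illustrative; the pipeline captures the PID list into a variable and falls back to true when nothing is left to kill):

    # Sketch of the workspace process cleanup seen in the epilogue above.
    sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk \
        | grep -v 'sudo pgrep' \
        | awk '{print $1}' \
        | xargs -r sudo kill -9 || true
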